Biology 103 Web Paper Forum |
Selective Advantages of the Mutant CFTR Gene Name: Erin Myers Date: 2002-09-26 13:41:55 Link to this Comment: 2910 |
Cystic Fibrosis is the most common lethal genetic disease among Caucasian people. Cystic Fibrosis is an autosomal recessive disease: for a person to have Cystic Fibrosis, both of his or her parents must carry the mutant gene (ignoring the minute chance of spontaneous mutation), and he or she must inherit two mutant genes, one from each parent, leaving no normal CFTR gene. When two carriers have a baby there is a 25% chance their baby will not carry a mutant CFTR gene (homozygous dominant), a 25% chance their baby will have Cystic Fibrosis, carrying two mutant CFTR genes (homozygous recessive), and a 50% chance their baby will be a carrier of a mutant CFTR gene (heterozygous), as illustrated in the diagram below.
                              MOTHER
                      normal (N)            mutant (n)
FATHER    N       non-carrier (NN)         carrier (Nn)
          n          carrier (Nn)       Cystic Fibrosis (nn)
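For anyone who wants to verify the 25/25/50 arithmetic behind the diagram, the short Python sketch below (an illustration added here, not taken from the paper's sources) simply enumerates the four equally likely combinations of alleles that two carrier (Nn) parents can pass on and tallies the resulting genotypes.

```python
from collections import Counter
from itertools import product

mother_alleles = ["N", "n"]   # carrier mother: one normal, one mutant CFTR allele
father_alleles = ["N", "n"]   # carrier father: one normal, one mutant CFTR allele

# Each of the four allele combinations is equally likely (probability 1/4).
genotypes = Counter(
    "".join(sorted(pair)) for pair in product(mother_alleles, father_alleles)
)

for genotype, count in sorted(genotypes.items()):
    print(f"{genotype}: {count}/4 = {count / 4:.0%}")
# NN: 1/4 = 25% (non-carrier), Nn: 2/4 = 50% (carrier), nn: 1/4 = 25% (Cystic Fibrosis)
```

Running it reproduces the ratios in the table: one quarter NN, one half Nn, one quarter nn.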
Cystic Fibrosis
affects the respiratory and digestive systems.
The CFTR protein normally forms channels in cell membranes through which chloride ions flow. In the lungs this flow washes away bacteria, mucus and other debris. In the intestines it washes away pathogens and brings digestive enzymes into contact with food. In the sweat glands these channels recycle salt out of the glands and back into the skin before it is lost to the outside world (2). In a person with Cystic Fibrosis these channels do not work, so the body cannot perform these functions. Thick mucus builds up in the lungs, a prime breeding ground for bacteria. In the small intestine the enzymes that break down fat cannot reach the food, and digestive problems arise. On a hot day a person with Cystic Fibrosis is at risk of dehydration (10).
For years scientists have been studying the possible benefits of the mutant CFTR gene. In 1967 A.G. Knudsen, L. Wayne, and W.Y. Hallett published an article in the American Journal of Human Genetics. They collected data on the number of live offspring of the grandparents of CF children. They found that the mean number of offspring for grandparents of CF children (4.34) was higher than that for grandparents of a control group (3.43), with a standard error of only 0.30, and concluded that carrying one mutant CFTR gene was associated with greater reproductive success and was therefore selectively beneficial.
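As a rough, back-of-the-envelope check on those figures, assuming the quoted 0.30 is the standard error of the difference between the two means (the original paper's statistics may have been computed differently), the excess of 0.91 offspring sits about three standard errors above zero:

```python
from scipy.stats import norm

mean_cf_grandparents = 4.34   # mean live offspring, grandparents of CF children
mean_controls = 3.43          # mean live offspring, control grandparents
se_difference = 0.30          # assumed standard error of the difference in means

z = (mean_cf_grandparents - mean_controls) / se_difference
p_two_sided = 2 * norm.sf(z)   # probability of a gap this large arising by chance

print(f"z = {z:.2f}, two-sided p = {p_two_sided:.3f}")
# z is about 3.0 and p about 0.002, so the excess fertility is unlikely to be chance
```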
In more recent years scientists have identified an advantage that mutant CFTR carriers may have in surviving cholera. The lethal strain of the cholera bacterium, Vibrio cholerae, produces a toxin that binds to the cells of the small intestine, opening the transmembrane conductance regulator (CFTR) channels and pumping out considerable amounts of chloride ions and water, about five gallons a day (1). If the salt and water are not quickly replaced, the infected person dies of dehydration. Sherif Gabriel, a cell physiologist at UNC Chapel Hill, experimented with cholera in mice that carried the CF mutation. Not surprisingly, the intestines of cholera-infected mice with cystic fibrosis secreted no fluid; they lacked chloride channels. The remarkable discovery was that mice carrying one mutant CFTR gene secreted only half as much fluid as non-carrier mice. Gabriel concluded that in humans carrying a mutant CFTR gene, half as much fluid secretion may have been enough to flush the toxin from the intestines without succumbing to diarrhea, dehydration and death (2). This selective advantage during the many European cholera outbreaks may explain the high frequency of the gene mutation in Caucasian people of European descent. This argument, however, has been challenged recently on the basis of time.
Not enough time has passed since cholera reached Europe to account for the mutation's current frequency there.
The reason the frequency of CFTR heterozygotes (4) in the Caucasian population is so much higher, at 1 in 25, than in Hispanics (1/46), Blacks (1/60), and Asians (1/150) has to do with a comparative disadvantage that outweighs the advantage in the regions these peoples are indigenous to. Physiologist Paul Quinton of the
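A quick Hardy-Weinberg calculation (added here as an illustration; the shortcut of taking the mutant-allele frequency to be about half the carrier frequency is mine, not the sources') shows what those carrier rates imply for how often affected babies are born in each group:

```python
# carrier (heterozygote) frequencies reported above
carrier_freq = {
    "Caucasian": 1 / 25,
    "Hispanic": 1 / 46,
    "Black": 1 / 60,
    "Asian": 1 / 150,
}

for group, two_pq in carrier_freq.items():
    # Under Hardy-Weinberg the carrier frequency is 2pq, roughly 2q when q is small,
    # so the mutant-allele frequency q is about half the carrier frequency.
    q = two_pq / 2
    affected = q ** 2          # expected frequency of affected (nn) births
    print(f"{group}: q ~ {q:.4f}, about 1 in {round(1 / affected):,} births affected")
```

For the Caucasian figure this works out to roughly one affected birth in 2,500, with much rarer disease in the other groups, which is why the carrier advantage matters so much in populations of European descent.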
It seems that one way or another, dehydration is the deciding factor in the frequency of cystic fibrosis. In the cooler climate of Northern Europe the mutant CFTR gene protected people from many fatal diseases that cause diarrhea and dehydration (cholera, typhoid, E. coli), but in the warm climates of the Americas, Southern Europe, Africa and Asia this advantage was outweighed by the threat of heat-related dehydration. It is interesting to see the advantages of diversity even in something as small as a gene.
Internet Sources
1) How Cholera Became a Killer, the one deadly strain of Vibrio cholerae
2) Hidden Benefits, the 52,000-year survival of the mutant gene that causes CF
3) Cystic Fibrosis and Typhoid Fever, rejection of the cholera hypothesis
4) US Population Frequency, statistics on the frequency of CF affected and carriers
5) Selective Advantage of CF Heterozygotes, 1960s study of live births among CF carriers
6) Canadian Cystic Fibrosis Foundation, tons of basic facts with search option
7) WebMD Cystic Fibrosis, symptoms, cause, treatment, references
8) Cystic Fibrosis Research Directions, more sophisticated fact sheet
9) United Kingdom Cystic Fibrosis
10) Scientific American CF article, genetic defects underlying the disease
The Atkins' Diet: Friend or Foe? Name: Kathryn Ba Date: 2002-09-26 22:09:44 Link to this Comment: 2918 |
In a society that is continually obsessed with being thin and dieting, the following quote might seem like an intriguing promise:
FORGET THE FIGHT AGAINST FAT! BREAK THE SUGAR-STARCH HABIT TODAY AND ENJOY STEAK, EGGS, CHEESE, EVEN WINE AS YOU GET HEALTHY AND LOSE WEIGHT WITH SUGAR BUSTERS! (1).
Sounds good, doesn't it? All the dieter must do is eliminate carbohydrates and he or she will lose weight, right? If this sounds too good to be true, you are not alone in your hesitation. An increasing number of nutritionists and doctors are warning that diets that eliminate carbohydrates in favor of protein and fat are not effective and may be dangerous to one's health. As Sheila Kelly, a clinical dietician, says, "It's a seductive concept. Watch the pounds melt away while you eat all of the high-fat foods you want. Even better, don't bother watching your caloric intake or worrying about regaining your weight. All you have to do is avoid 'poison' carbohydrates" (2). The Atkins' Diet, one of the best known of the low-carbohydrate diet programs, promotes the idea that carbohydrates are an overweight person's barrier to losing weight (3). This essay will examine specifically what the Atkins' Diet calls for and the mounting body of evidence against low-carbohydrate diets.
In 1972 Dr. Robert Atkins published Dr. Atkins' Diet Revolution and in 1992 published Dr. Atkins' New Diet Revolution, an updated version of his first book. Atkins' books promote a controlled-carbohydrate diet and provide the dieter with a four-step program for losing weight (3). The first step is a 2-week "induction" period, during which one attempts to reduce his or her carbohydrate intake to less than 20 grams a day. During the remaining three steps the dieter incrementally raises his or her carbohydrate level, but never surpasses his or her "critical carbohydrate level." Non-carbohydrate foods are permitted whenever the dieter is hungry, and Atkins also recommends large amounts of nutritional supplements (4). By following these four steps, the dieter will induce ketosis, a process Atkins describes as equivalent to fat burning. When a person's body does not receive enough carbohydrates to burn for energy, it turns to fat for its energy. He says, "There is nothing harmful, abnormal or dangerous about ketosis" and that it is a natural process within the body. The dieter will only have to wait about two days for ketosis to begin, which, according to Atkins, explains why a dieter following the Atkins' Diet sees results so quickly. For whom is this diet safe? Atkins suggests that overweight people over the age of 12 can benefit from his diet (3).
Why does Atkins claim that carbohydrates are responsible for weight gain and play a role in the failure of many diets? A basic understanding of what carbohydrates are is necessary to answer this question. Carbohydrates are nutrient-rich starches and sugars that affect blood sugar, or glucose. Muscle and liver glycogen stores are fueled by carbohydrates, as is the brain (5). For this reason, the intake of carbohydrates in children is particularly important because it affects learning ability (6). Carbohydrates are the body's primary source of fuel, and, according to Atkins, if a person is attempting to lose fat but is simultaneously eating carbohydrates, he or she will use those carbohydrates as energy and the excess fat will remain. He says that a person must eliminate carbohydrates in order to induce the fat-burning process of ketosis, as outlined above (3).
The importance of carbohydrates in the diet is reflected by the government's "Food Pyramid." The foods at the base of the pyramid are considered the staple of a healthy diet: refined carbohydrates such as bread, rice, and pasta. At the top of the pyramid, among the foods that should be avoided or limited, are fats and oils (7). The American Heart Institute and the National Institutes of Health recommend that a balanced diet include 250 to 300 grams of carbohydrates a day, roughly 12 to 15 times the amount Atkins recommends during induction (2). Ross Feldman, an exercise physiologist, says, "there is no evidence that eating a diet rich in carbohydrates is associated with obesity" (5). Why does such a large discrepancy exist? Many sources do agree on one thing: the Atkins' Diet may temporarily help with weight loss, but it may also pose significant health risks.
The primary health risk of the Atkins' Diet is dehydration. After carbohydrates are significantly reduced, ketosis begins and the dieter initially loses liver glycogen. This store of carbohydrates is lost because the body does not have enough glucose to maintain blood sugar, so it turns to the liver glycogen. Glycogen is stored along with a large number of water molecules, and when the body converts glycogen to glucose, that water is lost from the body. This, rather than Atkins' claim that the initial weight loss is fat, explains much of the initial weight loss on the Atkins' Diet (1). The large amount of water loss poses the risk of dehydration, but it is not the most potentially severe consequence of the Atkins' Diet. The high fat content may put the dieter at risk for coronary heart disease, hyperlipidemia (high blood fat), and hypercholesterolemia (high blood cholesterol). The high protein content may put extra strain on the kidneys, which can lead to electrolyte imbalance and can decrease the kidneys' ability to absorb calcium, which could lead to the early stages of osteoporosis (5). A study conducted by the University of Kentucky, based on a computer analysis of a week's worth of sample Atkins' menus, reported that a dieter is at risk of cancer, among other serious risks (4).
One might wonder why the Atkins' Diet has been successful even though studies have exposed these serious risks. Perhaps the most obvious reason is that the rapid weight loss provides the dieter with rapid reinforcement for his or her weight-loss effort. The dieter might assume that the weight loss is fat reduction, as Atkins would have us believe, while ignoring possible health risks. But how long will the dieter enjoy the weight loss? Actually, no one knows. There have been no long-term studies on the effectiveness of the Atkins' Diet, even by Atkins. Atkins cites vague reasons that his diet has long-term worth by claiming that the permitted, low-carbohydrate food is so delicious that dieters would have little difficulty following his diet for an extended period of time. The longest amount of time that Atkins cites for successful weight-loss maintenance is six to twelve months. In fact, one study estimated the weight regain from the Atkins' Diet to be 96%. At any rate, a long-term study on the effectiveness of the Atkins' Diet is clearly needed, as many sources agree (1), (3), (4), (7).
Without results from a long-term study one cannot safely assume that low-carbohydrate diets, such as Atkins', are effective. The dieter is ultimately responsible for his or her own decision to ignore health risks in favor of shedding extra weight. As Keith Anderson, spokesperson for the American Dietetic Association, says, "We need to know much more before people start making claims...Shouldn't diet doctors prove safety first, rather than write books and then say 'OK, prove harm?'" (7). Instead of opting for an extreme dieting method, such as Atkins, one might benefit more from using common sense. There is no magical weight-loss program; a routine of exercise and a healthy, realistic, balanced diet is a dieter's best bet.
1)Cornell Cooperative Extension: Food and Nutrition, article entitled Low-Carbohydrate Diets: Heresy or Hype
2)HealthAtoZ.com, article entitled Low-Carb Diets Unhealthy Trend
3)Atkins Nutritionals: Home, the Atkins Homepage
4)Quackwatch Home Page, a critical article about low-carbohydrate diets
5)DiscoverFitness.com, an article about fad diets
6)CNN.com, an article entitled 'Extreme eating' may equal extreme problems
7)ABCNEWS.com, an article entitled The Low Fat Legend
The Rise of the Machines: The Controversy o... Name: Laura Bang Date: 2002-09-28 13:55:05 Link to this Comment: 2964 |
Robby, Gort, Rosie, T-800, C-3PO - what do these names have in common? They are some of science fiction's most memorable androids - artificial intelligence robots resembling humans - from Hollywood's imaginings of humans' experiments with creating artificial intelligence. (6) There are several branches of artificial intelligence (abbreviated 'AI'), but the one to be focused on for this paper is the branch of AI that is trying to imitate human life. If successful, these artificial humans would have a major impact on our way of life and how we view ourselves.
There are many different definitions of AI. Artificial intelligence, according to John McCarthy of Stanford University's Computer science department,
"is the science and engineering of making intelligent
machines, especially intelligent computer programs.
... Intelligence is the computational part of the ability to
achieve goals in the world. Varying kinds and degrees
of intelligence occur in people, many animals, and some machines." (1)
Scientists who work in the field of AI are primarily working to make intelligent machines, not androids or other machines that attempt to fully imitate human intelligence and behavior. They are first seeking to create intelligent computer programs that can interact with their users. (3)
Yet Hollywood and the science fiction genre in general most frequently portray the branch of AI dealing with the creation of artificial humans (6), and the primary definition of artificial intelligence in Merriam-Webster's Dictionary is "the capability of a machine to imitate intelligent human behavior." (4) One can conclude that this definition indicates the creation of androids and other artificial humans because it mentions behavior as well as intelligence, and behavior is a humanistic characteristic, not just a quality of an intelligent machine.
We as real humans are fascinated by the idea of an AI machine that can very closely imitate humans (at least this is the case in the U.S.). All five "Star Wars" movies are in the top fifteen of the 250 top-grossing movies in the U.S. (7) These movies all feature "droids" who have decidedly human characteristics, the most memorable of which is C-3PO, who also happens to look somewhat like a human coated in metal. Eleven of the top fifty science fiction movies (by popular vote, not the top-grossing) have humanistic robots or androids as key characters (7), and two of the American Film Institute's top 100 movies of all time are science fiction movies featuring humanistic robots, specifically C-3PO from the original "Star Wars" and HAL from "2001: A Space Odyssey." (8)
In most science fiction movies and books featuring AI robots, the focus is either on their lack of emotions or on their heightened bad emotions (such as jealousy, hatred, etc.). These AI machines are almost always seen as less than human, however closely they are able to imitate humans, most recently portrayed in the movies in 2001's "AI: Artificial Intelligence." (6) The real humans have trouble understanding the artificial humans they have created. In this sense, the controversy of AI somewhat resembles the controversy of cloning.
Clones are a small step in creating AI humans because the clones were not conceived naturally - they would never exist if we did not cause their conception, thus artificially creating them. If a human were successfully cloned, how would the clone feel, knowing that he/she was not created naturally, knowing that he/she is a DNA replica of someone else? And, perhaps more importantly, how would the naturally conceived humans treat the artificially conceived humans?
How would you treat someone if you found out that he/she was an AI human - how would you treat that person if he/she looked exactly like a human and your first impression was of a human, but then you found out that he/she was entirely built by humans?
The controversy over the creation of AI humans has been compared to the revolutionary stir caused by Charles Darwin's publications of The Origin of Species and The Descent of Man. (2) There are some who are excited by the prospect of AI humans; there are some who are scared that machines will enslave the real humans, like in the movie, "The Matrix" (7); and there are some who are still not sure what to believe. Some claim that "... because computers lack bodies and life experiences comparable to humans', intelligent systems will probably be inherently different from humans." (3) Others call AI research "incoherent ... impossible ... obscene, anti-human, and immoral." (1)
The repercussions of creating AI humans are manifold, but one of the most frightening is how our view of ourselves would change. If we are able to create true AI humans, then are we as real humans any different than machines? Are our minds more complex than computers or can humans really be imitated by AI? "... [A]ny brain, machine or other thing that has a mind must be composed of smaller things that cannot think at all. ... Are minds machines?" (3)
There is so much controversy about AI research, and this "fairy story is hardly past its 'once upon a time.'" (5) Current technology has barely begun to take its first tentative steps toward creating AI, and if AI humans become a reality at any point in our future, there will be still more questions to be answered. If we can create artificial life, then what are we? How do we know we are not someone else's AI "project"? If we manage to create the "perfect" AI human, then do we believe that they have souls and therefore are able to continue in an afterlife? And - much more disturbing to religion - what about God? If we believe that God created all living things, and then we create artificial life - which is still life - then do we become gods? It is these last troubling thoughts that frighten people the most about AI and also put the pressure on AI researchers - if they succeed, they will essentially overthrow God.
"With relief, with humiliation, with terror, he understood that he too was a mere appearance, dreamt by another."
~ from "The Circular Ruins" by Jorge Luis Borges (9)
1) What is Artificial Intelligence?
John McCarthy, Computer Science Department, Stanford University
Last updated: 20 July 2002
2) Artificial Intelligence and the Human Mind
Joseph M. Mellichamp, University of Alabama
last updated: 4 May 2002
3) AI Topics
AI Topics for students, teachers, journalists, and others interested in AI
Provided by the American Association for Artificial Intelligence (AAAI), 2002
4) Merriam-Webster Online Dictionary
5) AI Magazine, Volume 13, Number 4 (Winter 1992)
"Fairytales" - Allen Newell
6) Official Movie Site for Warner Brothers' "AI: Artificial Intelligence"
8) The American Film Institute's (AFI) "Top 100 Movies of All Time"
9) Borges, Jorge Luis. Labyrinths. New Directions Publishing Corp., New York: 1964.
10) Click here for the transcript of a chat I had with an AI robot online.
Williams Syndrome Name: Roseanne M Date: 2002-09-28 14:05:09 Link to this Comment: 2965 |
When I was 14 years old, my baby boy cousin was born. I was thrilled to have another cousin since I only had 2, both of whom were much older than I. However, as the years passed, I noticed that my cousin looked like neither my aunt nor my uncle; he had puffy eyes and thin lips that resembled nothing of his parents. Soon he was 3 years old and still not talking, aside from the fact that he mumbled words or beats to songs, which continued for the next few years. Compared to other children of his age, he was lighter in weight and very active, active to the extent of being violent and hurting others around him. It was obvious that smacking other children was his way of showing affection in order to make friends; he didn't realize what he was actually doing since he smiled and laughed while the other child cried. However, after noticing that he could not make friends this way, he would get rather irritated and run crying to his mom. My cousin grew more and more aggressive and impatient and, above all, because he still could not speak, my aunt and uncle could not send him to a 'regular' nursery school. When he turned 5, I asked my parents if he would ever learn to speak and what the consequences were. 'He has Williams Syndrome,' my parents answered, 'it is very rare, with no cure.'
It has been 7 years since my cousin became a part of my life, and I only recently learned what exactly was 'wrong' with him. Because this is such a rare disorder that many people have never heard of, I thought it would be a great opportunity to research the symptoms further and raise awareness among those who have never heard of Williams Syndrome before. My cousin is now 7 years old, with features and characteristics much like what I have researched below.
Williams Syndrome is caused by a deletion of material on one of the two #7 chromosomes that removes the gene that makes the protein elastin (a protein which provides strength and elasticity to vessel walls) (3). Named after cardiologist Dr. J.C.P. Williams of New Zealand, it was recognized in 1961 (2). Dr. Williams recognized a series of patients with similar distinctive physical and intellectual characteristics. It was soon discovered that Williams Syndrome is a very rare genetic disorder, occurring in about 1/25,000 births; the Williams Syndrome Foundation only hears of 75 cases a year (1), (4). The disorder is present at birth, but its facial features become more apparent with age. These features include a small upturned nose, long philtrum (upper lip length), wide mouth, full lips, small chin, and puffiness around the eyes. Blue- and green-eyed children with this syndrome can have a prominent "starburst" or white lacy pattern on their iris (1). A person with this disorder has a 50% chance of passing it on to his or her children. There is no cure for Williams Syndrome.
Those with Williams Syndrome have some degree of intellectual handicap. Children with Williams Syndrome experience developmental delays in milestones such as walking, talking and toilet training. After my cousin started to walk (at the age of 3), he walked on his toes instead of from heel to toe and tiptoed wherever he went, which he still continues to do. Distractibility is often a problem, but it gets better as they grow older. They also demonstrate intellectual strengths and weaknesses. Their strengths can be speech, long-term memory, and social skills, while their weaknesses can be fine motor skills and spatial relations. People with Williams Syndrome have extremely social personalities (2). I can recall a time when my family was at a restaurant and my cousin suddenly jumped from the table to say hello and waved at other children there. He does this to people of all ages, colors, and sexes. His friendly gesture puts a smile on everyone's face. People with Williams Syndrome have unique and expressive language skills, and are extremely polite. They are unafraid of strangers and show a greater interest in contact with adults than with children their own age (2).
People with Williams Syndrome can have significant and progressive medical problems. The majority of them have some type of heart or blood vessel problem. Typically, there is narrowing of the aorta (producing supravalvular aortic stenosis, or SVAS) or narrowing of the pulmonary arteries (3). There is an increased risk of developing blood vessel narrowing or high blood pressure over time.
Children with Williams Syndrome may have elevations in their blood calcium level. Children with hypercalcemia can be extremely irritable and therefore may need dietary or medical treatment (2). In most cases, the problem is resolved naturally during childhood, however the abnormality in calcium or Vitamin D metabolism may continue for life. Along with these abnormalities, many children have feeding problems because of low muscle tone, severe gag-reflex, and poor sucking and swallowing. Because of this most children have lower birth-weight than their brothers or sisters and their weight gain is slow (2). My cousin has a younger brother, now 4 years old, who is more 'plump' looking than his brother. I am assuming they would be the same height in a year or two; my cousin with Williams Syndrome is small for his age and is not as tall as the average 7-8 year old. As a result, they are smaller than average when fully mature.
Although my cousin may have 'weaknesses' and 'differences' from the average person, he shines in his own unique qualities, which should be considered before categorizing him as 'the one with a disorder.' From this research I can conclude that despite the possibility of medical problems, most people with Williams Syndrome are healthy and lead active, full lives.
1)The Williams Syndrome Foundation, The umbrella organization for Williams Syndrome foundations, groups, and sites.
2)The Lili Claire Foundation, An organization made from one family that dealt with Williams Syndrome
3)Medical Site on Williams Syndrome, a detailed medical site on Williams Syndrome
4) The Williams Syndrome Foundation UK, The UK Williams Syndrome Foundation
Euthanasia: Should humans be given the right to pl Name: Mahjabeen Date: 2002-09-28 16:28:26 Link to this Comment: 2973 |
The term 'Euthanasia' comes from the Greek for 'easy death'. It is one of the most debated public policy issues today. Also called 'mercy killing', euthanasia is the act of purposely making or helping someone die, instead of allowing nature to take its course. Basically, euthanasia means killing in the name of compassion. In reality, however, it promotes abuse, gives doctors the right to murder, and contradicts religious beliefs.
Whether one agrees or not, past experiences as well as the present continuously point out that euthanasia promotes abuse. Dr. J Forest Witten warned that euthanasia would give a small group of doctors "the power of life and death over individuals who have committed no crime except that of becoming ill or being born, and might lead toward state tyranny and totalitarianism." (1)
An example of this very statement by Dr. J Forest Witten was seen in Pennsylvania in 1947, when forty-seven-year-old Ellen Haug admitted having killed her ailing seventy-year-old mother with an overdose of sleeping pills. Her excuse was that she couldn't endure her mother's crying and misery. Ellen said that her mother had suffered too long and that she herself was on the verge of collapse; her excuse was that "if something had happened to her, what would have become of her mother?" (2) Her reason was not only vain; as a matter of fact it was very selfish. Ellen was not putting her mother out of misery, she was ridding herself of a responsibility. She was merely taking advantage of being able to call her cold-blooded murder euthanasia. Likewise, a recent Dutch government investigation of euthanasia came up with some disturbing findings. In 1990, 1,030 Dutch patients were killed without their consent. Twenty-two thousand five hundred deaths were caused by withdrawal of support; 63% of those patients (14,175) were denied medical treatment without their consent, and twelve percent of those (1,701 patients) were mentally competent but were not consulted. These findings were widely publicized before the November 1991 referendum in Washington State, and contributed to the defeat of the proposition to legalize lethal injections and assisted suicide.(3) Euthanasia, at the moment, is illegal in most parts of the world. In the Netherlands it is practiced widely even though it remains illegal. The Dutch experience is an ideal example of how euthanasia has promoted abuse in the past, and therefore, as the old proverb goes, we should "learn from past mistakes to avoid future ones".
Euthanasia gives physicians, who are only human, the right to murder. Doctors are people we trust to save and cure us; we regard them as people who have been trained to save our lives, but euthanasia gives doctors the opportunity to play God, and most seize this opportunity. A perfect example of an opportunist would be Dr. Jack Kevorkian, better known as "Dr. Death," who took advantage of his patients' sorrows and tragedies and murdered them. In fact, Kevorkian has helped more than 100 people commit suicide, and not all of his patients were terminally ill. In addition, in the late 1980s the lunatic created a machine for murder, a "suicide machine" that allowed a person, by pressing a button, to dispense a lethal dose of medication to himself or herself. Later, Dr. Kevorkian was sentenced to ten to twenty-five years in prison for second-degree murder for providing a lethal injection to a seriously ill patient.(4) Dr. Jack Kevorkian, however, is not the only example of a doctor who tried to "play God".
One can also learn a lot from the mass murder that took place in Germany during World War II. Over 100,000 people were killed in the Nazi's euthanasia program. During the War, the doctors were responsible for, selecting those patients who were to be euthanized, carrying out the injections at the killing centers, and generating the paperwork that provided a medically credible cause of death for the surviving family members. Surprisingly, organizations such as the General Ambulance Service, Charitable Sick Transports, and the Charitable Foundation for Institutional Care transported patients to the six killing centers, where euthanasia was accomplished by lethal injections or in children's cases, slow starvation.(5) Throughout the past and the present, euthanasia has given doctors an excuse to get away with their crimes; it has given mere humans the power to play God.
The physician's role is to make a diagnosis and sound judgments about medical treatment, not about whether the patient's life is worth living. Physicians have an obligation to provide sufficient care, not to refrain from giving the patient food and water until that person dies. Medical advances in recent years have made it possible to keep terminally ill people alive for great lengths of time, even without any hope of recovery or improvement. The American Medical Association (AMA) is well known for their pro-abortion campaigns and funding. Ironically, the AMA funds many hospices and other palliative care centers. They have a firm stand on life. The AMA has initiated the Institute for Ethics, designed to educate physicians on alternative medical approaches to euthanasia during the dying process.(6)
Other than promoting abuse and giving doctors the right to murder, Euthanasia also contradicts religious beliefs. Euthanasia manages to contradict more than just one religion and is considered to be gravely sinful. For instance, the Roman Catholic Church has its own opinion on Euthanasia. The Vatican's 1980 Declaration on Euthanasia said in part "No one can make an attempt on the life of an innocent person without opposing God's love for that person, without violating a fundamental right, and therefore without committing a crime of the utmost sin." It also says that "intentionally causing one's own death, or suicide is therefore equally wrong as murder, such an action on the part of a person is to be considered as a rejection of God's sovereignty and loving plan."(7)
In fact, Jewish Rabbi Immanuel Jakobovits warns that a patient must not shrink from spiritual distress by refusing ritually forbidden services or foods if they are necessary for healing; how much less may he refuse treatment to escape from physical suffering. As there is no possibility of repentance after self-destruction, Judaism considers suicide a sin worse than murder. Therefore, euthanasia, voluntary or involuntary, is forbidden.(8)
Islam too finds euthanasia to be immoral and against God's teachings. Actually, the whole concept of a life not worthy of living does not exist in Islam! There is absolutely no justification in Islam for taking a life to escape suffering. Patience and endurance are highly regarded and rewarded values in Islam. Some verses from the Holy Quran say: "Those who patiently persevere will truly receive a reward without measure" (Quran 39:10) and "And bear in patience whatever (ill) may befall you: this, behold, is something to set one's heart upon" (Quran 31:17). The Holy Prophet Mohammad (PBUH) taught, "When the believer is afflicted with pain, even that of a prick of a thorn or more, God forgives his sins, and his wrong doings are discarded as a tree sheds off its leaves." When means of preventing or alleviating pain fall short, this spiritual dimension can be very effectively called upon to support the patient, who believes that accepting and withstanding unavoidable pain will be to his or her credit in the hereafter, the real and enduring life. (9) This shows that euthanasia is contradictory to most religious beliefs and is certainly unacceptable to those who believe in God and the sanctity of life.
Euthanasia should not be legalized. It is by no means a solution to human suffering. Though euthanasia is a controversial subject, it is evident that it only disrupts the normal pattern of life and leads toward creating a more violent and abusive society. Life is a gift and not a choice and practices such as euthanasia violate this vital concept of human society.
(1) Humphry, Derek and Wicket, Ann. "The Right to Die:Understanding Euthanasia." End of Life and Euthanasia, the above-mentioned book can be found here.
(2) Humphry, Derek and Wicket, Ann. "The Right to Die:Understanding Euthanasia."
(3) Anti-Euthanasia Homepage
(4)Cavan, Seamus. "Euthanasia: The Debate Over the Right to Die."
(5) Humphry, Derek and Wicket, Ann. "The Right to Die:Understanding Euthanasia."
(6) American Medical Association Homepage
(7) Humphry, Derek and Wicket, Ann. "The Right to Die:Understanding Euthanasia."
(8) Humphry, Derek and Wicket, Ann. "The Right to Die:Understanding Euthanasia."
(9) Euthanasia and Islam.
Turner's Syndrome-A Woman's Disease Name: Melissa Br Date: 2002-09-29 15:00:13 Link to this Comment: 2993 |
Imagine that you are 13 years old. All your friends are growing: they are getting taller; they are starting to menstruate; they seem to know exactly what to say at the right moment. You, on the other hand, are conspicuously shorter than your peers; you don't have your period and you seem to blurt out whatever comes to your mind. You would probably feel that you are awkward and begin to develop low self-esteem. This could be the life of a teenage girl with Turner's Syndrome.
Turner's Syndrome is a chromosomal problem that affects one in every 2000 females (1). So in the tri-college community, there may be at least one woman with Turner's Syndrome (TS). Although you may not know someone with Turner's Syndrome, it can safely be assumed that you have unknowingly encountered someone with the condition because of its frequency. Turner's Syndrome is named after Dr. Henry Turner, who described some of the features of TS, like short stature and increased skin folds in the neck (1). TS is sometimes also called Ullrich-Turner Syndrome after the German pediatrician who, in 1930, also described the physical features of TS (1).
Why is it that TS only affects women? Well, TS arises from an abnormality in the sex chromosome pair. In the human body there are 46 chromosomes, grouped into 22 pairs of autosomes (all the chromosomes that are not sex chromosomes) and the sex chromosome pair, which determines whether a girl has TS. Men have an XY sex chromosome pair, where the X chromosome comes from the mother and the Y chromosome comes from the father. Women have an XX chromosome pair, with one X chromosome coming from the mother and the other X chromosome coming from the father. However, a female baby who has TS has only one X chromosome or is missing part of one X chromosome (1). The female baby may receive only one X chromosome because either the egg or the sperm ended up without a sex chromosome when the parent's cells divided to make sex cells. Alternatively, the baby girl may be missing part of one X chromosome, leaving a deficiency in the amount of genetic material (4).
TS is determined by looking at a picture of the chromosomes which is known as a karyotype. This technique was not developed until 1959(1). Karyotyping was not available to Dr. Turner and Dr. Ullrich in the 1930s. These doctors defined the disease by the physical features that a TS sufferer may have. Some of these are lymphoedema of hands and feet, or puffy hands and feet, broad chest and widely spaced nipples, droopy eyelids, low hairline and low-set ears. There are also clinical ailments that are associated with TS like hearing problems, myopia or short-sightedness, high blood pressure and osteoporosis. People who suffer from TS also have behavioral problems and learning difficulties (1), (3).
In spite of the physical, social and academic problems that a woman with TS may have, she can still be successful in life. Women who have TS have become lawyers, secretaries and mothers. It may be more challenging for a woman with TS to accomplish her goals, but they are not impossible. TS is a "cradle to grave" condition, which means that it is lifelong and must be treated throughout the sufferer's life span (1). Once a girl or woman has been diagnosed, she should come under the care of an endocrinologist, a doctor who specializes in hormones.
There are various medical methods that could be used to make the girl's life as normal as possible. Girls can have an average stature by undergoing growth hormone treatment before growth is completed. Oxandrolone, an anabolic steroid, can also be used to promote growth. Oestrogen is used when the girl is about 12 or 13 to produce physical changes like breast development and for the proper mineralization of bones. Progesterone should also be used at the appropriate time to start the period (1), (3).
Sufferers of TS also have problems like heart murmurs or narrowing of the aorta, which may require surgery. Women with TS are more prone to middle ear infections; if these recur frequently, they may lead to deafness, so a consultation with an ear, nose and throat specialist would be helpful. Some of the health concerns of women with TS are shared by all women. High blood pressure, diabetes and thyroid gland disorders all afflict women with TS, the latter two at a slightly higher rate than in women without the syndrome. Osteoporosis may start earlier in TS sufferers because the women lack oestrogen, so HRT (Hormone Replacement Therapy) may be considered to delay the onset of osteoporosis (1), (3).
Women who have TS are further challenged socially because they are disruptive; they blurt out whatever comes to mind and have difficulty learning social skills. A recent study suggests that women with TS may be more disruptive depending on whether the X chromosome comes from the mother or the father. If the woman's X chromosome came from her mother she has more problems learning good social skills than a girl whose X chromosome came from her father. The study insinuates that the X chromosome from the mother instructs the girl to misbehave while the X chromosome from the father tells her to control herself (2).
A girl's disruptive behavior may make her feel uncomfortable in social situations. Her discomfort increases if she has difficulty speaking clearly. However, visits to a speech therapist can improve her ability to speak well. Such behavior can be particularly detrimental in school. Furthermore, people who have TS usually have learning disabilities so they find school less appealing. Parents should present teachers with a leaflet entitled "TS and Education, An Information Leaflet for Teachers" which will help the teacher better instruct the child in class and make learning a less burdensome activity(1).
School is where children and teenagers spend most of their time. For girls who suffer from TS school becomes less welcoming during the pubescent years when social, physical and academic skills are increasingly important. Negative experiences can bring about low self-esteem. Young women who suffer from TS should join a support group where they can find allies and express their feelings. Alternatively, the reticent girl can keep a journal where she can privately reveal her concerns about her life as a TS sufferer. Parents who notice that their daughter is being adversely affected by her inability to "fit in" with her comrades should seek professional help (3).
There are many challenges faced by women who have TS. Some of these challenges require a lot of medical assistance, while others only require small alterations to the sufferer's daily life. TS is not an ailment that is intermittent or that can be cured; the woman with TS lives with the syndrome every day for the rest of her life. It is important to remember that TS is not transmitted from person to person; it is a syndrome born of chance, since the possibility randomly exists that a female embryo may not have two complete X chromosomes. Because TS does not affect men, and because we live in a patriarchal world, it can be overlooked despite the frequency with which women are born with it. We, as women, should be allies to highlight the diseases that only women have.
1) Turner Syndrome Support Homepage, gives information about Turner's Syndrome to those interested in TS.
2) Bizarre Facts in Biology, unusual biological information from recent studies
3) TeensHealth. Provides information about health problems faced by teenagers.
4) Endocrinology and Turner's Syndrome, gives information about how endocrinology is helping those affected by Turner's Syndrome.
Instinctive Behavior Name: Amanda Mac Date: 2002-09-29 15:50:53 Link to this Comment: 2994 |
Perhaps it can be said that the distinguishing factor between humans and animals is that animals act out of instinct and humans out of will. What are instinctive behaviors and do humans ever act out of instinct rather than their own will? This paper will determine innate activity and decide whether or not this may be an appropriate difference between animals and humans.
Ethologists, those who study animal behavior, believe that every species has routine movements that appear to be automatic and that relate to its structural systems (1). Konrad Lorenz, one of the leading scholars in this field, called these patterns "fixed action patterns" (2). In further defining instinctive behavior, ethologists identified particular characteristics, which include inherent structured systems and adaptive functions (1).
Inherent structured systems are highly correlated with innate activity; many behaviors of animals are sufficiently unvarying that they serve as characteristic features of bodily structures. For example, the web-spinning movements of a spider make direct use of its bodily construction, and the burrowing habits of marine worms depend on the operation of their body structure (3). Movements typical of instinctive behavior include eating, care of the body surface, escape from predators, social behavior, and sexual interaction. Most of these innate activities involve the particular usage of a physical structure that is specific to each species.
Instinctive behavior involves more than simple responses to an external stimulus; instinctive activity involves sequences of behavior that run a predictable course. These behaviors may last seconds, minutes, hours or even days. As an example, we can refer to a particular species of digger wasp, which finds and captures only honeybees. With no previous experience, a female wasp will unearth an intricate burrow, find a bee, paralyze it with a careful and precise sting to the neck, pilot back to her hidden home, and, when the larder has been supplied with the correct number of bees, lay an egg on one of them and seal the chamber. The female wasp's whole behavior is designed so that she can function in a single specialized way. Ethologists believe that this entire behavioral sequence has been programmed into the wasp by its genes at birth (3), reflecting the close link between heredity and instinctive behavior.
Given that instinctive behavior is supposed to be hereditarily based, and therefore shaped by the forces of natural selection, it follows that most of the outcomes of instinctive activity contribute to the preservation of the individual or to the continuity of the species; instinctive activity tends to be adaptive, meaning it fits a living organism to its surroundings. There are two different types of adaptation: one involves the accommodation of an individual organism to a sudden change in environment, and the other occurs during the course of evolution and hence is called evolutionary adaptation (1). Looking at the development of monotremes and marsupials, we can observe evolutionary adaptation. When Australia became a separate continent some 60 million years ago, only monotremes and marsupials lived there, with no competition from the placental mammals that were emerging on other continents. Although only two living monotremes are found in Australia today, the marsupials have filled most of the roles open to terrestrial mammals on that continent (3). Thus, these animals developed changes in their genetic structures over time, creating different innate behaviors.
Overall, one of the main distinctive features of instinctive activity is the ability to react to an external stimulus in the correct way the first time (and every time thereafter) the animal encounters it. This feature distinguishes instinctive behavior from what ethologists call learned behavior, actions that arise from conditioning an animal to respond the right way. Will, which can be defined as the power of choosing one's own actions (4), may be related to learned behavior; in order to choose, one must have a sense of what the outcome will be, making the choice learned rather than instinctive.
The physiological adaptations that made humans more flexible than other animals allowed for the development of a wide range of abilities and an unparalleled adaptability in behavior. The brain's great size, complexity, and slow maturation, with neural connections added through at least the first twelve years of life, mean that learned behavior largely modifies stereotyped, instinctive responses. So the behaviors that begin in heredity and adaptation are modified, in each individual, into learned actions. Scientists believe that each new infant, with relatively few innate traits yet a vast number of potential behaviors, must be taught to achieve its biological potential as a human (3). Therefore, many human actions are learned behaviors acquired by a brain that is genetically structured to take in learned information.
While animals mostly act out of instinctive behavior and humans, thanks to their particularly designed brain, act mostly out of learned behavior (which I have related to will), this is not a sufficient characteristic to distinguish humans from all other animals. Ethologists do believe that some features of human behavior are instinctive, such as raising the eyebrows as the eyes widen in social interactions, but this field remains unsettled. There are many arguments claiming that all behaviors within the animal kingdom are learned, and others claiming that most are instinct. Therefore, the difference between learned and instinctive behavior is not one that can cleanly separate animals from humans.
1) Encyclopedia Britannica Homepage, an online reference guide
2) Nobel Prize Homepage, an autobiography of Konrad Lorenz
3) Microsoft Encarta 2000, "Animal Behavior."
4) Flexner, Stuart Berg, ed. The Random House Dictionary of the English Language, 2nd Unabridged ed., "will." Random House: New York, 1987.
PMDD: Fact or Fiction Name: Margaret H Date: 2002-09-29 16:17:37 Link to this Comment: 2995 |
"PMS, PMDD, or whatever label you put on it, is, has been, and probably always will be one big excuse for being grumpy and nasty," posts Marianne E (1). A faceless Internet user posting her thoughts on a web forum, Marianne shares an opinion with many other Americans. Many people, mostly men, feel that female sexual disorders exist purely as a defense for a bad mood. A handful of women and a few members of the medical community might agree with Marianne. However, a significant amount of research and medical opinion contradicts Marianne's assertation. As many women can attest, PMDD, or Premenstrual Dysphoric Disorder, can be a fact of life.
It is estimated that 70-90% of women will experience some form of premenstrual discomfort at some point during their fertile years. Of those women, between 30-40% can be diagnosed as having Premenstrual Syndrome. Narrowing the field even more, 3-7% of those women have Premenstrual Dysphoric Disorder (2).
In general terms, PMDD can be considered a severe form of Premenstrual Syndrome, or PMS. Because the two disorders share many of the same symptoms, a problem results in distinguishing between the two. A simple answer exists in terms of severity: a woman with PMDD experiences the same ailments as a woman with PMS, only the woman with PMDD suffers to a far greater degree. The medical community has attempted to provide clinical descriptions to help specify these disorders. A PMDD website maintained by the drug company Lilly describes PMDD as a combination of psychological and physical effects occurring from one to two weeks before a woman begins her period (3). Furthermore, all of the symptoms associated with the onset of a woman's period can be separated into three categories: PMD, or Premenstrual Discomfort; PMS, or Premenstrual Syndrome; and PMDD, or Premenstrual Dysphoric Disorder. The most common symptoms associated with Premenstrual Discomfort consist of physical changes: bloating, weight gain, acne, dizziness, headaches, breast tenderness, cramping, backaches, food craving, and fatigue. Those symptoms associated with Premenstrual Syndrome tend to be more psychological changes: sudden mood swings, unexplained crying, irritability, forgetfulness, decreased concentration, and emotional over-responsiveness. Premenstrual Dysphoric Disorder consists of symptoms more commonly associated with chronic depression: sad, anxious, or empty moods; feelings of pessimism or hopelessness; emotions such as guilt or worthlessness; insomnia; oversleeping; change in appetite, resulting in weight gain or loss; suicidal thoughts/attempts; uncontrollable rage or anger; lack of self control; denial; anxiety; and frequent tearfulness (4).
PMDD is often confused not only with PMS, but also with depression. As previously mentioned, PMDD symptoms must be severe enough to inhibit the woman's day-to-day living in order to separate the disorder from PMS. PMDD affects a woman's work environment, personal relationships and family life. What separates PMDD from depression is the sudden disappearance of most symptoms shortly after a woman's period begins. To further complicate matters, if PMDD is left untreated for several years, the symptoms may override the menstrual cycle, occurring during ovulation or at any time during the cycle (5).
Because PMDD shares symptoms similar to many other disorders, debate exists over where to classify PMDD. The fourth edition of the Diagnostic and Statistical Manual for Mental Disorders (DSM-IV) lists PMDD in its index, calling it a depressive disorder (6). However, lack of information and understanding of exactly how PMDD works prevents it from being classified in an official mental illness category. Basic research links the onset of PMDD to neurological and hormonal differences in some women's bodies. A study completed by the National Institute of Mental Health linked PMS with abnormal levels of estrogen and progesterone (7). In the article introducing the study as it was published in the New England Journal of Medicine, Dr. Joseph Mortola wrote, "premenstrual syndrome is probably the result of complex interaction between ovarian steroids and central neurotransmitters," (7). A Psychiatric News bulletin describes how PMDD specifically works, "in a press release on the advisory committee's recommendation, Lilly said that although the etiology of PMDD is not clearly established, it "could be caused by an abnormal biochemical response to normal hormonal changes." Routine changes in estrogen and progesterone associated with menses may, in vulnerable women, induce a serotonin deficiency that could trigger the symptoms of PMDD." (8).
Some women's bodies cannot effectively handle the hormonal shifts that occur every week in a menstrual cycle. Lilly suggests that these women lack the level of serotonin, a neurotransmitter, needed to make smooth hormonal and emotional transitions from week to week. Several antidepressants have had the most successful results in terms of strong effects on serotonin levels -- the medical community has dubbed these drugs SSRIs, or Selective Serotonin Reuptake Inhibitors (9). The FDA has approved only two SSRIs for the treatment of PMDD: Sarafem and Prozac. These two drugs contain Fluoxetine, which is thought to correct the serotonin imbalance in women who experience PMDD.
Three options exist for treatment of PMDD (9). Doctors may choose to take a medicinal approach, administering antidepressants, antianxiety drugs or hormones. Health care providers may also try focusing on the psychobehavioral aspects of the disorder. This includes stress management, psychotherapy, and relaxation. The third option is nutritional modification, including dietary restrictions, extra vitamins, rigorous exercise, and herbal remedies. A woman is encouraged to speak to her gynecologist to find the most appropriate method of treating her PMDD.
Many factors contribute to PMDD being regarded as a controversial topic. Little is known about the disorder: the American Psychiatric Association has not formally accepted PMDD as a mental illness; PMDD is listed merely as a disorder. Many doctors have found homeopathic remedies to be most effective, thereby calling the value of Fluoxetine drugs into question. Furthermore, since such a small percentage of women suffer from PMDD, it is entirely possible never to hear a personal experience. After hearing just one woman's story, however, it becomes that much more difficult to doubt the legitimacy of her experience. With continued research, the medical field may be able to bridge the divide between those who see PMDD as fact and those who see PMDD as fiction.
1) It Sure Feels Real; forum response to the article "Women Behaving Badly?" by Neil Osterweil.
2)USA Today Health Section, "PMS and PMDD Cause Serious Suffering," by A.J.S. Rayl.
3)PMDD informational site, maintained by drug company Lilly
4)Essay: PMS and PMDD - an Expose", by Anthea.
5)informational site, maintained by drug company Lilly
6)ABCNews.com, "The PMDD Debate: A Real Condition, or Just PMS by another name?"
7)Medicine and Biology article,"Estrogen, Progesterone Implicated in Provoking PMS," by Kenneth J. Bender, Pharm.D., M.A.
8)Psychiatric News, FDA Panel Recommends Fluoxetine for PMDD.
The Biology Behind "Rolling:" Trends, Effects and Name: Emily Sene Date: 2002-09-29 16:43:31 Link to this Comment: 2996 |
MDMA (3,4-methylenedioxymethamphetamine), or ecstasy, belongs to a category of substances called "entactogens," which literally means "touching within." "It is a Schedule I synthetic, psychoactive drug possessing stimulant and hallucinogenic properties." (6) It is composed of chemical variations of the stimulant amphetamine (or methamphetamine) and a hallucinogen, usually mescaline.
A slide show of the chemical process that occurs when MDMA is introduced to the body can be found on the website www.dancesafe.org (2). Basically, the brain cells affected by ecstasy are the serotonin neurons. Each of these cells has multiple axon terminals, which release serotonin to the rest of the brain. The exchange of serotonin from cell to cell occurs in the gap between the axon terminal of the serotonin neuron and the dendrites of the next neuron. This region is called the synapse. Serotonin is critical to many brain functions, including the regulation of mood, heart rate, sleep, appetite, and pain, among others. As a result, it is extremely important that the neurons release the proper amount at the right time.
After the serotonin is released into the synapse, it comes in contact with receptors on the dendrite of another cell. When a molecule attaches to one of these receptors, it sends a chemical signal to the cell body. Based on information from all the receptors put together, the cell body decides whether or not to fire an electrical impulse down its own axon. If a certain amount of receptor binding occurs, the axon will fire, causing the release of neurotransmitters into the synapses of other cells. This is how brain cells communicate and regulate the amount of neurotransmitters present at any given time. Research has shown that the amount of serotonin receptor binding influences your mood. When more receptors are active, you are happier.
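To make the firing decision concrete, here is a minimal toy sketch in Python (my own illustration, not something from the cited sources; the threshold and signal values are made-up numbers chosen only to show the idea): the cell body adds up the signals from its bound receptors and fires only if the total crosses a threshold.

# Toy model of the cell body's "fire or don't fire" decision.
# Illustrative only: the threshold and signal strengths are invented numbers,
# not measurements, and real neurons integrate signals over time and space.

def cell_body_fires(receptor_signals, threshold=5.0):
    """Sum the chemical signals from all bound receptors and compare
    the total against a firing threshold."""
    return sum(receptor_signals) >= threshold

# Little receptor binding (a quiet synapse): the axon stays silent.
print(cell_body_fires([0.5, 1.0, 0.8]))        # False

# Heavy receptor binding (as after a surge of serotonin): the axon fires.
print(cell_body_fires([1.5, 2.0, 1.8, 1.2]))   # True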
Along with the receptors on the dendrite, serotonin molecules also bind to "reuptake transporters" on the axon's membrane. The transporters are responsible for reducing the amount of serotonin in the synapse once the cell body decides that there is enough receptor binding. One way to look at the system is to think of a revolving door: the serotonin enters on one side and the transporters spin around and deposit it on the other. Molecules can only move from the synapse back into the axon.
When MDMA is present in the brain, enormous amounts of serotonin are released into the synapse. This increases serotonin receptor binding, which changes the electrical impulses sent throughout the brain. MDMA also causes serotonin that has been removed from the synapse by the reuptake transporters to be re-released from the axon. In a sense, the revolving doors are frozen in the open position and the synapse becomes overwhelmed with serotonin. It flows freely into the receptors and is recycled over and over again. This alteration in normal brain function produces the effects associated with ecstasy, including euphoria, an enhanced sense of pleasure and self-confidence, peacefulness, and empathy. It is fairly common for a pill of ecstasy to be laced with other drugs, and this can alter the experience. If speed is present, for example, teeth-clenching, depressed appetite, and insomnia occur in addition to the effects of MDMA. Ecstasy begins to work on the brain after 20 to 40 minutes, with the peak effects arriving after the first hour.
After a few more hours, the MDMA begins to be broken down by the body, and the reuptake transporters resume normal functioning. They usually remove much of the serotonin from the synapse after approximately three hours, although enough remains to maintain the full effects. However, most of the serotonin will be gone by the end of the fourth hour. An enzyme called monoamine oxidase (MAO) is also present in the brain and aids in the breakdown of serotonin.
The state of the brain where serotonin levels begin to return to normal after being under the influence of MDMA is called "coming down." There are fewer activated receptors because so much serotonin has been released in the past few hours that most of the supply in the brain has been used up. At this point, there might be even less serotonin circulating than before the MDMA was introduced. Some people choose to take more MDMA at this point to make coming down easier, but eventually the drug will have no effect at all because there is no serotonin left in the entire brain. Serotonin levels might remain depleted for up to two weeks while the brain rebuilds its supply.
One negative side effect of ecstasy results from this period of recovery after taking a pill. Persistently lowered serotonin levels have been linked to depression. If MDMA is present in the brain on a regular basis, serotonin is never fully replenished before it is released all at once again. Normally, it takes a long time for the brain to produce new serotonin because doing so involves a complicated series of metabolic reactions. The process does not normally need to be accelerated, because the brain would never release such large quantities of serotonin without the influence of MDMA. As a result, when levels are depleted so rapidly, the brain needs time to recover. This is when people experience depression. (2)
The long-term effects of ecstasy use are still relatively unknown. One current theory is that ecstasy is a neurotoxin, meaning it causes permanent brain damage and psychiatric disorders later in life. Much of the current press on this issue is exaggerated, but studies have shown that MDMA damages the serotonin axons of lab animals. So far, studies of frequent human users (75 times or more) have shown reduced brain serotonin levels and reduced serotonin uptake, though no signs of cognitive or psychological problems were noted. Evidence regarding the memory loss associated with ecstasy use is inconclusive; it is still unknown whether this is due to neurotoxicity or to temporary changes in brain chemistry that correct themselves with time. (2)
Ecstasy is not a new drug. It was first synthesized in 1914 by a German pharmaceutical company that believed it had uses as an appetite suppressant. The drug first appeared in America sometime around 1970 and was legal until 1985. Although the most common use of ecstasy today is recreational, it was first utilized by a small group of psychotherapists as a tool to treat Post Traumatic Stress Disorder. However, its effectiveness was outweighed by its unknown and unpredictable side effects. Shortly after its use in psychotherapy was discontinued, a new market for MDMA emerged. Ecstasy entered the illicit drug culture around 1980, but it was not widely adopted until it was picked up in the club scene and began appearing at raves. (6)
The popularity of ecstasy has continued to increase in recent years. A Harvard University study of 14,000 college students from 119 U.S. four-year colleges revealed that the prevalence of ecstasy use increased 69% between 1997 and 1999, and a smaller sample of ten colleges showed that this trend remained consistent in 2000 (1). The study concluded that "ecstasy use is a high risk behavior among college students which has increased rapidly in the past decade."
Talented Emily Name: Jennifer R Date: 2002-09-29 18:09:07 Link to this Comment: 2997 |
"The house is mte witout you." That is a sentence from a letter my twelve-year-old sister wrote me last week. Emily has what doctors refer to as "auditory dyslexia", which in simple terms means that her brain doesn't properly process what she hears. Emily was officially diagnosed with dyslexia four years ago. After five years in the Special Education Program (SPED) program in the Boston Public Schools and a long battle my parents fought with the city, Emily was finally diagnosed with dyslexia. It took five years for doctors to figure out that Emmy was not going to start reading like everyone else. With those five years behind her, Emily, who is currently in the 5th grade at a new school, reads at a 3rd grade level, overcoming a two and a half-year disadvantage in one year.
When I found out my sister had auditory dyslexia, I did not know anything about it. I was baffled about what was wrong with her, and I immediately took an interest in finding out more about what exactly it was and how she got it. Just from the dictionary I learned that "dys" means 'difficulty' and "lexia" means 'words'. In more complicated scientific terms, dyslexia is one of several distinct learning disabilities. It is a specific language-based disorder of constitutional origin characterized by difficulties in single word decoding, usually reflecting insufficient phonological processing abilities. These difficulties in single word decoding are often unexpected in relation to age and other cognitive and academic abilities; they are not the result of generalized developmental disability or sensory impairment. Dyslexia is manifest by variable difficulty with different forms of language, often including, in addition to problems reading, a conspicuous problem with acquiring proficiency in writing and spelling. (1)
Dyslexia is a complicated disorder and is not always easily detected. At first doctors thought that Emily had Attention Deficit Disorder (ADD) because it was so common among kids her age. With daily medication, ADD is treatable; however, Emily had something that medicine was not going to fix. There are different types of dyslexia, and identifying the right one matters because it determines what treatment will work. Emily's doctors explained that children with visual dyslexia often recognize individual letters but have trouble getting them in the right order. As well as visual dyslexia, many children experience an auditory form of the condition, making it hard for them to recognize different sounds, hold information in short-term memory, and process language at speed. (2)
This explains why word problems and sentence formation were Emmy's biggest problems. It is important to be able to distinguish between the different types of dyslexia, especially in the treatment process.
Many dyslexics also have behavioral issues mainly due to low self-esteem at an early age. However, a study done in London argues that auditory dyslexics tend to be innocent and therefore vulnerable; they have no behavioral problems, other than those caused by the frustration of their disability. (3)
This was the problem with the Boston Public School system; it would place all learning disabled children in the same class without addressing each individual's needs. Emily was put into a classroom with thirty-five children, more than half of whom had severe behavioral problems. With nothing being accomplished in the BPS system, my parents started to search for different alternatives.
Like most forms of dyslexia, auditory dyslexia does not have a definite cause, but doctors and scientists study dyslexia in depth every day. According to a study published in the July 15 issue of Biological Psychiatry, dyslexia is caused by a genetic flaw affecting the part of the brain used for reading. While non-impaired reading is concentrated in the back region of the brain, where letters and sounds are integrated, the researchers found that this area is disrupted in children who are dyslexic: their brain activity during reading is concentrated in the frontal region, which governs articulated speech. (4)
It is also proposed that dyslexia is genetic, which means doctors might one day be able to diagnose it in its earliest stages, allowing treatment and prevention early in life. In Emily's case, learning disabilities definitely run in the family: although theirs are not as severe as dyslexia, her father and older brother both have attention deficit disorder. Some social scientists argue that learning disabilities are common in dysfunctional families and are a product of bad parenting, but there are numerous studies that disprove that theory.
Emmy always hated school, ever since day one. It was always a struggle for Emmy to do her homework after school; even reading a few sentences from a reading book made her mad. At the age of seven, she was a wild, free-spirited little girl, but as soon as she entered the classroom she would hide under a protective shell my family refers to as attitude. Never meaning to be unkind, she did it automatically because she was scared of what her classmates would say when she got something wrong. She never raised her hand in class or volunteered to participate in anything. In the school situation, a dyslexic child may find he or she is experiencing failure but is not able to understand why. This frequently results in low self-esteem and a severe loss of confidence, which can lead to the child being reluctant to go to school. At this stage something has to be done, and this is when a lot of parents seek specialist help and advice. (5)
At home, however, she was a natural-born actress. It was clear that Emily had other skills that made her who she was. She needed to be in an environment where she could exercise her other talents and explore new options.
As first grade came to an end, my parents had to decide whether or not to keep her back a year. With this decision, they decided to get her tested for learning disabilities. Test after test came back negative; "All Emily needs to do is work on reading a little extra every night," psychologists told my parents. My mother wouldn't accept what the doctors told her. After two years of special reading programs and daily after-school extra help, she resorted to what most people would not have the time to do: she hired doctors from Children's Hospital in Boston to perform the same tests on Emily. When those tests came back, it was evident that Emily had auditory dyslexia and needed help. At that time, Emily was in the third grade and just barely reading at a first-grader's level. The doctors were clear about what Emily needed: a new school. She was put on a waiting list at the Carroll School in Lincoln, Massachusetts, where the focus is on learning disabled children.
Doctors were confident that if Emily received the proper teaching methods for her dyslexia, she would catch up to her grade level in no time at all. Given the proper help, in most cases a dyslexic child can succeed at school at a level roughly equal to his or her classmates. Moreover, dyslexic children often have talents in other areas, which can raise their self-esteem if they receive lots of praise. Artistic skills, good physical co-ordination, and lateral or creative thinking are often areas in which they excel. (6)
In the 4th grade, Emily was accepted to the Carroll School after attending their summer program, where she has flourished ever since.
Currently, Emily is a fifth grader at the Carroll School where she is heavily involved in school activities and participates in class discussions, a tactic that was foreign to her until now. At the Carroll, classes are small, with six to eight students. All teaching is direct, multisensory, and integrates technology into the learning process. (7)
Doctors who work with Emily at the Carroll estimate that she will only need two more years there in order to catch up to the level she should be at. Emily's self-esteem has skyrocketed, and she feels confident in whatever she does. In her new school and classes, she participates in school productions, learns something new about the Internet and computers every day, and is captain of the girls' basketball team. She has brought her classroom skills out into the community as well.
Fad Diets: Seduction and Deceit Name: Anne Sulli Date: 2002-09-29 18:55:26 Link to this Comment: 2999 |
Americans have long been plagued with the serious problem of obesity. As the country obsesses over weight loss and the newest diet plans, the population ironically continues to gain body fat. The basic premises of healthy living seem simple: eat a balanced diet while remaining physically active, and burn more calories than you consume. Americans are even given specific guidelines, outlined in the food pyramid, as to the appropriate quantities of each food. Why, then, is obesity one of the leading health risks confronting Americans? It may be that the seemingly "simple" and healthy road to weight loss is actually an arduous and long-term process. It is therefore enticing to substitute sensible diets and exercise regimens with what are known as "fad diets," diets that promise quick and easy results. These diets have achieved enormous popularity despite the copious research demonstrating their dangers and ineffectiveness. The following exploration will hopefully elucidate many of the mysteries and myths surrounding "fad diets."
Although they may assert very different "truths" about human biology and the resulting dietary needs, most fad diets share several common characteristics. The majority claim to provide revolutionary information and insight but are, in fact, simply replicas of older fad diets (2). Many posit the sweeping claim that a specific food or group of foods is the "enemy" and should be banned from one's diet (2). This is a myth: no single product is capable of causing weight gain or loss on its own (2). Fad diets usually promise immediate results and offer lists of "good" and "bad" foods (5). They are usually not supported by scientific research or evidence; rather, the information they provide is derived from a single study or from an analysis that ignores variety among human beings (5).
The popular diet commonly known as "The Zone" falls into the category of fad diets. This plan was created by Barry Sears, Ph.D., author of The Zone, in 1995. Sears' principal argument is that human beings are genetically programmed to function best on only two food groups: lean proteins and natural carbohydrates (3). He claims that the cultivation of grains is a modern development and that our genetic makeup has not yet evolved to require such foods. Essentially, carbohydrates cause excessive weight gain and are responsible for America's obesity epidemic (3). Consumption of carbohydrates, according to Sears, stimulates insulin production, a process that converts excess carbohydrates into fat (3). He argues that America's phobia of fat has inspired a counterproductive solution: substituting complex carbohydrates for fat (2). Critics of this diet argue that Sears' theory regarding insulin production is an "unproven gimmick" (4). The diet is potentially dangerous because scientific research observes a strong correlation between animal fat, which contains more carcinogens from industrial waste than any other product, and cancer (4). Sears also ignores both the problem of cholesterol and the fact that vegetarians have a smaller chance of developing heart disease and cancer (3).
A second well-known fad diet is called Sugar Busters!. This plan, created by H. Leighton Steward and associates, labels sugar as the enemy because it triggers the release of insulin and is then stored as body fat (6). Sugar Busters! demands that both refined and processed sugars be abolished from one's diet (this includes potatoes, white rice, corn, carrots, and beets) (6). The revised diet thus becomes a high-protein, low-carbohydrate plan that poses the same threats as "The Zone." Sugar is not, in fact, naturally toxic, and it is dangerous to eliminate complex carbohydrates, which are a good source of fiber (6). Again, this plan calls for the complete elimination of a certain food, ignoring the fact that the human body needs a multitude of foods to remain healthy (6). Other fad diets include Protein Power Lifeplan (5) and Dr. Atkins' New Diet Revolution (5), which also malign carbohydrates. Both of these diets promote high-fat foods, which increase one's risk for heart disease, cancer, high cholesterol, and liver and kidney damage (5).
Fad diets are clearly extreme and often irrational plans that lack valid evidence and scientific research. Aside from being unhealthy, they are often ineffective as well. High-fat diets may promote short-term weight loss, but most of the loss is caused by dehydration (4): as the kidneys try to dispose of the excess waste products of fats and proteins, water is lost. High-fat diets are also low in calories, causing the depletion of lean body mass with little fat loss, another reason for immediate weight loss (4). Fad diets argue that the human body responds to carbohydrates in a way that causes weight gain. If Americans are gaining weight, it is due to the quantities they consume; the excessive calories, not the carbohydrates themselves, encourage obesity. If fad diets work, in spite of being extremely unhealthy, it is because one's calorie intake decreases (The Zone's recommended diet calls for less than 1,000 calories a day) (1). There is nothing miraculous about the foods these diets prescribe. Furthermore, these diets are extremely difficult to maintain, since they often ban certain products and require the repeated consumption of others, making long-term weight loss impossible.
A proper diet should place long-term health before immediate results. Fad diets do just the opposite: long-term use of these plans may pose serious health risks. They tend to be low in calcium, fiber, and other important vitamins and minerals (2). As previously stated, fad diets are usually high-fat diets. This presents a host of dangers: increased risk for heart disease and atherosclerosis (a hardening of the arteries), and an increase in low-density lipoproteins (LDL), which carry cholesterol to the body's tissues, are among the most serious (2). Furthermore, a drastic reduction in carbohydrates causes the body to believe that it is being starved (7). Continued practice of these extreme diets may cause irreversible damage to the liver and kidneys. The liver converts proteins into the necessary amino acids, and urea and nitrogen are the two by-products of this process (7). Excessive protein therefore places great stress on the kidneys and liver (7).
The obvious health dangers posed by fad diets, combined with their failure to encourage long-term weight loss, would logically deter people from embracing these "gimmicks." They continue, however, to remain the preferred substitute for healthy diet and exercise plans. What is so appealing about fad diets? Our world is set up in a way that encourages obesity. Modern transportation and technology have rendered physical activity unnecessary (1). In addition, Americans have access to an enormous variety of delicious, and often unhealthy, foods. It clearly requires great effort to maintain a healthy weight. Rather than suffering through the long and difficult process required by sensible diet plans, most people are content to embrace the "easy fix": the fad diet (1). It is, after all, human nature to seek the easy route, the short cut. When someone knows one person who lost weight quickly, he or she is likely to ignore the warnings in the quest for fast results.
It is important to note that it is entirely possible for fad diets to prove effective for certain individuals. Each person's body is different, operating and reacting to particular diets in different ways. Although fad diets are, in general, dangerous and ineffective, they may work for the particular individual whose body happens to respond positively to such extreme constraints. Similarly, some of these diets show signs of a rational philosophy. Sugar Busters!, for example, advocates caution against sugar products, an argument that is indeed valid (it is only when this plan is taken to the extreme that it becomes dangerous). This points to perhaps the most significant gap in fad diet theory: its disregard for the great diversity in human genetic makeup. Fad diets operate under the assumption that the body functions and responds to certain foods in a standard and fixed way. Diversity, however, is the most basic principle in human biology. What works for one person may be completely ineffective for another. The fact that fad diets blatantly disregard this most fundamental truth renders them unreliable and ineffective.
2) Fad Diets: What You May Be Missing
3) Key #1: Follow The Zone Diet
5) Popular Diets: The Good, The Fad, and The Iffy
6) Is the Sugar Busters Diet For You?
7) Protein Fad Diets: Knowledge Does Not Always Alter Behavior
Children and Bipolar Disease Name: Heather D Date: 2002-09-29 20:15:52 Link to this Comment: 3001 |
For the past 11 years I have been working with children at my church, and I have found it disturbing that over the past several years the number of students we have had with major learning disabilities has skyrocketed. We had students with a wide range of problems, from obsessive-compulsive disorder (OCD) to attention-deficit disorder with hyperactivity (ADHD), from conduct disorder (CD) to oppositional-defiant disorder (ODD). Eventually, one out of every five children we had showed some form of learning disability. Though almost all of these children were undergoing some form of treatment or therapy, there were a few who never seemed to get better. One student in particular seemed to get worse as he received more treatment. At first he was diagnosed with ADHD because he could not concentrate on one particular task. However, when he started receiving treatment for ADHD (including a heavy dose of Ritalin), his behavior became more erratic and at times violent. Then he was diagnosed with ODD, but the same problem occurred when he started that treatment. Finally, this spring, he was diagnosed as bipolar, and now that he is receiving the right treatment, he can finally live a somewhat normal life.
It is now estimated that upwards of one million children in the United States are suffering from early onset bipolar disorder, and that more than half are not getting the help they need (1). Though this statistic may be somewhat shocking, it is also evidence of a much-needed change in the way we think about bipolar disorder. Originally, it was thought that bipolar disorder was strictly an adult disease. Children with bipolar disorder were often labeled as learning disabled, or simply as "bad kids," when in reality these children are suffering from a serious and frightening disease. Bipolar disorder is being recognized more and more often in children, and is only now being researched. As researchers learn more about these children, they are realizing that the disorder can be even more frightening in children than in adults.
"Typically adults with bipolar disorder have episodes of either mania or depression that last a few months and have relatively normal functioning between episodes, but in manic children we have found a more severe, chronic course of illness. Many children will be both maniac and depressed at the same time, will often stay ill for years without intervening well periods and will frequently have multiple daily cycles of highs and lows. These findings are counterintuitive to the common notion that children would be less ill than their adult counterparts," states Barbara Geller, MD, head researcher from Washington University School of Medicine in St. Louis (2).
This rapid cycling is what has made it hard for doctors to diagnose these children with bipolar disorder rather than with typical hyperactivity disorders.
Another major problem with bipolar disorder in children is that no clear treatment path has been established. While it is known that the medicines used for hyperactive children do not work at all and can actually make the disorder worse, it is not known how other medications affect the bipolar child. Lithium, traditionally used for most adult patients dealing with the disorder, has only been successful with a small number of bipolar children. Mood stabilizers are much more effective in children, but because there are so many varying types, it takes a long time to find the "right" drug for the child. These "stabilizers" are only half of the drug cocktail these children need, though. There is also the need for an antidepressant that will not send these kids flying into mania, as well as a medication that calms their manic rampages without sending them into a nasty depression (3).
Many people are now saying that these children simply need psychotherapy and that overmedicating the child is worse than the actual disease. However, it has been shown that if the child is not medicated, most therapy is wasted and has no long-term value for that child, because the disorder itself keeps them from processing it. Also, if the child goes completely unmedicated, he or she can develop much more serious symptoms later on, such as delusions, hallucinations, borderline personality, narcissism, or antisocial personality (3). With the threat of failing in school and even suicide, the need for medication is pressing.
I guess the question that follows this research is how we find the right balance between medication and therapy that allows our children to get the most out of their lives. As of now there is not even a test to properly diagnose these children with bipolar disorder, because the standard adult test often does not apply to them given the rapidness of their cycling. More research must be done to ensure these children a more normal life, because with the genetic nature of bipolar disorder, this disease is only going to affect more people who will need this help.
1) Time Magazine Homepage, an article on children with bipolar disorder
2) "Child Psychiatry Researchers from Washington University School of Medicine in St. Louis Report Bipolar Disorder in Children Appears More Severe than in Most Adults."
3) Child and Adolescent Bipolar Foundation Homepage
Sugar, a Trick or a Treat? Name: Anastasia Date: 2002-09-29 21:10:46 Link to this Comment: 3004 |
As parents walk the streets with their children going door-to-door trick or treating, do they realize the severity behind this celebration of collecting refined sugar? As enthusiastic citizens donate king-size Snickers to the cause, do they believe they are making a five-year-old's dreams come true, or are they aware that cavities and weight gain will result from their kindness? As children dump out their night's accomplishments onto the kitchen floor, do they realize that consuming all that candy could result in diabetes? Halloween, although fun, could lead to future problems for all participants. Why aren't there police patrolling the streets trying to stop all the madness that occurs on this one night? How could there be a holiday celebrating the decay of humans everywhere? If sugar is really that bad for you, why do children and adults everywhere enjoy it so much? The truth must be out there somewhere.
The sweet truth behind sugar is that it really isn't as bad as the "experts" make it out to be.
The most common myth surrounding sugar is probably that it causes hyperactivity. Hyperactivity is excessive physical activity of emotional or physiological origin, usually seen in young children, and is one of the components of attention deficit hyperactivity disorder. The cause of ADHD is unknown, although there appears to be a genetic component in some cases. Intake of sugars, preservatives, and artificial flavorings is no longer considered to be a factor. In most cases, sugar and carbohydrates actually seem to have calming effects on children.
A second myth needing correction is that sugar causes diabetes. Diabetes is a chronic disorder of glucose (sugar) metabolism caused by inadequate production or use of insulin, a hormone produced in specialized cells in the pancreas that allows the body to use and store glucose. The lack of insulin results in an inability to metabolize glucose, impairing both the capacity to store glycogen (a form of glucose) in the liver and the ability to transport glucose across cell membranes. Diabetes is the result of many factors, including genetics and lifestyle.
Myth number three lingers in many of the conversations that occur within Weight Watchers walls and among body-conscious individuals. It seems that many misinformed dieters believe that sugar itself causes weight gain, and correcting this one idea may be the key to their success. When our body takes in more energy (calories) than it can use, it stores the unused energy as fat, which leads to weight gain. No individual food alone causes weight gain, since all foods contain calories, and sugars contain similar amounts of calories as most other carbohydrates and proteins. It is also interesting to note that since "sugar free" items have taken over the shelves in supermarkets, obesity numbers in the United States have risen. "Overweight and obesity are among the most pressing new health challenges we face today," says U.S. Department of Health and Human Services secretary Tommy G. Thompson. "Our modern environment has allowed these conditions to increase at alarming rates and become a growing health problem for our nation."
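To see why the arithmetic, rather than any single food, drives weight gain, here is a rough back-of-the-envelope sketch in Python. It is only an illustration: it uses the common, admittedly simplified, rule of thumb that roughly 3,500 excess kilocalories end up stored as about one pound of body fat, and the intake and expenditure numbers are hypothetical.

# Back-of-the-envelope energy-balance arithmetic (illustrative numbers only;
# assumes the rough rule of thumb of ~3,500 kcal per pound of body fat).

KCAL_PER_POUND_FAT = 3500  # widely quoted approximation, not an exact constant

def pounds_gained(daily_intake_kcal, daily_burn_kcal, days):
    surplus = (daily_intake_kcal - daily_burn_kcal) * days
    return surplus / KCAL_PER_POUND_FAT

# Eating 2,500 kcal a day while burning 2,250: whether the extra 250 kcal
# come from sugar, fat, or protein does not change the arithmetic.
print(round(pounds_gained(2500, 2250, 30), 1))  # about 2.1 pounds in a month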
Many individuals claim that their sugar intake is driven by a sugar addiction. Not possible, claim many sugar experts.
To make a long story short, sugar has been the scapegoat for several years. Sugar is not a "bad" food; in fact, it is essential to human life. It is important to remember, however, that maintaining a healthy lifestyle means eating in moderation. If your energy needs are low, go easy on the amount of sugars you consume, as well as the amount of fat. Try consuming mostly nutrient-dense foods, which provide other nutrients besides sugar or fat. Don't be scared to eat sweets once in a while. Dress up on Halloween and don't be afraid to bring the biggest pillowcase you can find to make sure you collect the largest amount of candy possible. Sugar, once considered a trick, has really proven to be a treat.
WWW Sources
1) The Sweet Truth About Sugar, challenges the myths
4) Attention Deficit Hyperactivity Disorder
5) Obesity Problem Getting Worse in USA
It is commonly believed today that foods high in sugar are bad for you. For example, chocolate, since it is considered candy, is thought of as empty calories with no nutritional value. Recent studies suggest, however, that certain forms of chocolate have health benefits. This guilty pleasure contains fats that may be good for the body. According to a Hershey study, some milk chocolate products contain conjugated linoleic acid, also known as CLA; this trans fat is believed to fight cancer in animals. A second study, by the Nestle Research Center, found that dark chocolate might help lower cholesterol: ten men fed dark chocolate experienced a drop of 15% in their low-density-lipoprotein (LDL) cholesterol levels. Another study, at the University of California, Davis, found evidence of phenolics in chocolate. Phenolics are the same chemicals found in red wine that help lower the risk of heart disease; they reduce the oxidation of LDL, preventing it from creating plaque in the arteries.
Multiple Personality Disorder Name: Diana La F Date: 2002-09-29 21:40:04 Link to this Comment: 3006 |
When you were growing up, did you have an imaginary friend? Did Mom and Dad have to set a place for Timmy at the table and serve him invisible food, or did all your aunts and uncles have to pet your imaginary puppy when they came over to the house? That's just pretend, though, kids having fun. So is a child pretending to be someone else, forcing their parents to call them Spike, convinced they have a Harley even though they're only five. But what if this were an adult, someone who should "know better," convinced that they are someone else? If this were to happen, society would label them as crazy or delusional. Or, maybe, this adult suffers from Multiple Personality Disorder.
Multiple Personality Disorder (or MPD) is a psychological disorder in which a person possesses more than one developed personality. These personalities have their own ways of thinking, feeling, and acting that may be completely different from those of another personality (1). To be diagnosed with multiple personality disorder, at least two of the personalities must take control over the others on a somewhat frequent basis (2). This results in an abrupt change in the way the person acts; essentially, they become another person in either an extreme or complete way (3).
MPD was first recognized in the late nineteenth century by Pierre Janet, a French physician. The disorder was later brought to wider public awareness by The Three Faces of Eve (1957), a movie based on the true story of a proper housewife who was diagnosed with MPD when she couldn't explain why she would suddenly become a very sexual person and not remember it. The eighties and nineties brought on what was seen as an overdiagnosis of MPD (1).
MPD is known as Dissociative Identity Disorder (DID) in the psychiatric world (1). The reason for this change of label is that the term "multiple personalities" can be misleading (4). A person with MPD/DID is one person with separate parts autonomously comprising their mind; they are NOT many people sharing one body (5). Although these "personalities" may seem very different, it is important to understand that they are separate parts of the SAME person (4). It is not correct to say that someone with MPD/DID has "split personalities," as this denotes schizophrenia. A person with schizophrenia does not have connected thoughts and feelings; they are "split" (1). A person with dissociation, however, has memories, actions, identities, etc., that are unconnected: different thoughts and feelings may be connected to some memories and not to others. Everyone experiences this once in a while; daydreaming, getting lost in a book or a movie, and zoning out are all moments of dissociation (4). Just because someone has MPD/DID does not mean they cannot function in everyday life (2). Indeed, they usually have this disorder so that they CAN function.
As many as 20 personalities (perhaps even 37) have been reported in a single person (3). About 1% of the population has some form of MPD/DID. In fact, possibly up to 20% of patients in psychiatric hospitals have MPD/DID but are misdiagnosed. With these statistics, MPD/DID can be put into the same category as anxiety, depression, and schizophrenia as one of the major mental health problems of the present (4).
Although the causes of MPD/DID are not completely understood, childhood neglect and abuse of some sort appear to be the major ones (4). The abuse usually occurs early in life, before the age of nine, and is commonly repeated and prolonged (2). In response to this abuse, children may detach parts of their mind and create new personalities to separate themselves from their pain (3). After long-term abuse, this dissociation into new "personalities" may become second nature, and these children may use the technique to separate themselves whenever they feel anxious or threatened. Due to its ability to keep a sane, functioning part of a person's mind intact when all else seems hopeless, MPD/DID can be seen as a very effective escape technique (4). It is a healthy, sane, and safe way for these people to survive an unhealthy situation (2).
MPD/DID can be treated. The first treatment usually used is psychotherapy, which tries to help the person integrate the personalities (1). After that, medications, hypnotherapy, and adjunctive therapies are also used. In fact, if treatment is started and completed, MPD/DID may have the best prognosis of any disorder (6).
Everyone has different facets to their own personalities. Without this fact we would not be the complex beings that we are. A person with MPD/DID, however, may have very distinct facets that work independently of one another, sometimes not even knowing that the others exist. These various facets work together to keep the person whole. MPD/DID is a highly evolved psychological survival technique that is not to be looked down upon. Without it, the people who "suffer" from it may not be able to function in everyday life as well as they do, if at all.
1) Infoplease Education Network, an interesting educational network with many resources
2) MPD/DID information site, put together by a lady with MPD/DID
3) Medical Index, an interesting site with a great amount of information on many medical conditions
4) MPD/DID resource page, a site with a lot of information on MPD/DID
5) The International Society for the Study of Dissociation, another site with a lot of information on MPD/DID
6) Sidran Institute of Traumatic Stress Education & Advocacy, a site with abundant information and resources on traumatic disorders and treatment
Being Made Hole Name: Christine Date: 2002-09-29 22:12:25 Link to this Comment: 3007 |
Are you unable to deal with life's little miseries? Feeling stressed? Lethargic? Depressed? You could take a vacation. Or you could try meditation. Or yoga. Or you could try to achieve a permanent high by drilling a hole into your skull. Trepanation, or the drilling of a hole in the skull, is one of the oldest surgical procedures, with some trepanned skulls dating back to 3000 B.C. The oldest skulls have been found in the Danube Basin, but trepanned skulls have been found in virtually every country, even in America, with the highest concentration found in Peru and Bolivia (1). The word trepanation is derived from the Greek, meaning "auger or borer". More specifically, trepanation means "an opening made by a circular saw of any type" (1). Trepanation has been performed over the centuries for various reasons, including as a means to liberate demons or spirits from the heads of the possessed. Trepanation was also performed for therapeutic reasons, such as for epilepsy, headaches, infections, insanity, and a whole range of maladies. A third reason for trepanning is religious, where the rondelles, or disks of bone from the skulls, were collected and used for charms and talismans believed to have the power to protect the wearer from illness and accidents. Nowadays the procedure is believed to help the individual expand his or her consciousness and initiate a spiritual awakening that leaves the trepanned individual forever changed. Devotees of trepanation swear that a hole in their head gives them greater energy, improved concentration and mental capacity, elimination of stress-related diseases, and relief from other ailments that "come packaged with adulthood," leaving them feeling like kids again (2). Can drilling a hole into your head really hold such miraculous restorative powers, curing such a host of life's ills? Are solid-skulled humans one hole away from nirvana (5)?
Those who wish to be trepanned would have difficulty finding a surgeon in the United States to perform such a procedure; in fact, none will. Trepanation is performed in America only to relieve acute pressure on the brain, usually caused by a blow to the head (3). Any legitimate medical practitioner refuses to perform or recognize trepanning as a therapeutic practice, although a few international black market neurosurgeons will do so for the right price. Doctors interested in neurosurgery are required to take five to seven years of intense training to learn the techniques that make trepanning safe, and the notion of trepanning for recreational purposes has been called "quackery," "horseshit," "pseudoscience," and "nonsense" (4). Risks of drilling a hole into the skull include meningitis, blood clots, stroke, epilepsy, and the risk of a bone fragment embedding in the brain during the drilling (7).
However, the desire for a permanent high overrides the risk factors, and those who wish to be trepanned bypass the medical community and do the procedure themselves, usually with fellow supporters standing by in case of an emergency. Almost all of the information available on the procedure is based on first-hand accounts, including a video entitled "Heartbeat in the Brain", where devotee Amanda Fielding had her whole self-trepanation carefully recorded. Ms. Fielding wears old clothes and tapes sunglasses to her face so the blood will not impair her vision as she works. She starts by shaving her head and applying a local anesthetic to the spot to be trepanned, the ideal location being where the skull sutures have ossified, as there is less of a chance there is a blood vessel in that area. An incision is made with a scalpel, and then she starts in with the electric drill (6). In order to have the therapeutic effect, the hole needs to have between a one-quarter and one-half inch diameter. As soon as the skull is penetrated, the bleeding is prodigious. The skull piece is removed, the mess is cleaned, and the hole is bandaged. As the wound heals, skin grows over it, leaving behind a small indentation (7).
The miraculous restorative powers of trepanation have their origin in an alleged mechanism called "brainbloodvolume," coined by the founder of modern trepanning, Bart Hughes, a Dutch librarian. Mr. Hughes was almost Dr. Bart Hughes, but was thrown out of medical school in Amsterdam in the 1960s because he failed part of his medical exams and because of his advocacy of marijuana use. While in Ibiza, Mr. Hughes was taught that standing on his head for extended periods of time would get him intoxicated, and at a later date, after ingesting the drug mescaline, the mechanism of brainbloodvolume became clear to him. "[I realized] that it was the increase of brainblood that gave the expanded consciousness. An improvement of function must have been caused by more blood in the brain which meant there must have been less of something else. Then I realized that it must be the volume of cerebrospinal fluid was decreased" (8). Mr. Hughes believes that gravity and age rob an individual of the creativity and energy he or she possessed during childhood. Babies have high brainbloodvolume because the soft spot (the fontanel) on the head gives the brain room to pulse. As a baby grows, the soft spot hardens and the brain no longer has room to expand. The hardening of the skull, combined with gravity, saps the blood from the head, making the brainbloodvolume plummet (2). Trepanation supposedly reverses the blood loss by expanding the blood vessels in the brain, allowing them to supply more oxygen and glucose to brain tissue as well as speedily remove toxins (7).
Given the circumstances and conditions in which the mechanism of brainbloodvolume was conceived, could it hold merit? Two researchers at the U.S. Health Service conducted a study on cerebral circulation and concluded that the mechanism is far too complex to understand at the present time. However, they also drew two tentative conclusions: first, that the necessary level of cerebral circulation is maintained by uninterrupted fluctuations of cerebrospinal fluid (CSF), and second, that the limits placed on the speed of CSF volume fluctuations by the physical and neural characteristics of the brain are fundamental to protecting the central nervous system from mechanical injury due to fast and unexpected shocks (9). These tentative conclusions indicate that "the mechanisms of cerebral circulation are maintained by a complex and delicate balance that, far from deficient, can only operate if left unaltered" (7).
Along with these two researchers, other well-respected medical practitioners have vehemently opposed trepanation. They state that cerebral blood flow, not blood volume, is related to brain function, and that there is no evidence that drilling a hole into the skull will increase blood flow to the brain. Furthermore, since trepanation only affects the skull, nothing the trepanners are doing will affect the brain. That is, they do not touch the dura, the compartment that holds the cerebrospinal fluid, so the changes they claim to experience cannot be anatomically possible. Rather, doctors and scientists believe that the reported benefits of the procedure are most likely due to the placebo effect. While dozens of people around the world are being trepanned, it is safe to say that trepanation will not become a trend in today's society; rather, it will appeal only to the radical portion of the population. As the fields of medicine and psychology learn from their past mistakes, medical procedures of the past are abandoned and believed to be better off forgotten. The primitives do not always know best. Perhaps the holes in their heads really do make trepanned individuals feel good after all - just not for the reasons they believe.
1)Trephination, an Ancient Surgery
2)You Need it Like...A Hole in the Head, by Michael Colton
3)Brief History of Trepanation, from the International Trepanation Advocacy Group website
4)Cutting the Cranium, by Willow Lawson
5)The Hole Story, by Jon Bowen
6)The People With Holes in Their Heads, by John Mitchell
7)The Therapeutic Benefits of Trepanation - Try to Have an Open Mind, by Daniel Witt
8)The Hole to Luck, interview with self-trepanner Bart Hughes
9)Hemodynamics of Cerebral Circulation, by Yu Moskalenko and A. Naumenko from the International Trepanation Advocacy Group website
Albinism Name: Brenda Zea Date: 2002-09-30 00:17:46 Link to this Comment: 3011 |
Most people have a very biased and stereotyped view of people with albinism. Many see albinos as persons with white hair, white skin and red eyes. This is a common myth that has perpetuated itself because the truth about albinism is not widely known. One in 17,000 people in the United States has a form of albinism. (1) There are many different types of albinism, depending on the amount of melanin in a person's eyes. While some people have the fabled red or violet colored eyes, most albinos have blue eyes. Even fewer have hazel, brown or gray eyes. These discrepancies between reality and the red-eyed albino myth are the reason that most albinos do not even realize that they have a form of albinism. (1)
The two most common types of albinism are oculocutaneous albinism (also known as type-1 or tyrosinase-related albinism, which affects hair, skin, and eye color) and ocular albinism (which affects mostly the eyes, though the skin and hair may have slight discoloration). (1) Most albinos have serious vision difficulties. Their eyes do not have the correct amount of melanin, and during the fetal and infant stages of life this causes macular hypoplasia (abnormal development of the fovea in the retina) as well as abnormal nerve connections between the eyes and the brain. (2) Many are considered legally blind, or have such poor eyesight that they must use strong prescription bifocals. A few, however, have good enough visual acuity to drive a car. While limited eyesight can be a problem, many albinos have multiple sight deficiencies. Albinism often comes with nystagmus or strabismus of the eyes. Nystagmus is where the eyes tend to jump and jerk in all directions, while strabismus means that the eyes do not focus together as a "binocular team. An eye may cross or turn out." (2) This often results in crossed eyes or 'lazy eye'. (1) Albinos may also experience photosensitivity (sensitivity to light) or have astigmatism (a distorted field of view). When the eye does not have enough pigmentation, it cannot keep out excess light, making people incredibly sensitive to bright lights as too much light enters their eyes. (1)
Extremely rare forms of albinism, such as albinism with Hermansky-Pudlak syndrome, can involve problems with bruising, bleeding, and susceptibility to diseases that affect the bowels and lungs. (1) Of the rarer forms of albinism, Hermansky-Pudlak syndrome is the most frequent. Because persons with this syndrome can develop other physical problems in addition to their eye problems, their life span is often not as long as that of other albinos.
While this group of albinos faces risks from the syndrome itself, 'normal' albinism can create problems of its own. If an albino person spends too much time out in the sun (this occurs mostly in albinos from tropical countries), they can develop skin cancer. While most of these cancers are treatable, they can only be treated if the facilities are available. (3)
Fortunately, albinism is not very common in most cultures because it is either caused by a rare recessive gene (both of your parents must carry the gene in order for you to be albino) or, in an even rarer case, by a spontaneous genetic mutation. The most common type of inheritance is "autosomal recessive inheritance". (1) Only about 1 in 70 people even carries a gene for oculocutaneous albinism (OCA), and even if two carriers have children, each parent has only a 50% chance of passing the gene on. This means that for each pregnancy there is only a 25% chance that the child will inherit both albinism genes. (3)
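A quick calculation shows how these inheritance odds combine with the carrier frequency. This is only a sketch, assuming the 1-in-70 carrier figure quoted above, random mating, and standard autosomal recessive inheritance; the variable names and rounding are my own.

# Rough probability sketch for autosomal recessive albinism (OCA),
# assuming a 1-in-70 carrier frequency and random mating (illustrative only).

carrier_freq = 1 / 70          # chance that a given person carries one OCA gene
pass_on = 0.5                  # a carrier passes the gene to a child half the time

both_parents_carriers = carrier_freq ** 2      # about 1 in 4,900 couples
child_gets_both_copies = pass_on * pass_on     # 25% per pregnancy for carrier parents
affected_child = both_parents_carriers * child_gets_both_copies

print(f"Two carrier parents: about 1 in {round(1 / both_parents_carriers):,}")
print(f"Child with albinism: about 1 in {round(1 / affected_child):,}")
# Roughly 1 in 19,600 -- the same order of magnitude as the 1-in-17,000
# prevalence figure cited above.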
As albinism is such a rare occurrence, people who are albino are often met with hostility and misunderstanding. Albino children are often teased at school and find it hard to fit in (this is especially true when the child is from a normally dark-skinned race, since they stand out from their peers). A real eye-opener for the entire country came in 1998, when Rick Guidotti published a photo-journal in the June edition of Life Magazine. He was one of the first people to portray albinos as normal, fashionable people. While this helped the albino community a little, there is still not wide acceptance of albinism, as old stereotypes thrive even in modern culture. (4) (5)
1)NOAH National Organization for Albinism and Hypopigmentation, A national organization about albinism and albinos
2)Lowvision.org, A small site with interesting albinism facts and photos
3)The International Albinism Center at the University of Minnesota, An extensive website about albinism
4)Albinism Website, A small website about albinos in pop-culture
5)Rick Guidotti Homepage, A famous high-fashion photographer changes his image and focuses entirely on albinos and how to represent them to the world
Ocular Histoplasmosis Syndrome: The Science Versu Name: MaryBeth C Date: 2002-09-30 01:33:41 Link to this Comment: 3012 |
Ocular Histoplasmosis Syndrome is an abnormal growth of blood vessels under the retina induced by exposure to a particular kind of histo fungus. Though the manifestation of this syndrome in the eyes is rare, a significant portion of the population has been exposed to the fungus. As the syndrome develops, the part of the retina responsible for close, sharp vision deteriorates; eventually, without treatment, this can progress to complete blindness aside from peripheral vision.
It is extremely rare for the histo fungus to affect the eyes. Most commonly, the fungus manifests itself in the lungs, creating a lung infection that resembles tuberculosis (3). This infection, unlike the ocular one, is easily treatable with a prescribed anti-fungal medication. Though fungal infections from Histoplasma capsulatum are more unusual in the eyes than in the lungs, OHS is the most common cause of blindness in adults aged 20 to 40 (5). In contrast to the lung infection, when the fungus reaches the eyes the damage is irreversible and often difficult to detect and diagnose.
This progression, however, is not readily detectable in a routine eye check-up and requires a specific test involving close examination and pupil dilation. The examiner can detect damage to the macula, or the central part of the retina, by presenting the patient with an "Amsler grid" and judging how the patient sees it and whether the patient's vision has been affected (2). The examiner may also notice tiny histo spots or swelling of the retina. Once the disease has begun to develop, it is only treatable through surgical means, more specifically laser photocoagulation of the retinal cells. This photocoagulation process only prevents future vision loss and does not correct what has already been lost. The surgery is also only effective if the eye's fovea has not been damaged and only if the surgeon is able to eliminate all destructive cells in the retina.
Such was the progression of the disease in my uncle's eyes about ten years ago. His histoplasmosis went undetected and eventually progressed to partial blindness. However, my uncle's experience defied the typical progression in some ways. First, the destructive cells were never detected, even though he regularly visited an eye doctor. The deterioration continued until it again defied the typical OHS case, and he became completely blind in his right eye; generally, the histo cells affect only the center of the retina, the macula. In addition, the laser photocoagulation surgery did not stop the progression of blindness, but only delayed it. Like most people with ocular histoplasmosis, he did retain peripheral vision in his left eye. My uncle, however, also defied the odds for OHS sufferers and, though he had some of the most extensive progression of the infection, continued to live his life the way he always had. He became an active, and often victorious, member of our local Blind Golfers Association and continued to play basketball, watch sports, read as best he could, and compete in his gym's activities.
Doctors later speculated that the histo fungus could have been picked up in any of Bud's travels, through the "Histo Belt" of the central United States, or many years earlier in his travels to China and Japan. Though his travels in Asia were many years earlier, some of the doctors have suggested that the fungal cells could have remained dormant through the years until they surfaced in the early 90s. This incubation period is much different from that of the lung affliction, whose symptoms appear within two weeks. Research and information concerning histoplasmosis are constantly changing; when my uncle was first diagnosed, much of the information he was given was speculative and the surgery he received was still experimental.
Researching histo fungus, histoplasmosis, and ocular histoplasmosis syndrome raised more questions than it answered. Why the difference in symptoms? Why the difference in time for symptoms to appear? Why the eyes and lungs? Why are the lungs easily treatable and the eyes so difficult? One thing we may conclude, however, is that everyone should be tested for this infection, as it is the most common cause of blindness for young and middle-aged adults and, while incurable, easily delayed.
1)Ocular Histoplasmosis Syndrome, Useful for a general overview
2)Effectiveness of Laser Surgery, Procedure and statistics
3)Frequently Asked Questions, Information on the lung infection
4)Frequently Asked Questions, More general information
5)Histoplasmosis, Some new information
The Science of Shyness: The Biological Causes of Name: Adrienne W Date: 2002-09-30 01:41:37 Link to this Comment: 3013 |
Although many people are unaware of its existence, social anxiety disorder is the third most common psychiatric disorder, after depression and alcoholism, according to the Medical Research Council on Anxiety Disorders (1). The Diagnostic and Statistical Manual of the American Psychiatric Association defines social anxiety disorder, or social phobia, as "A persistent fear of one or more social or performance situations in which the person is exposed to unfamiliar people or to possible scrutiny by others...The avoidance, anxious anticipation, or distress in the feared social or performance situation(s) interferes significantly with the person's normal routine" (2). Although those who suffer from social anxiety disorder (SAD) are often perceived as shy, their condition is much more extreme than shyness. Unlike shyness, it is not simply a personality trait; it is a persistent fear that must have deeper roots than environmental causes.
As an anxiety disorder, SAD is classified alongside panic disorder, obsessive-compulsive disorder, posttraumatic stress disorder, and generalized anxiety disorder (3). The question is: what causes this behavior to occur? Is it simply a result of environment, or are there biological reasons? Although the present knowledge of SAD is incomplete, several causes are suspected: "a combination of genetic makeup, early growth and development, and later life experience" (4). It is my hypothesis that, in addition to environmental causes, there are also biological causes of SAD. According to current research, there is compelling evidence that brain chemicals and genetics contribute to the development of SAD.
Jerome Kagan, Ph.D., has researched the genetic causes of SAD at Harvard. In his study of children from infancy to adolescence, he found "10-15% of children to be irritable infants who become shy, fearful and behaviorally inhibited as toddlers, and then remain cautious, quiet, and introverted in their early grade school years. In adolescence, they had a much higher than expected rate of social anxiety disorder." This evidence suggests that some people are born predisposed to SAD, which indicates that there are biological factors that contribute to its development, not simply environmental factors. Kagan also discovered a common physiological trait in these particular children: they all had a high resting heart rate, which rose even higher when the child was faced with stress. Again, this physiological trait suggests biological causes of SAD. In this study, Kagan also found evidence linking SAD with genetics: the parents of the children with SAD have increased rates of social anxiety disorder as well as other anxiety disorders. Other research also suggests that SAD has genetic causes. According to the American Psychiatric Association: "anxiety disorders run in families. For example, if one identical twin has an anxiety disorder, the second twin is likely to have an anxiety disorder as well, which suggests that genetics-possibly in combination with life experiences-makes some people more susceptible to these illnesses" (3).
Evidence of anxiety is also apparent in the animal kingdom, which suggests that it is not simply the result of nurturing, it is an inherent attribute. In the book Fears, Phobias, and Rituals, Isaac Marks found that birds avoided prey that had markings similar to the "vertebrate eye," eye-like markings on other animals, such as moths. In his experiment, these eye-spots were rubbed off of moths. As a result, they were less likely to be eaten and more likely to escape from a predator. Marks concluded that the birds feel scrutinized by the gaze of another animal and thus avoid the "eyes," much like humans with social anxiety avoid situations in which they feel scrutinized or avoid eye-contact. His research suggests that biological factors influence a form of social anxiety in animals.
In addition to genetic causes, there is also evidence that SAD is caused by chemical disturbances in the brain. It is probable that four areas of the brain are involved in our anxiety-response system: the brain stem, which controls cardiovascular and respiratory functions; the limbic system, which controls mood and anxiety; the prefrontal cortex, which makes appraisals of risk and danger; and the motor cortex, which controls the muscles. These parts are supplied with three major neurotransmitters: norepinephrine, serotonin, and gamma-aminobutyric acid (GABA), all of which play a role in the regulation of arousal and anxiety. Research shows that "dysregulation of neurotransmitter function in the brain is thought to play a key role in social phobia. Specifically dopamine, serotonin, and GABA dysfunction are hypothesized in most cases of moderate to severe SP." Researchers continue to investigate whether neurocircuits play a role in the disorder. If this hypothesis proves to be true, it will further support the idea that SAD has biological, possibly genetic, roots (1). However, the neurobiological information alone indicates that there are biological causes of SAD.
Although research continues to be conducted on the causes of social anxiety disorder, it is apparent that there are genetic and neurobiological causes. Of course, psychological modeling or environmental circumstances may also be a factor in the development of SAD; however, there is compelling evidence that chemicals in the brain also cause the anxiety. Research has also concluded that those who suffer from SAD are likely to have a family member with SAD or another anxiety disorder, which supports the hypothesis that there are genetic causes of SAD as well.
1) www.socialfear.com; Provides information on the neurobiological causes of social anxiety.
2) www.socialanxietyinstitute.org/dsm.html; Provides the American Psychiatric Association's DSM-IV definition of social anxiety disorder
3) www.psych.org/public_info/anxiety.cfm; Public information from the American Psychiatric Association.
4) http://socialanxiety.factsforhealth.org/whatcauses.html; A website that provides information on research conducted on the causes of social anxiety.
Information About Menopause Name: Diana DiMu Date: 2002-09-30 02:01:14 Link to this Comment: 3014 |
While many people may find the topic humorous, or even frightening, the subject of Menopause is one I had many questions about. My interest and curiosity in this subject stems from my first-hand experience with it: my mom. After much suspicion that she was going through "the change," my sisters and I recently discovered that she had stopped having her period for the last four years. Much to our surprise, we realized that many of our hunches (such as "Hot Flashes" and "Mood Swings") were correct; they were indeed some of the symptoms associated with the periods before and during menopause. I learned that she was taking progestin, a hormone supplement, as well as certain vitamins, to help against the symptoms associated with menopause. Suddenly her violent mood swings and recent irritability began to make more sense. My mom explained that for the first time in her life she had feelings of "blueness" or depression. Despite the realization that my mother was menopausal, I still did not understand what menopause actually is. What are some of its symptoms? Are they treatable? If so, how? Are there any dangers associated with menopause? If so, how can they be prevented or treated? Through my research I would like to take a closer look at these questions to gain a greater understanding of my mom's situation and help others who might also come across it with their own families and friends.
Many of the symptoms and effects of menopause are not actually a result of menopause but are associated with the period of change leading into menopause. The changes and effects are broken down into three stages: perimenopause, menopause, and post menopause:
Perimenopause:
Perimenopause is the period of gradual changes that lead into menopause. These changes often affect a woman's hormones, body, and feelings, and they can stop and start again over anywhere from a few months to a few years. This period is also known as the "climacteric" period. During this process, the ovaries' production of the hormone estrogen slows down. The hormone levels in a woman's body fluctuate, causing changes that are often similar to (although much more intense than) the changes associated with adolescence.
Menopause:
Menopause occurs when a woman has her last period. A woman's ovaries stop releasing eggs. This is usually a gradual process; however, it can happen all at once.
Post Menopause:
Post Menopause is simply the time after menopause. Women often have many health concerns, which result from menopause (2).
I would like to focus mostly on the period known as perimenopause because of its many symptoms, which often serve as metonymies for menopause on the whole. After looking at many of these symptoms I will take a more focused look at one of menopause's most well known symptoms and how it can be treated. I will also examine some of the other methods of treatment for menopause, as well as some of the dangers associated with menopause and its treatment.
Perimenopause can begin as early as age thirty; however, the average age is fifty-one. Some of the symptoms associated with perimenopause are as follows:
- Irregular menstrual periods
- Achy joints
- Hot flashes
- Temporary and minor decrease in ability to concentrate or recall information
- Changes or loss in sexual desire
- Extreme sweating
- Headaches
- Frequent urination
- Early wakening
- Vaginal dryness
- Mood changes or "swings"
- Insomnia
- Night sweats
- Symptoms/conditions commonly associated with pre-menstrual stress (PMS)
Perimenopause can involve any one or a combination of the above symptoms. The symptoms are often very unpredictable and disturbing, especially if a woman does not know they are related to menopause (2). These symptoms usually last between two and three years, though in some cases they can last between ten and twelve years. It is highly important to note that women in perimenopause have reduced fertility but are not yet infertile. There is still a chance of pregnancy during perimenopause, even if a woman's menstruation is highly sporadic (1).
One of the symptoms most commonly associated with perimenopause is "hot flashes." Hot flashes are sudden waves of upper body heat, ranging from mild to intense, that can last anywhere from thirty seconds up to five minutes. They are caused by rapid changes in hormone levels in the blood (2). The part of the brain that controls body temperature is the hypothalamus. During perimenopause, the hypothalamus can often mistake a woman's body temperature as too warm. This starts a chain reaction to cool her body down. Blood vessels near the surface of the skin begin to dilate and blood rushes to the surface of the skin in an attempt to cool the body's temperature. This often causes sweating, as well as producing a flushed red look in the woman's face and cheeks (1). Some women experience a rapid heartbeat, tingling in their fingers, or a cold chill after the hot flash. Seventy-five out of one hundred women have hot flashes. Half of them have at least one hot flash a day, while twenty have more than one a day. Most women experience hot flashes for three to five years before they taper off. Although some women may never have a hot flash, or only have them for a few months, others may experience them for years. There is no way to tell when they will stop. Many women suggest keeping a journal to record what triggers a hot flash so that an attempt to prevent the next one can be made (2). Some suggestions by the North American Menopause Society to help combat hot flashes include: wearing light layers of clothing, sleeping in a cool room, deep breathing and/or meditation, and regular exercise to fight stress and promote healthy sleep (1). However, prescription hormone treatment is the most common treatment for hot flashes. Replacement of the estrogen that is lost during menopause is the most effective treatment against hot flashes. Hormone replacement therapy is also a common treatment for many other symptoms of menopause (1).
Hormone Replacement Therapy (HRT) can come in the form of pills, patches, implants, or vaginal creams, to restore estrogen and other hormones that decrease in a woman's body during perimenopause and menopause. While many women find HRT extremely helpful, there are still many side effects to its use. Some women experience pre-menstrual stress (PMS); others experience vaginal bleeding, bloating, nausea, hair loss, headaches, weight gain, itching, increased vaginal mucus, or even corneal changes which may affect a woman's ability to wear contact lenses. Some more serious side effects put women at higher risk for breast cancer and heart disease. Some women use progestin, a hormone therapy without estrogen, which is a better replacement therapy for women at risk of blood clots. Progestin is, however, a less effective means of birth control.
Many women prefer to use non-hormone therapies to reduce the symptoms of perimenopause and menopause. Regular exercise is a strong recommendation to combat stress and help promote healthy sleeping patterns. A diet high in fruits and vegetables and low in saturated fats is also recommended. Many women try eating soy products to help combat hot flashes (3). Soy contains phytoestrogens, plant chemicals that produce effects similar to estrogen. Others suggest reducing caffeine, alcohol, spicy foods, and even hot beverages (2). Herbal remedies and homeopathy are also quite common solutions for women who prefer not to use hormones to treat menopause. There are many over-the-counter vaginal creams as well. Menopause Online suggests an increase in the amounts of vitamins E and B6. Research on vitamin E shows that it can help prevent heart attacks, Alzheimer's disease, and cancer. Vitamin B6 is involved in the production of brain hormones (neurotransmitters). It is often low in people with depression or those taking estrogen in the form of birth control or hormone replacement therapy. Lack of B6 and folic acid has been associated with osteoporosis. An increase in B6 has been shown to help fight heart disease and reduce the symptoms of PMS (3).
Breast Cancer and Heart Disease:
The risk of developing breast cancer increases with age. By the time a woman turns sixty, one out of every twenty-eight women will have developed breast cancer. Studies have shown that hormone treatment for ten to fifteen years may slightly increase a woman's chances of developing breast cancer.
Before the age of fifty, women are three times less likely to have a stroke or heart attack than men. Ten years after menopause, women are at the same level of risk as men. Whether this directly correlates with hormone replacement therapy is not clear (2).
Osteoporosis:
Loss of estrogen can lead to osteoporosis, or loss of bone mass. Women may lose between two and five percent of their bone mass per year for up to five years after menopause. Bones can become brittle and more susceptible to breaking. Bone loss begins around age thirty, which is why it is important for women to build bone mass early with weight-bearing exercise like walking, running, or weight lifting. It is also important to take a calcium supplement to help aid in developing bone mass. At least 1,000 mg of calcium per day is recommended for women, and 1,200 mg after menopause. Estrogen replacement therapy can also help in developing and retaining bone mass. There are currently newer non-hormonal medications that are effective as well (2).
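To make the figures above concrete, the short Python sketch below simply compounds the quoted two to five percent annual loss over five years and restates the calcium recommendations. It is an illustrative calculation only, not a clinical estimate, and the compounding assumption (the same percentage of remaining mass lost each year) is mine rather than the source's.

```python
# Illustrative arithmetic using the figures quoted above.
# Assumes the same percentage of remaining bone mass is lost each year.

def cumulative_bone_loss(annual_loss_rate, years=5):
    """Fraction of starting bone mass lost after `years` of compounding loss."""
    remaining = (1 - annual_loss_rate) ** years
    return 1 - remaining

print(f"Five-year loss at 2%/yr: {cumulative_bone_loss(0.02):.1%}")  # about 9.6%
print(f"Five-year loss at 5%/yr: {cumulative_bone_loss(0.05):.1%}")  # about 22.6%

# Calcium recommendations cited in the text (mg per day).
calcium_before, calcium_after = 1000, 1200
print(f"Additional calcium recommended after menopause: {calcium_after - calcium_before} mg/day")
```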
What then is the best way to treat the symptoms of menopause? I am not sure whether there is enough conclusive evidence to determine how harmful the use of hormone replacement therapy is. It is currently found to be an effective treatment with varying degrees of side effects. Loss of hormones like estrogen can result in loss of bone mass, as well as leaving a woman's body more susceptible to diseases like breast cancer and heart disease. How much of an effect does hormone replacement therapy have on these diseases, and how helpful or harmful is it? This is something I would like to research further before I offer a "better" hypothesis. Before concluding, I'd like to take a closer look at one more aspect of menopause that is often overlooked or misjudged: psychological changes.
Psychological Changes:
Although there is no scientific study to support that menopause contributes to true clinical depression, many women do suffer from "feeling blue" or being discouraged. During perimenopause, a woman's hormonal rhythm changes, and these hormonal changes often contribute to mood swings. For many women, the hormone changes of menopause coincide with other stresses of midlife. In addition, many women experience changes in their self-esteem and body image. Many women react to menopause by feeling overwhelmed, angry, out of control, or even numb. It takes someone in the medical profession to determine whether a woman is clinically depressed or just feeling the effects of menopause. Often women can combat their feelings of sadness with herbal remedies like Saint John's Wort, or with changes in lifestyle to reduce stress. Oftentimes, irritability is closely linked to disturbances in a woman's sleeping pattern, which can be addressed by treating hot flashes, among other things. Stress-reducing techniques like meditation and deep breathing are effective for some, while regular exercise, a healthy diet, getting enough sleep, and pampering oneself are all positive ways to help combat stress and sadness. Many women recommend talking to friends and family about menopause. Some even take this a step further and form self-help groups where women can speak to each other about their common experiences with menopause. Often, realizing there is another woman out there who understands what you are going through helps one feel less depressed and overwhelmed by menopause (1).
I think menopause, like depression, is subject to many preconceived notions from the public and is not necessarily well understood. It deserves more research and acknowledgement as a legitimate and substantial occurrence in a woman's life, one that merits more respect and understanding. It should not be something that needs to be hidden or made the butt of a joke. There is still much research to be done concerning menopause and its treatment. I think that once women feel they can openly address menopause, they will feel less stress and anxiety towards it.
WWW Sources
1) North American Menopause Society, Menopause Guidebook: Helping Women Make Informed Healthcare Decisions Through Perimenopause And Beyond.
2) Menopause - Another Change in Life.
3)Menopause Online.
4)National Osteoporosis Foundation.
5) National Center for Homeopathy.
6)National Breast Cancer Foundation.
The Socialization of Human Birth as Protection for Name: Chelsea Ph Date: 2002-09-30 02:14:58 Link to this Comment: 3015 |
The Socialization of Human Birth as Protection for Bipedalism
The topic of human birth is quite an interesting one. For example, why do we give birth the way we do? Why is labor so incapacitating to human females, and how has natural selection been a factor? I have investigated the way in which the process of human pregnancy has evolved over time, and found a strong link between the biological and the sociological. As humans evolved from quadrupeds to bipeds, the birthing process evolved from a private process to a social process. The socialization of human birth allowed bipedalism to flourish. If birth had remained private, the disadvantages to bipedalism in regards to the continuation of the species would have eventually necessitated a revision of the trait.
Comparing our birth process with that of our primate relatives gives a very logical argument for why human birth became a social process. "The baby monkey emerges facing toward the front of the mother's body so she can reach down with her hands and guide it from the birth canal...the human infant must undergo a series of rotations to pass through the birth canal without hindrance" (1). The sheer complexities of human birth naturally dispose it toward being a social act. Because of the necessities of bipedalism, the pelvis of a human female is much narrower than that of other primates, meaning that numerous physical complications arise and birth is physically more painful.
Growth of brain and cranial size among hominids also added to the difficulty of labor and delivery. The human brain triples between birth and adulthood, whereas the brain of other primates only doubles. "What humans seem to have accomplished is the trick of keeping the brain growing at the embryonic rate for one year after birth. Effectively, if humans are a fundamentally precocial species, our gestation is (or should be) 21 months. However, no mother could possibly pass a year old baby's head through the birth canal. Thus, human babies are born 'early' to avoid the death of the mother." (2). Humans have maintained a gestation length comparable to that of chimpanzees (the gestation for chimpanzees is approximately 230 to 240 days), despite the fact that the young are born in such different stages of development relative to their adult selves.
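The "21 months" in the quotation is just the sum of actual gestation and the year of continued embryonic-rate brain growth after birth. The sketch below restates that arithmetic, assuming the commonly cited nine-month human gestation (a figure not stated explicitly in the sources above), and converts the cited chimpanzee gestation to months for comparison.

```python
# Checking the "21 months" figure quoted above.
# The nine-month human gestation is an assumed common reference value.

human_gestation_months = 9               # assumed typical human gestation
embryonic_rate_growth_after_birth = 12   # one year of post-birth brain growth at the embryonic rate
print(f"Effective gestation if counted to the end of embryonic-rate growth: "
      f"{human_gestation_months + embryonic_rate_growth_after_birth} months")  # 21

# Chimpanzee gestation cited in the text, converted from days to approximate months.
for days in (230, 240):
    print(f"{days} days is roughly {days / 30.4:.1f} months")
```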
Another very practical argument for the necessity of socialization to bipedal survival is the fact that a human female is physically unable to assist herself or the baby during birth if something goes wrong. "I suggest that early hominid females who sought assistance or companionship at the time of delivery had more surviving and healthier offspring than those who continued the ancient mammalian pattern of delivering alone. Thus, the evolutionary process itself first transformed birth from an individual to a social enterprise..." (1). In cases when the baby is breeched, or when other complications arise while the baby is in the birth canal, assistance from another can be the difference between life and death.
Another danger in birthing alone is that most women feel the need to push during contractions before the cervix is properly dilated (10 cm), especially in the case of a longer labor or a breech. This can result in the baby's head becoming trapped in the birth canal, necessitating a rapid delivery to keep the child from attempting to breathe (as it will once its body is exposed to cooler temperatures), while increasing the risk of internal tearing of the mother's cervix and/or uterus with heavy bleeding, damage to other organs, and death (3). Experience must have quickly taught early hominids that assisted birth was best.
Though the term "midwife" was not coined during the medieval ages, the role it describes is almost as old as bipedalism. Another part of the argument for this are references made to women in this capacity from Greek and Roman times, in medical documents, Egyptian papyri, the Bible, and Hindu scrolls. The documents indicate these women as having an invaluable, but more importantly established part of human society, already subject to its own set of rules (4).
Beyond midwifery, there are many factors that have been working to change the process of human birth. One factor is the development of more effective medicines for pain, tools such as forceps to use during delivery, and the advent of written records so that future generations could expound more easily upon the work of others (which is how the practice of caesarian section became such a viable option), even across certain geographical boundaries. Another was the changing diet and its effect on the human body. I would argue that as these new factors came into play, natural selection began gradually to be overshadowed.
As man became able to control food sources more effectively through farming (and consequently became less mobile), new foods became staples in the human diet. "Increased consumption of carbohydrate-rich foods, decreased mobility, and nursing at infrequent intervals all interact to make this possible, enabling women to conceive within 10-15 months of the last birth. Weaning earlier is made possible by the availability of appropriate infant foods in the form of cereal grains and, in some places, milk from domesticated animals. Ultimately the birth interval is reduced to approximately 2 years resulting in population increase." (2).
As the success of human birth and the ability to conceive more frequently in a lifetime became greater, the obstacles bipedalism presented were surmounted. Increased birth rates meant increased variation, providing a larger pool of genetic traits to be selected for or against. Early hominids used their intelligence to compensate for deficiencies in speed and agility. Birth evolved from a private to a social process in order to increase the rates of survival for both mother and child. With time, this socialization led to the development of various techniques and technologies capable of compensating for the physical limitations on birth in bipeds.
1) Bernard Bel, a quote from "Evolutionary Obstetrics" (In W. R. Trevathan, E.O. Smith & J.J. McKenna, eds., Evolutionary Medicine. New York: Oxford University Press, 1999, pg. 183-207), from Bernard Bel's "New Directions". An interesting site with intelligent arguments concerning all aspects of health, including the "medicalization" of human birth.
2) Glenn Morton's Creation/Evolution Page, Morton, G.R. "The Curse of a Big Head." Arguments as to the correlation between increased brain size and human sweat glands, pains during childbirth, and need for clothing. Inspired by Genesis 3:16-21, when God punishes Adam and Eve for eating from the tree of knowledge. The argument is supported by fossil record and other biological/anthropological evidence, and is, on the whole, not bad.
3) Glenn Morton's Creation/Evolution Page, a quote from Wenda R. Trevathan's Human Birth, (New York: Aldine de Gruyter, 1987), p. 92 from G.R. Morton's "The Evolution of Human Birth", an article providing information in support of the theory that human birth has not evolved significantly since Homo rudolfensis.
4) Parkland School of Nurse Midwifery, a concise and informative page on the history of midwifery.
The Health Benefits of Fasting Name: Will Carro Date: 2002-09-30 04:14:07 Link to this Comment: 3017 |
There has been much contention in the scientific field about whether or not fasting is beneficial to one's health. Fasting is an integral part of many of the major religions, including Islam, Judaism and Christianity. Many are dubious as to whether the physiological effects are as beneficial as the spiritual benefits promoted by these religions. There is, however, a significant community of alternative healers who believe that fasting can do wonders for the human body. This paper will look at the arguments presented by these healers in an attempt to raise awareness of the possible physiological benefits that may result from fasting.
Fasting technically commences within the first twelve to twenty-four hours of the fast. A fast does not chemically begin until the carbohydrate stores in the body begin to be used as an energy source. The fast will continue as long as fat and carbohydrate stores are used for energy, as opposed to protein stores. Once protein stores begin to be depleted for energy (resulting in loss of muscle mass) a person is technically starving. (1)
The benefits of fasting must be preceded by a look at the body's progression when deprived of food. Due to the lack of incoming energy, the body must turn to its own resources, a function called autolysis. (2) Autolysis is the breaking down of fat stores in the body in order to produce energy. The liver is in charge of converting the fats into a chemical called a ketone body, "the metabolic substances acetoacetic acid and beta-hydroxybutyric acid" (3), and then distributing these bodies throughout the body via the blood stream. "When this fat utilization occurs, free fatty acids are released into the blood stream and are used by the liver for energy." (3) The less one eats, the more the body turns to these stored fats and creates these ketone bodies, the accumulation of which is referred to as ketosis. (4)
Detoxification is the foremost argument presented by advocates of fasting. "Detoxification is a normal body process of eliminating or neutralizing toxins through the colon, liver, kidneys, lungs, lymph glands, and skin." (5). This process is precipitated by fasting because when food is no longer entering the body, the body turns to fat reserves for energy. "Human fat is valued at 3,500 calories per pound," a number that would lead one to believe that surviving on one pound of fat every day would provide a body with enough energy to function normally. (2) These fat reserves were created when excess glucose and carbohydrates were not used for energy or growth, not excreted, and therefore converted into fat. When the fat reserves are used for energy during a fast, the chemicals stored in the fatty acids are released into the system and then eliminated through the aforementioned organs. Chemicals not found in food but absorbed from one's environment, such as DDT, are also stored in fat reserves and may be released during a fast. One fasting advocate tested his own urine, feces and sweat during an extended fast and found traces of DDT in each. (5)
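The 3,500-calories-per-pound figure invites a quick back-of-the-envelope check. The sketch below works through it, assuming a roughly 2,000 kcal daily energy requirement; that daily figure is a common reference value supplied here for illustration, not a number from the sources cited in this paper.

```python
# Back-of-the-envelope check of the fat-as-fuel figure quoted above.
# The 2,000 kcal/day requirement is an assumed reference value, not from the cited sources.

CALORIES_PER_POUND_OF_FAT = 3500   # figure quoted in the text
DAILY_ENERGY_NEED_KCAL = 2000      # assumed typical daily requirement

pounds_needed_per_day = DAILY_ENERGY_NEED_KCAL / CALORIES_PER_POUND_OF_FAT
days_covered_by_one_pound = CALORIES_PER_POUND_OF_FAT / DAILY_ENERGY_NEED_KCAL

print(f"Fat needed to cover one day's energy: {pounds_needed_per_day:.2f} lb")   # about 0.57 lb
print(f"Days one pound of fat could cover:    {days_covered_by_one_pound:.2f}")  # about 1.75 days
```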
A second claimed benefit of fasting is the healing process that begins in the body during a fast. During a fast, energy is diverted away from the digestive system, due to its lack of use, and towards the metabolism and immune system. (6) The healing process during a fast is precipitated by the body's search for energy sources. Abnormal growths within the body, tumors and the like, do not have the full support of the body's supplies and therefore are more susceptible to autolysis. Furthermore, "production of protein for replacement of damaged cells (protein synthesis) occurs more efficiently because fewer 'mistakes' are made by the DNA/RNA genetic controls which govern this process." A higher efficiency in protein synthesis results in healthier cells, tissues and organs. (7) This is one reason that animals stop eating when they are wounded, and why humans lose hunger during influenza. Loss of appetite has also been observed in illnesses such as gastritis, tonsillitis and colds. (2) Therefore, when one is fasting, the person is consciously diverting energy from the digestive system to the immune system.
In addition, there is a reduction in core body temperature. This is a direct result of the slower metabolic rate and reduced general bodily functions. Following a drop in blood sugar level and the use of the reserves of glucose found in liver glycogen, the basal metabolic rate (BMR) is reduced in order to conserve as much energy within the body as possible. (2) Growth hormones are also released during a fast, due to the greater efficiency in hormone production. (7)
Finally, the advantage of fasting with the most scientific support is the feeling of rejuvenation and extended life expectancy. Part of this phenomenon is caused by a number of the benefits mentioned above. A slower metabolic rate, more efficient protein production, an improved immune system, and the increased production of hormones all contribute to this long-term benefit of fasting. In addition to the human growth hormone that is released more frequently during a fast, an anti-aging hormone is also produced more efficiently. (7) "The only reliable way to extend the lifespan of a mammal is under-nutrition without malnutrition." (5) A study performed on earthworms demonstrated the extension of life due to fasting. The experiment was performed in the 1930s by isolating one worm and putting it on a cycle of fasting and feeding. The isolated worm outlasted its relatives by 19 generations, while still maintaining its youthful physiological traits. The worm was able to survive on its own tissue for months. Once the size of the worm began to decrease, the scientists would resume feeding it, at which point it showed great vigor and energy. "The life-span extension of these worms was the equivalent of keeping a man alive for 600 to 700 years." (8)
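The "600 to 700 years" comparison is a simple proportional scaling. The sketch below works the proportion backward, assuming a roughly 75-year human lifespan; that lifespan is a reference value supplied for illustration, not one taken from the cited study.

```python
# Proportional-scaling check of the lifespan comparison quoted above.
# The 75-year human lifespan is an assumed reference value, not taken from the cited study.

HUMAN_LIFESPAN_YEARS = 75
equivalent_low, equivalent_high = 600, 700  # "equivalent" human ages quoted in the text

fold_low = equivalent_low / HUMAN_LIFESPAN_YEARS
fold_high = equivalent_high / HUMAN_LIFESPAN_YEARS
print(f"Implied lifespan extension: {fold_low:.1f}x to {fold_high:.1f}x the normal span")  # 8.0x to 9.3x
```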
In conclusion, it seems that there are many reasons to consider fasting as a benefit to one's health. The body rids itself of the toxins that have built up in our fat stores throughout the years. The body heals itself, repairing damaged organs, during a fast. And finally, there is good evidence to show that regulated fasting contributes to longer life. However, many doctors warn against fasting for extended periods of time without supervision. There are still many doctors today who dispute all of these points, claim that fasting is detrimental to one's health, and have evidence to back their statements. The idea of depriving the body of what society has come to view as so essential to our survival in order to heal continues to be a topic of controversy.
1)"Dr. Sniadach True Health Freedom 3
4)"Nutriquest, March 11th, 2000 Ketosis and Low Carbohydrate Diets"
5)"WebMD Detox Diets: Cleansing the Body"
Water Births Name: Lawral Wor Date: 2002-09-30 05:13:32 Link to this Comment: 3018 |
Over the past few years, there has been a resurgence of interest in homebirths and other "alternative" ways of giving birth. There has been a rise, if not in the actual incidence of births involving a midwife instead of an obstetrician, then at least in the coverage of midwives and their art in the news and in parenting magazines. The debate has gone back and forth over whether midwives are reliable sources of support when having a baby. This debate has become so common that it is becoming part of our collective culture, most recently in the Oprah Book Club book Midwives: A Novel by Chris Bohjalian (1). One of the reasons their credibility has been questioned is that midwives are more likely to participate in an alternative birthing style, such as water births. The debate around water births has been almost as lively as that around midwives themselves. Both the supporters and opponents of the practice are passionate about their arguments, both of which can be very convincing.
The number of hospitals that offer water births has risen in the last decade, but most are still performed by midwives in birthing centers or in the home. Even with the growing number of hospitals offering this service, "the American College of Obstetricians and Gynecologists has not endorsed the technique. It says there is not enough data to prove safety" (2). Most of the studies on water births have been conducted in the United Kingdom, where the practice is offered in most hospitals. While water births are still not the norm there, having the mother rest in a tub of warm water during labor is very common.
Spending part of the labor process in water has proved to be beneficial even if the mother leaves the tub before actually giving birth. "Warm water helps a laboring woman's muscles relax-which often speeds labor. When a woman is more relaxed and comfortable, the uterus functions optimally, reducing stress to both mother and baby. Water appears to lower a woman's stress and anxiety level, thereby lowering stress-related hormones which cross the placenta. Many women repeatedly report the wonderful pain relieving properties of water" (3).
The benefits of a water birth for the mother go beyond the simple relaxing qualities of water. The tissues of the vagina become much more elastic in water, making the actual birth easier for the mother. "A 1989 nationwide survey published in The Journal of Nurse Midwifery on the use of water for labor and birth reported less incidents of perineal tearing with less severity" (4). This reduced stress on the vaginal tissues and birth canal translates into less stress during birth for the child. Combined with the relaxed uterine muscles, water births are considerably less physically stressful for both the mother and the baby.
Even with all of these benefits, many hospitals in the United States still do not endorse, let alone offer, water births in their maternity wards. Attached to these benefits are some very serious risks. The same warm waters that help to relax muscles during labor can be incredibly harmful to the mother after she has given birth. The warm water can keep the muscles relaxed after the delivery of the placenta and prevent blood clotting. While the mother is immersed in water, it is also harder to tell how much blood is being lost, as it is diluted in the bath water. "Also, if the placenta is delivered under water the combination of vasodilatation and increased hydrostatic pressure could theoretically increase the risk of water embolism" (5). These risks are all still in the theoretical stage, but they are risks nonetheless.
There are also risks for the baby in water births. While the British Medical Journal reports that there is a 95% confidence in live water births, they do concede that there is a risk of water aspiration (6). Instinctively, babies should not breathe in until they are confronted with air. They should continue to receive oxygen through the umbilical cord until they start to breathe on their own or the cord is cut. Studies have shown, however, that "babies who do not get enough oxygen during childbirth [due to stress in the birth canal or placement of the umbilical cord] may gasp for air, risking water to enter their lungs" (2). A more preventable, but still serious, problem is the increased chance of snapping the umbilical cord during a water birth. There have been no studies on why the risk of snapping the umbilical cord is higher among water births, but it is speculated that the increased movement involved in bringing the child to the surface of the water after the birth is to blame (6).
The debate concerning the safety of water births will continue as the practice becomes increasingly popular in the United Kingdom and elsewhere. There are no studies that directly link water births to the risks I have outlined; they are deemed theoretical or consequential. One must weigh the benefits against the possible risks before choosing to undergo a water birth. As the practice becomes more accepted and studied, more conclusive studies will be carried out, making the decision easier for expectant mothers.
1)Bohjalian, Chris. Midwives: A Novel. Vintage Books, 1998.
2)"Water birth drowning risk." BBC, August 5, 2002.
3)Birth and Women's Center - Water Births.
4)"Why Water." Global Maternal/Child Health Association and Waterbirth International.
5)LMM Duley MRCOG, Oxford. "Birth in Water: RCOG Statement No 1." Royal College of Obstetricians and Gynaecologists. January 2001.
6)"Perinatal mortality and morbidity among babies delivered in water: surveillance study and postal survey." British Medical Journal. August 21, 1999.
The True Importance of Moisturizers to Healthy Ski Name: Margot Rhy Date: 2002-09-30 10:23:18 Link to this Comment: 3024 |
Facial skin care products are promoted vigorously in the cosmetic industry with claims of tremendous benefit to good, healthy-looking skin. Consumers search for rejuvenation and protection of their largest organ, important on the basic biological level that it acts as a barrier, shielding the body from the environment, as a temperature regulator, as a basic immune defense, and as the sensory organ. However, these consumers generate a huge commercial business for reasons purely aesthetic; the face is simply what others notice first in personal presentation to the world. Functioning within a need to find perfection, consumers crave an easy solution to removing blemishes, fine lines, wrinkles, dark spots, and all other types of skin care problems. Therefore, the question of how important these products, especially moisturizers, are to healthy skin and what separates these moisturizers becomes worthy of understanding in a market that carries so many different kinds of products, all with different ingredients and all advertising the same positive outcomes.
There exists in the market an elementary understanding of what a proper skin care regimen should consist of, promoted by all companies operating at all price levels. Basically, lines carry products based on different skin types, oily, dry, combination, or sensitive, and then divide treatment into the basic steps of exfoliation, treatment, hydration, and protection (1). Skin moisturizers, then, are an essential step in this routine and they are designed for different skin types to soften the skin, to lubricate "without blocking pores and smothering the skin (2)."
A moisturizer's base is some type of emulsion of oil and water with another agent, altogether acting to limit the natural evaporation of water from the skin. When the product is an emulsion of water in oil, the oil is more dominant and serves moderately dry skin effectively. When the product is based on oil in water, it is less moisturizing and is formulated for normal to slightly dry skin. Furthermore, products that are purely oil based are best only for extremely dry skin, and completely oil-free products are best for oily to normal skin types (2).
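The pairing of formulation base and skin type described above can be summarized as a simple lookup. The Python sketch below merely restates that paragraph; the category labels and the helper function are illustrative, not an industry standard.

```python
# Summary of the formulation-to-skin-type pairings described in the paragraph above.
# Labels are illustrative, not an industry standard.

MOISTURIZER_BASES = {
    "water-in-oil emulsion": "moderately dry skin (oil-dominant, more moisturizing)",
    "oil-in-water emulsion": "normal to slightly dry skin (less moisturizing)",
    "purely oil-based": "extremely dry skin only",
    "oil-free": "oily to normal skin",
}

def bases_for(skin_keyword):
    """Return the bases whose target description mentions the given skin keyword."""
    return [base for base, target in MOISTURIZER_BASES.items() if skin_keyword in target]

print(bases_for("dry"))   # the emulsions and the purely oil-based formulas
print(bases_for("oily"))  # the oil-free formulas
```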
Typically, the other active ingredient is another kind of oil. Natural and essential oils are chosen depending on what vitamins, antioxidants, essential fatty acids, and fragrances they bring to the moisturizer (2). These ingredients can be seen in various advertisement campaigns and on various labels, chosen for their function and cost-efficiency. Moisturizers may also contain humectants, which prevent water loss by attracting moisture to the skin. These are synthetic forms of phospholipids, which exist naturally as an "evaporative and protective barrier in the outer layer of the epidermis" (2). However, these synthetics, forms of glycols such as propylene glycol or glycerin, only work well in environments with sufficient humidity in the air to draw from. They can also cause irritation or inflammation because they serve as a barrier, keeping moisture in but also preventing moisture from entering externally (3). Another ingredient may be liposomes, a relatively new development in skincare formulations. These come from phospholipids, have an aqueous core, and can carry vitamins, drugs, and other active ingredients in their phospholipid layers for delivery to the dermis. Since they have a cell membrane-like structure, "they can readily pass through the epidermis and are thought to be accepted into cells of the dermis by membrane fusion" (2).
What becomes apparent is the amount of chemistry, biology, and technology that enters into the industry of beauty. This serves as a reflection of consumers' desire for the most effective products science can give them and their need for the best skin care solutions they can buy easily, with little consideration given to more personal factors that affect healthy skin.
As a result of this focus on consumerism and easy solutions, little attention is given to the idea that lifestyle choices manifest themselves in our physical health and appearance. Nutrition and rest are two basic and easy-to-overlook contributors to healthy skin. Exercise helps the skin since it works to "maintain a clear circulation, calming the nerves and promoting a deeper, more revitalizing sleep" (3). Water is obviously necessary for the skin's maintenance of the right amount of moisture, as well as for a person's general good health (3). These make positive contributions to healthy skin, while the following are choices that prove damaging. Smoking deprives skin tissue of oxygen and nutrients through the effects of carbon monoxide and nicotine in the circulatory system, giving a pale complexion and early wrinkles. Caffeine and alcohol dehydrate the skin, the latter being particularly damaging by impeding circulation, removing moisture and nutrients, and even leading to broken or distended capillaries (3).
Those who market skin care products, ranging from medical doctors to make-up consultants to fashion houses, gain definite monetary profit by promoting only products and by not acknowledging other factors in good skin care, thus adding complication to the issue. Do consumers really need their products to improve their skin's health? Or is this need merely generated by the beauty industry and dermatology field? For consumers, clearly it becomes a matter of buying a better appearance. They have made this evident by accepting what they are told and by allowing skin care products to become a billion dollar industry (3) ; this market exists because people want it to.
Moisturizers can be beneficial to the skin, but those who support this most strongly are those who profit from them. They can replenish moisture to the skin but may lead to damage as well if a consumer does not understand the role of specific ingredients. They can be effective, but lifestyle choices cannot be overlooked in favor of them. Perhaps what needs to be understood by everyone willing to spend what they consider the right amount on what they consider the right product is that these lotions and creams cannot serve alone as the key factor in healthy skin.
1)Dermadoctor Website, archived article on the Dermadoctor website, an online source for skincare
2)Altrius Biomedical Network, supported by dermatology community
3) American Academy of Dermatology website, archived article on the American Academy of Dermatology website
Why Stress Affects Us Physically Name: Kyla Ellis Date: 2002-09-30 11:13:45 Link to this Comment: 3026 |
Kyla Ellis
Biology 103
Web Paper
9-30-02
Why Stress Affects Us Physically
We deal with stress daily. In our every-day vocabulary "stressed" becomes an emotion as in: "How are you?" "I'm feeling happy, how are you?" "I'm feeling stressed." It is a negative word, and is not an emotion we aspire to. But if "stress" comes from the nervous system, why does it affect our body? Why does stress cause us to lose sleep, break out, and become depressed? In my paper, I will attempt to explain what stress is, how it happens, and then what it does to our bodies and why it does those things.
First, let me clarify: stressors are internal or external factors that produce stress. Stress is the subjective response to those factors (10). All humans and animals have developed internal mechanisms through evolution that allow our bodies to react to a stressor. The term "stress" has a negative connotation, but stress can also be a positive thing: performers going onstage, for example, rely on it to provide the adrenaline rush necessary to help them perform. Most stress is not due to life-threatening situations, but rather to every-day occurrences such as public speaking or meeting new people. I'd like to point out also that the intensity of stress depends on how it is perceived. For example, a looming deadline can be, for some people, an opportunity to manage their time more efficiently, while for others it can be the end of the world.
There are four categories of stress. The first is Survival Stress. The phrase "fight or flight" comes from a response to danger that people and animals have programmed into themselves. When something physically threatens us, our bodies respond automatically with a burst of energy so as to allow us to survive the dangerous situation (fight) or escape it altogether (flight). The second is Internal Stress, which occurs when people make themselves stressed. This often happens when people worry about things that can't be controlled or put themselves in already-proven stress-causing situations. The third category is Environmental Stress. It is the opposite of Internal Stress; it is caused by the things surrounding us that could cause stress, such as pressure from school or family, large crowds, or excessive noise. The fourth category is Fatigue and Overwork. This kind of stress builds up over a long time and can take a hard toll on your body. It can be caused by working too much or too hard at a job, school, or home. It can also be caused by not knowing how to manage your time well or how to take time out for rest and relaxation. This can be one of the hardest kinds of stress to avoid because many people feel it is out of their control (3).
One site I looked at compared a person undergoing stress to a country whose stability is threatened. The country reacts quickly and puts out a number of civilian and military measures to protect the country. On the one hand, the readiness to quickly respond in such a way is vital to the long-term survival of the nation; on the other hand, the longer this response has to be maintained, the greater the toll will be on other functions of the society(10).
Stress affects us physically, emotionally, behaviorally and mentally. When there is a threat, the body physically reacts by increasing the adrenaline flow, tensing muscles, and increasing heart rate and respiration. Emotions, such as anxiety, irritability, sadness and depression, or extreme happiness and exhilaration come out. Behaviorally, one might possibly experience reduced physical control, insomnia, and irrational behavior. Mentally, stress may severely limit the ability to concentrate, store information in memory and solve problems ("Test anxiety" happens because the brain has a reduced ability to process information while under the effects of stress) (1).
Has anyone ever told you they were stressed out because of acne? The fact that they are stressing out about it might be making the problem worse. How can what happens on your face be related to what happens in your central nervous system? Acne forms when oily secretions from glands beneath the skin plug up the pores. There is a stress hormone known as corticotrophin-releasing hormone (CRH). An increase in CRH signals the body's oil glands to produce more oil, which can exacerbate oily skin, thus leading to acne (4)(5).
If people are stressed, they may lose sleep due to the fact that there is something on their mind, making it hard for them to stop worrying about it long enough to fall asleep. However, stress hormones also make it harder to sleep. CRH has a stimulating effect and when it is produced in the body in greater quantities, it makes the person stay awake longer and sleep less deeply. In this way, stress is also linked to depression, because people who do not get enough "slow-wave" sleep may be more prone to depression(6).
The hippocampus is an important part of our brain. It is responsible for consolidating memories into a permanent store (9), and it also signals when to shut off production of the stress hormone cortisol. However, cortisol can damage the hippocampus. A damaged hippocampus causes cortisol levels to get out of control, which compromises memory and cognitive function, creating a vicious cycle (7).
In conclusion, stress is not simply a process that overtakes the central nervous system. It affects the body in many different and seemingly unrelated ways. Living a healthy lifestyle includes striking a good balance between work, down time, and sleep, which should help reduce the effects of stress.
1)Coping with the Stress of College Life
2)Staying Well
3)Understanding and Dealing with Stress
4)Science News
5)CNN.com
6)Stress and Sleep Deprivation
7)The Cortisol Conspiracy and Your Hippocampus
8)Stress Management
9)The Hippocampus of the Human Brain
10)The Neurobiology of Stress and Emotions
One Last Call for Alcohol Name: Elizabeth Date: 2002-09-30 13:05:50 Link to this Comment: 3029 |
For centuries, man has relied on alcohol as a relaxant, often employing its
sedative qualities to induce sleep. However, while a stiff drink before bed may
initially help one fall asleep, recent research shows that alcohol adversely affects
sleep patterns. Not only do recreational drinkers experience disruptions in their
nightly sleep, but alcoholics also damage their ability to obtain quality sleep, perhaps
irreparably. Also, sleep problems such as insomnia may cause a person to abuse alcohol,
leading to a vicious cycle that corrupts their ability to sleep peacefully. In
addition, sleep problems may pave the way for an alcoholic's relapse, as they seek out a
familiar form of relaxation. The harmful effects of alcohol outweigh the initial sleep
inducing benefits, as an overindulgence in alcohol may result in permanent difficulties
with sleep.
Sleep takes place in two distinct stages. The first, called slow wave sleep
(SWS), is a deep, restful sleep characterized by slowed brain waves. The second is known
as rapid eye movement (REM) sleep, a less restful state associated with dreaming.
Alcohol affects sleep patterns by interfering with the monoamine neurotransmitters which
control the body's ability to sleep peacefully (1). In those
individuals who drink alcohol but do not abuse the substance, a drink or two before bed
helps lessen the amount of time needed to fall asleep. However, contrary to popular
opinion, alcohol will not promote a good night's sleep. Even a trace presence of alcohol
in the bloodstream disrupts the second half of a person's sleep cycle, leading to
wakefulness in the middle of the night and an inability to fall back to sleep. Such
disturbances lead to daytime fatigue, which can affect a person's ability to undertake
such everyday tasks as driving a car. Alcohol consumed up to six hours before bedtime
can still disturb one's sleep cycle that evening. Unfortunately, the majority of alcohol
consumption takes place from dinner on, leaving many susceptible to a fitful night
(4).
Alcohol also has the tendency to exaggerate existing sleep problems, such as
insomnia and sleep apnea. An insomniac, a person who has difficulty sleeping, may seek
the aid of alcohol in order to fall asleep, but a reliance on alcohol leads to
wakefulness later in the night and a compounded inability to fall back to sleep. Sleep
apnea, a breathing disorder in which the pharynx, or the upper air passage, constricts
during sleep, affects the body's ability to get enough oxygen. Usually, the shock of not
being able to breathe wakes the person, but if she or he has been drinking, the body may
not react to the situation as quickly as is necessary. As a result, those with sleep
apnea run a significant risk when they consume too much alcohol. Even by ingesting as
few as two alcoholic drinks a night, those suffering from sleep apnea place themselves at
a much higher likelihood for heart attack, stroke, or death by suffocation
(2).
The effects of alcohol on sleep increase among those who abuse the substance on
a regular basis, otherwise known as alcoholics. Insomnia affects 18% of the alcoholic
population, a higher percentage than is found in the population at large (1). As tolerance develops, the sedative effects of the substance lessen significantly, and alcohol no longer enables one to fall asleep more quickly. In fact, consuming too much alcohol makes it increasingly difficult to fall
asleep. Once sleep finally sets in for an alcoholic, the time spent in both SWS and REM
modes is reduced, resulting in an overall reduction of sleep time. While studying
recovering alcoholics during their periods of withdrawal, researchers observed an
increase in the amount of time spent in SWS and REM sleep with a corresponding increase
in the amount of time needed to fall asleep (5). However, although SWS
and REM times were increased, they were not restored to their optimal levels. Research
indicates that the damage an alcoholic does to his or her system while abusing the
substance may be irreparable (1). In any case, sleep patterns are
significantly affected for at least two years, if not for life.
The reverse side of alcohol and sleep problems is the effect an inability to
sleep may have on one's reliance on alcohol. Insomniacs may at first employ a drink
before bed as a sleep aid, noticing its relaxing properties. However, as alcohol in fact
worsens a person's ability to sleep, this initial benefit will wear off as time
progresses, leading the insomniac to drink more and more in order to produce the desired
effect. Unfortunately, as discussed earlier, an over-consumption of alcohol actually
reduces its sedative qualities and increases the difficulty of being able to fall asleep,
a dangerous side effect for a person who already has problems falling asleep. If the
insomniac becomes too reliant on alcohol for sleep purposes, he or she may develop a dependence on alcohol that could progress into alcoholism, permanently disrupting sleep patterns and interfering with the ability to perform simple tasks.
Also, an inability to sleep may cause a recovering alcoholic to seek the familiar
comforts of alcohol, triggering a relapse (4).
Alcohol related sleep problems affect more than just adults. Drinking while
pregnant or nursing has been shown, besides other damaging effects, to alter the sleep
patterns of a newborn baby. The baby absorbs the alcohol into its bloodstream just as an
adult would, leading to wakefulness throughout the night, frightening dreams, and a
decrease in the restful quality of the baby's sleep. Adequate rest is essential for a
developing child. In turn, interruptions to a healthy sleep cycle can cause serious
detriments to the baby, including such dangers as fetal alcohol syndrome (3).
Alcohol affects the ways humans sleep. Even a small drink six hours before bed
interferes with the resting process. The dangers and side effects of disrupted sleep
increase with alcohol abuse, and some sleep problems may lead to alcoholism, or serve as
an excuse for a recovering alcoholic's relapse. These negative effects of alcohol on
sleep can extend even to small children by way of their mother. It's best to be aware of
these risks before overindulging in alcohol, especially if one has a condition such as
sleep apnea, which may be aggravated by alcohol, sometimes with fatal results.
Manatees and the Human Fault Factor Name: Katie Camp Date: 2002-09-30 13:06:12 Link to this Comment: 3030 |
Manatees appeared on earth during the Eocene period, about 50 to 60 million years ago. In general, adult manatees are about twelve feet long and weigh 1000 to 1500 pounds (1). They require warm water and a supply of "submerged, emergent, and floating plants" (3) such as hydrilla, turtle grass, ribbon grass, and manatee grass (4) for food, shelter, and breeding grounds. Manatees are very gentle sea mammals whose curious personality leads humans to perceive them almost like companion pets. Although manatees are playful, "scratch[ing] themselves on poles, boat bottoms, and ropes," they "do not seek interaction" (4) with people. Their humble mannerisms and generally slow reactions do not mesh well with the increasingly fast pace of human life on Florida waterways. "More than 90% of direct manatee mortality" caused by humans is a result of boat accidents (5). Most often manatees are hit in their state of "torpor," or rest, when they float near the surface of the water in the direct line of contact with boat propellers. Within the past few years the number of manatees killed by watercraft has remained steady, in the high seventies and low eighties. In 1999, a record high of 82 manatees were killed in such collisions.
In some "controversial protection measures" (5) boats have posted speed limits on the water and boaters are advised for certain behavior, like to wear polarized sunglasses so that manatees close to the surface of the water can be seen more easily. One might assume then that the resulting boat collision deaths would decrease. This year, 2002, however, has already proved this idea wrong. Reports from the Florida Marine Research Institute state that this year up as recorded to September 27, 83 manatees have already been killed by human contact (6).
Collisions with watercraft are not the only cause of manatee deaths each year; the phenomenon of algae blooms, otherwise called red tide, accounts for many others. Harmful algal blooms (HABs) are caused by the life cycle of algae in the sea. The germination of algal cysts can only happen in warm temperatures with increased light (7). This obviously overlaps with the manatees' habitat. When the cysts break open and release a single reproducing cell, it then "blooms": the cells divide exponentially. Their concentration can cause toxicity in the water, accumulating in "dense, visible patches near the surface of the water" (7). The species that commonly poisons manatees in the Gulf of Mexico is Gymnodinium breve.
Even though we want to attribute the loss of manatees to algal blooms as a natural phenomenon, it is obvious that certain human actions contribute to the worsening HAB situation. HABs have existed for years, yet the resulting deaths have increased in the past few decades, which poses the question of why manatees are now so affected by them. Human interaction with the environment has inevitably caused changes that have in turn affected the manatees' response to HABs. Due to pollution, algal blooms have become more concentrated. Pollution also decreases resistance in manatees. Construction has destroyed wetlands that used to filter pollution. That same construction leaves less and less habitat suitable for the manatee population, so more manatees congregate in the same area (8). The most popular area has become Crystal River, which flows about seven miles from Florida into the Gulf of Mexico. The manatees' "need for fresh or low salinity drinking water" pulls them toward Crystal River, and its many major springs also provide warm water, even when the waters of the gulf turn cold for the winter (1). Therefore, when two hundred or more manatees migrate up Crystal River every winter, their presence in close quarters allows infectious diseases and toxins, like red tide, to spread more quickly than they otherwise would.
These issues of manatee endangerment in Florida spark debate over why manatees are considered endangered and how they should be protected. Currently the approximately 2500 manatees of Florida are protected under the Endangered Species Act of 1973 and the Federal Marine Mammal Protection Act of 1972 (3). Many counties in the state of Florida attempt to further protect manatees by developing their own regulations. The population of Florida, however, argues over the development of better protection for manatees. This can currently be seen in the petition submitted by the Coastal Conservation Association of Florida to re-evaluate the manatee's status as an endangered species under the Florida Endangered Species Act (2). This argument by certain groups in Florida over the protection of manatees and their definition as an endangered animal is a defensive reaction to humans' direct involvement in destroying the species.
Clearly manatees have existed for millions of years and have undoubtedly changed over time. Their gestation period of thirteen months means that they only reproduce every two to five years, so their change occurs slowly (3). It is therefore doubtful that manatees could respond to new developments, such as increased red tide concentration, by evolving different, more efficient immune systems to survive this destruction. With numbers like "20% of the total population d[ying] in 1996 alone," there is "serious doubt as to the manatees' survival into the next century" (4). Obviously, humans have not killed off every manatee through direct collision accidents. In fact, the eight categories of manatee death include three human-related causes, such as boat collisions or death by a flood gate, and five categories of "natural" causes, such as diseases and toxins (9). While manatees may not be killed by human contact the majority of the time, 44% of them are. And even though the majority of deaths are declared "natural," human involvement and interaction with the environment has changed and affected the environment in which the manatees live. Therefore, no matter what the particular stated cause of a manatee's death, their deaths, and so their endangerment as well, can be linked to humans.
World Wide Web Sources
1)Manatee Introduction and Background, part of The Florida Water Story homepage.
2)The Future of the Florida Manatee: An Ongoing Concern, recent opinion piece written on the petition to reconsider manatees' placement under the Florida Endangered Species Act.
3)About manatees..., a general description and explanation of manatees and their behavior.
4)Manatees, People---&The Buddy System, part of The Florida Water Story homepage.
5)Manatee Protection Efforts, part of The Florida Water Story homepage.
6)Number of Manatees Killed by Boats Reaches Record High, recent article (September 27, 2002) on the number of manatee deaths this year.
7)What are Harmful Algal Blooms (HABs)?, general overview of how algal blooms work.
8)Manatee Habitat & Water Quality Issues, part of The Florida Water Story homepage.
9)Descriptions of Manatee Death Categories, general overview of categories used to determine death statistics.
Alcohol: From the Cradle to the Grave Name: Heidi Adle Date: 2002-09-30 13:53:11 Link to this Comment: 3032 |
Heidi Adler-Michaelson
2002-09-29
Biology 103 Web Paper 1
Alcohol: from the Cradle to the Grave
"My baby was born drunk. I could smell the alcohol on his breath." (2). Maza Weya is an Assiniboine Indian. She grew up on the Fort Belknap reservation in Montana. She says that her "twin brother excelled at everything so [she] excelled at being an alcoholic" (2). Things worsened over the years, she had a child somewhere in between it all, and later started drinking perfume in the hope that it could help her quit. Her family intervened and took the child away from her and some time later she received the notification that her sister had adopted her son and she wasn't even there to give her consent. It was after this blow that she decided to go to rehab. Her son is 5 feet tall and weighs 95 pounds. The first time she talked to her son on the phone, she told him about her drinking problem during, before, and after the pregnancy. She says: "...he asked me why I didn't love him enough that I wouldn't drink while he was inside me...He asked if I had given him up because he wasn't perfect, because he was damaged" (2).
What could possibly lead a pregnant woman to drink during pregnancy? Well, of course there are those who have been addicted for years and find it impossible to quit for 9 months. It is true that most developmental problems in the fetus are generally linked to chronic and abusive drinking (1). But recent studies have shown that similar if not greater damage can be done to the unborn child whose mother binge drinks (2). Binge drinking is defined as having five or more drinks at one sitting (5). "The highest-risk groups of women in terms of drinking during pregnancy are women with master's degrees and higher and women who dropped out of high school" (4). The Centers for Disease Control found four times as many binge drinkers in 1995 as in 1991 (2).
Science has its own thoughts on this topic. During the first three months of pregnancy, the fetus is most vulnerable. The alcohol passes from the mother's bloodstream to the baby's (9). According to the March of Dimes "When a pregnant woman drinks, alcohol passes swiftly through the placenta to her fetus. In the unborn baby's immature body, alcohol is broken down much more slowly than in an adult's body. As a result, the alcohol level of the fetus's blood can be even higher and can remain elevated longer than in the mother's blood" (5).
Not too long ago, researchers discovered exactly how alcohol affects the development of a fetus's brain. According to this research, getting drunk just once during the final three months of pregnancy may easily be enough to cause brain damage. "This is the first time we've had an understanding of the mechanism by which alcohol can damage the fetal brain. It's a mechanism that involves interfering in the basic transmitter system in the brain, which literally drives the nerve cells to commit suicide" (10). It is during the third trimester of pregnancy that a period called synaptogenesis begins. During this period, which continues into childhood, the brain develops rapidly and is most sensitive to alcohol. The researchers have found that prenatal alcohol affects two brain chemicals, glutamate and GABA, which help the brain communicate with itself. The research is ongoing, concentrating on the link between damage to certain parts of the brain and problems in the adult (10).
How exactly are children who were forced to drink in their mother's womb different? There are many different definitions with only minor variations. As proposed by Sokol and Clarren in 1989, the criteria are 1) prenatal and/or postnatal growth retardation (weight and/or length below the 10th percentile); 2) central nervous system involvement, including neurological abnormalities, developmental delays, behavioral dysfunction, intellectual impairment, and skull or brain malformations; and 3) a characteristic face with a thin upper lip and an elongated, flattened midface and philtrum (the groove in the middle of the upper lip) (3). One of the most important things to know about children with this disability, FAS (Fetal Alcohol Syndrome), is that they don't understand the concept of "cause and effect" (i.e., if I touch the hot stove, I will burn myself) (8).
Another important facet is how the incidence of FAS varies among different ethnicities. According to the Centers for Disease Control, the incidence of FAS per 10,000 births for different ethnic groups was: Asians 0.3, Hispanics 0.8, whites 0.9, blacks 6.0, and Native Americans 29.9. The former FAS coordinator on the Fort Peck Indian Reservation says that this data is mostly due to the fact that Native Americans are more open and comfortable in speaking about alcohol problems (2). But it is widespread knowledge that alcoholism has been a problem among Native American tribes for decades.
Melissa Clark is a 22-year-old victim of fetal alcohol syndrome. Recently, when she was home alone in her house in Great Falls, a man rang the doorbell. Even though she did not know the man, she opened the door and let the stranger in. He walked straight to her bedroom and commenced to take off his clothes. He told her to do the same and she did. After raping her, he simply got dressed and walked out. Some hours later her foster mother came home and Melissa told her what had happened. Johnelle Howanach, her foster mother, called the police, who in turn wrote it off as consensual sex. Johnelle, however, argues that Melissa did not know that having sex with a stranger was wrong. She says: "People with fetal alcohol syndrome just don't have those boundaries. They are eager to please, very friendly...They don't know the difference between a friend and a stranger because they can't remember" (6).
In another case, a woman drank herself into a stupor in her ninth month of pregnancy. A Wisconsin appellate court ruled that she could not be charged with attempted murder of her fetus. In fact, the only state that criminalizes such behavior is South Carolina (7). This raises many questions among humanitarians. How much is too much and what should the consequences be, if any at all? Is the unborn baby considered a part of the woman or an individual living organism? Could it live without the mother? Will it ever be asked if it wants a sip? Is this even an issue of choice? After all, the Bible clearly states: "Behold, thou shalt conceive and bear a son: and now drink no wine or strong drinks" (Judges 13:7).
1) Westside Pregnancy Resource Center, "Prenatal Risk Assessment, Keeping Your Unborn Baby Healthy Through Prevention."
2) Great Falls Tribune, "My baby was born drunk."
3) National Institute on Alcohol Abuse and Alcoholism, "Fetal Alcohol Syndrome"
4) Tucsoncitizen, "Alcohol's toll on unborn worst of any drug."
5)National Institute of Health, "CERHR: Alcohol (5/15/02)."
6)Great Falls Tribune, "Fetal alcohol syndrome leaves its mark."
7) Family Watch Library, "A Setback For Fetal Rights In Wisconsin Alcohol Case."
8) Alcohol Related Birth Injury Resource Site, "Alcohol Related Birth Injury (FAS/FAE) Resource Site."
9) Evening Post , "Study looks at effects of alcohol on unborn."
10) Alcohol Related Birth Defects Resource Site, "Alcohol Related Birth Injury (FAS/FAE) Resource Site."
11) University of North Carolina, "An Introduction to the Problem of Alcohol-Related Birth Defects (ARBDs)"
Chemical Sunscreens - When Are We Safe? Name: Virginia C Date: 2002-09-30 13:57:48 Link to this Comment: 3033 |
How Bark is Protection for Trees Name: Jodie Ferg Date: 2002-09-30 14:52:21 Link to this Comment: 3034 |
The state tree of New Hampshire is the white birch. The bark of this tree is papery and white. As children, we would often peel off pieces and write to each other on them. The white color is supremely white, as white as this page. I was talking to my mother earlier this evening and she told me that the color of the bark is to reflect the winter sunlight; if the tree absorbs too much heat it will die. The white color of the bark prevents this from happening. The white birch tree is found in New Hampshire as well as other northern regions. It loses its leaves in the winter, thereby exposing its bark to the harsh sunlight of winter. The pale color of its bark allows the tree to survive.
One of the most famous types of tree in America is the redwood. These huge trees are found mostly in California and are artifacts of an unsettled American wilderness. To express how large these trees are: redwoods average eight feet in diameter and can be as wide as twenty feet. Some are as tall as 375 feet, which is taller than the Statue of Liberty. A typical redwood forest contains more biomass per square foot than any other area on earth, including the rain forests in South America. These trees are large. It would seem that they would be unharmed by anything in nature; could you imagine a beaver trying to chew through a twenty-foot-wide trunk? Still, there are things in nature that can harm these trees, namely fire. A fire can burn any tree. Redwoods are not invincible, but they have evolved to avoid being burnt to the ground by the periodic fires the area experiences. The branches of the redwood do not start until very high off the ground; the branches are thinner than the trunk and therefore are more easily devoured by flame. Because the bark is so thick, nearing a foot in some trees, the fire chars the wood instead of burning through it. The charred wood acts as a heat shield and prevents the entire tree from being destroyed. Redwood trees have been around since about the time of the dinosaurs. As we all know, not much has survived from that time. (1)
The Eucalyptus tree is to Australia as the redwood is to America. This tree is also found in California and other parts of the United States. The bark of the eucalyptus is very oily, so if it is caught in a fire the oil burns rather than the tree itself. The bark damaged by the fire sheds, so the tree does not catch on fire. There are also roots below ground that are very wet; their moisture protects them from the fire. There have been several reports of eucalyptus forests being completely burned, regenerating, being completely burned again, and regenerating again. To survive, the plant had to become as resistant as possible to fire. That is what it has done. By being able to regenerate after a destructive fire, the plant adapts to a harsh climate. Other examples of plants that use fire to their advantage are the Jack pines, which have serotinous cones. This means that in order for the cones to open and go to seed, they must be exposed to direct and intense heat, that is, fire. Without fire, the plant could not actually continue as a species. (2),(3)
Bark serves to protect a tree. Without bark, there would not be trees. Bark has its uses to humans as well as to trees: Native Americans used birch bark to build canoes and wigwams. The bark was also used to write on. There are oils in many different barks around the world that humans use. These same oils and other chemicals in the bark of trees and other plants can also serve to protect the plant. We are all familiar with poison ivy, one of the most irritating poisonous plants. There are also trees with poisonous bark, trees that we are somewhat familiar with. A few such trees with poisonous bark are the black locust, the yew tree, and the elderberry tree. There are many other plants that are completely poisonous, which would include the bark, but they seem to be smaller plants that do not necessarily have bark. A poison in the bark is a way to prevent being eaten by animals. (4)
We sometimes think of trees and plants as living things that are just there, passively accepting human interference and animal destruction. We often forget that trees have ways of being active organisms; they have ways of protecting themselves (obviously beyond the bark as well) that we rarely notice or think about. In discussions in class it has seemed that people have forgotten that trees are even living at all. It is important to recognize that such beings as trees do exist and are very necessary for human life. With all the protective devices trees have, they cannot withstand humans and their chain saws. We are hazardous to these plants. Perhaps if there were something akin to chain saws in nature, however, there would be plants whose bark was so tough and strong it could withstand such a cut. Despite the toughness of wood and bark, however, we have managed to create and build with the hardwood trees. With our tools we can do almost anything with wood (5). There is nothing stopping humans. Even trees will never have the chance to adapt to withstand us. They have developed to withstand so much else that we should step back, stop cutting so many of them down, and admire their ability to continue with life even under the harshest conditions.
Mammograms: A Go or No? Name: stephanie Date: 2002-09-30 14:58:15 Link to this Comment: 3035 |
"A mammogram is an x-ray picture of the breast. It can find breast cancer that is too small for you, your doctor, or nurse to feel. Studies show that if you are in your forties or older, having a mammogram every 1 to 2 years could save your life." Though this is currently the official government endorsed idea, the entire controversy over breast cancer preventatives is far much more complex. In what has become perhaps the most highly-debated topic in all of cancer research, the question on the validity of mammograms as a preventative for breast cancer has increasingly caught media attention in the past few years. Media attention notwithstanding, the statistic that suggests that one of every eight women in the U.S. will get breast cancer in their life makes the attention fall closer to home. Whether or not a mammogram can help more than hurt women in preventing cancer is an extremely touchy debate and deserves a considerable amount of research.
Essentially, a mammogram's main purpose is to x-ray the breasts in order to find what are called microcalcifications, tiny build-ups of calcium deposits, or tumors that may be undetectable by feel. The controversy lies not in whether the mammogram should be done at all, but in which age groups should be screened. The National Cancer Institute, which is endorsed by the government, released a statement in February of 2002:
"Women in their 40s should be screened every one to two years with mammography. Women aged 50 and older should be screened every one to two years. Women who are at higher than average risk of breast cancer should seek expert medical advice about whether they should begin screening before age 40 and the frequency of screening." (1)
Although many people now take this as a good rule of thumb, there are a number of justifiable reasons that those under 50 should not in fact use mammogram testing. The "risks" associated with the testing begin with the fact that the mammogram doesn't always detect breast cancer. Breast density, which refers to the amount of tissue in the breast that is not fatty, can obscure results. Women under the age of 50 most commonly have denser breasts, which leaves greater room for false positives or other abnormal test results. For women under 50 who do have cancer, a mammogram detects it in about 70 percent of all cases. For those over 50, about 85 percent of breast cancer cases are detected through mammograms. One source explained the risk as, "If a 40 year old woman is screened every year for 10 years, her chance of having an abnormal mammogram result is about 1 in 3". This chance is decreased for those aged 50-60, to about 1 in 4. And of those who have abnormal results, most do not end up having cancer. (2)
However, this leads into the second aspect of the controversy. When an abnormal result occurs, only a diagnostic test can determine whether or not the "cancer" is legitimate. These often painful, time-consuming, worrisome, and expensive procedures involve extracting fluids from the breast to be tested in labs. Many women find the wait for further results nerve-racking, especially because most end up being negative. And many studies have shown that these women "have more anxiety and worry about having breast cancer, even after being told they do not have cancer."(2) For those under the age of 50, there is about a .03% chance that an abnormal result will prove to be cancer. For those over 50, that figure increases to 14 percent. Still, because younger women have a lower chance of having cancer in the first place, there are fewer breast cancer deaths to prevent, even though the percentages may be higher.
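The figures above amount to a conditional-probability calculation: because cancer is rare among younger women, even a fairly accurate test produces far more false alarms than real detections. A minimal sketch of that arithmetic, in Python, is below; the prevalence, detection, and false-positive rates it uses are hypothetical placeholders chosen for illustration, not figures from the sources cited in this paper.
# Illustrative calculation: what fraction of abnormal mammogram results
# actually turn out to be cancer? All numbers below are hypothetical
# placeholders, not figures taken from the cited studies.
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    # Probability that an abnormal (positive) result reflects real cancer.
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)
# Assumed values for a woman in her 40s: prevalence ~0.3%, detection ~70%,
# false-positive rate ~8% per screen (all hypothetical).
ppv_under_50 = positive_predictive_value(0.003, 0.70, 0.08)
# Assumed values for a woman over 50: prevalence ~0.8%, detection ~85%,
# false-positive rate ~6% per screen (all hypothetical).
ppv_over_50 = positive_predictive_value(0.008, 0.85, 0.06)
print("Under 50: about {:.1%} of abnormal results are cancer".format(ppv_under_50))
print("Over 50: about {:.1%} of abnormal results are cancer".format(ppv_over_50))
Under these assumed numbers only a few percent of abnormal results in the younger group reflect real cancer, while the older group's figure is several times higher, which is the qualitative pattern the studies discussed here describe.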
Another concern about mammograms revolves around the claim that the radiation exposure to the breast tissue during the process may actually increase the chances of cancer. One source refuted the idea, saying that the exposure is comparable to a dental x-ray, with the possibility of causing the death of 1 in 10,000 women, under the condition of one mammogram per year for ten years. (1) In contrast to this finding, other sources claim that the chance is far greater, 1 in 2,700, accumulating with every exposure. (3) The details of this claim and study were not mentioned, however.
And the most obvious question associated with mammogram testing is its helpfulness in the first place. The idea that the mammogram acts only as a time-consuming, expensive insurance no more effective than breast self-examination comes to the forefront. A 1992 Canadian study of 25,000 women, with equal numbers of routine screeners and non-screeners, found that both groups had the same rate of breast cancer deaths. (3) The source also went on to claim that:
"Seven other randomized studies have also reported no statistically significant reduction in the death rates of women who underwent routine screening mammography." (3)
The Lancet, a highly esteemed medical journal, however, published results that clashed with those findings. In a study of 54,000 women over a 14-year medical history (half of whom were regular screeners while the other half relied only on medical check-ups), a 21% lower death rate from breast cancer was found in the group that used screening. (4) Said Dr. Freda Alexander of the University of Edinburgh in Scotland, who conducted the study, "The results for younger women suggest benefit from introduction of screening before 50 years of age." (4) And in a comparable study involving 100,000 women, the death rate was about 27% lower among those who were regular screeners. (4)
Nonetheless, organizations remain strongly divided on the topic. Those that officially recommend routine mammograms for women under 50 include the American Cancer Society, the American College of Obstetrics and Gynecology, the American College of Radiology, and the National Cancer Institute. (2) Those that do not include the American College of Physicians, the International Agency for Research on Cancer, the American Academy of Family Practice, and the Canadian Task Force on the Periodic Health Exam.
All in all, the data seem generally to support the idea of receiving the routine exams. While nearly every organization approves their use, mammograms become controversial only when age is factored into the picture. In accordance with the 1999 UK Trial of Early Detection of Breast Cancer, researchers said "The analysis of results by age at entry continues to suggest that screening of women aged 45-49 years is at least as effective as is that for women over 50 years." (4) The principle of defining the controversy by age seems, in retrospect, mildly irrelevant. With the support of data from the numerous valued sources, it is neither dangerous nor impractical to undergo mammograms for those under 50. But, more importantly, the decision should be made on a personal level. Obviously, for those at greater risk (due to genetics, history, or other factors), the decision seems an obvious one, in accordance with most of the studies overviewed here. As with any decision about a person's body, it should strictly remain a personal one, but it is always important to make use of responsible medical technologies and resources.
(2)Potential Benefits and Risks of Mammograms
Religion vs. Science Name: Laura Silv Date: 2002-09-30 16:20:24 Link to this Comment: 3037 |
I grew up with the impression that science and religion were incompatible. Maybe it was because I went to Catholic school, and my religion teacher thought I was trying to be sarcastic when I asked things like, "If the pope is infallible, why did he say that Galileo was wrong about the sun being the center of the universe?". When she answered, "Because the pope didn't know any better", I said, "Isn't he supposed to know better if he's the pope?", and the teacher told me to stop asking dumb questions and said we'd get into it later (which of course we never did). So out of fear of flunking fifth grade religion AND science, I adopted the policy that what was taught in Science class applied only to science, and ditto for Religion.
Nine years later, I realize that maybe my questions weren't so dumb. Some people spend their lives trying to bring out the similarities between religion and science, while others spend their lives trying to tear the two apart. For my paper, I wanted to explore possible reasons why these two opposing sides have never been able to find common ground enough to unite upon (fade in War: Why Can't We Be Friends?).
One reason religion is unwilling to familiarize itself with science is that science offers simple, valid, irrefutable and, above all, logical explanations for some of the "miracles" described in holy books. The Nile, for example, is known to turn red when it is overgrown with bacteria. Sorry, Moses. Carbon dating of fossils tells us that there was life on this planet long before the estimated time of the creation of Adam and Eve. Sorry, God. You can see where the religious leaders might get a little worried that their congregations would begin to fall away from the belief that an invisible man in the sky makes miracles happen, if too many explanations that appeal to their more rational way of thinking were to come up.
There are those, of course, who would argue that the Torah and the Bible are not meant to be taken literally but figuratively; that Adam and Eve are representative of all men and women, and that the story of the Creation in seven "days" is meant to use a more figurative term for a longer amount of time (substitute the word "eon" for "day" in the Creation story and you'll get what I mean). That's nice and all, but it raises the question: where does the line between figurative and literal translations end? Take, for example, the story of Esther, which, as opposed to some other stories in the Bible, is very specific when it comes to times, dates, names, and places; not only that, but the story is historically supported as it is written. Should we apply the figurative translation to something which is so obviously meant literally? Of course not. So when does the figurative translation end and the literal begin? This is one question to which scientists and theologians still have not been able to come up with a satisfactory answer.
Another difference which I have found between science and religion is the definition of "truth". To the scientist, who is more skeptical, truth is ever-changing - the more one sees of the world, the more observations one makes, the closer one comes to the truth. In laymen's terms, the truth is out there. It is the goal which may not ever be attained, but that certainly won't stop the scientist from coming as close as she can. The scientist does not define "truth" by what it is, but rather by taking away the attributes which truth is not. In this manner, the definition of truth is always changing and never finalized. The theologian, on the other hand, defines truth as that which is printed in the Holy Texts, that which comes from the mouth of God Himself (although personally I believe that if there IS a god, she would have to be a woman, but that's another paper topic). Truth is absolute, definitive, unchanging and final. You can see the truth, touch it, feel it.
Although there are undeniably many differences between the issues encompassed by science and religion, few people ever take the time to realize how similar in nature the two really are. Think about it - both science and religion have their own set of books from whence all their information is drawn, instructors (if the professor will forgive me for comparing him to a pastor), philosophies of life and death, instructions and jargon. It's actually a little creepy to think of how similar these two spheres really are, for science is a religion in and of itself, and religion is a type of science. Both are learned practices; no one is born with an instinctive knowledge of the divine just as no one is born with an automatic knowledge of biochemistry. Perhaps the reason why these two fields can never seem to quite get along is because they are too similar in their nature while being dissimilar in their specific outlooks.
Science and religion are related to each other in ways both strange and familiar - for example, we can imagine that there are people raised in religious backgrounds who find science to be more practical and logical than the Invisible Man in the Sky, but what most people don't realize is that a majority of scientists are religious, not atheists. My former employer was a chemist, and I remember he said once that he and most of the people he worked with found that their faith in religion is strengthened by their work rather than diminished by it, for the detail and intricate design which is found in science and nature led them to believe that there has to be some divine power which holds the world together in the delicate balance in which it exists (Dr. Don Jones, San Bernardino, California).
Although this paper is only a small portion of the massive study which ensues on the comparison between religion and science, I hope that I have put a new spin on the comparison, for I would hate to have written anything too hackneyed and be considered unoriginal. I hope perhaps to continue the comparison in a later paper.
Tay-Sachs Disease: The Absence of Hope Name: Lauren Fri Date: 2002-09-30 17:04:52 Link to this Comment: 3040 |
When a couple has a baby, they pray that they will have an easy childbirth and a healthy newborn. However, an easy delivery and a healthy-seeming baby does not guarantee a problem-free childhood. Children born with Tay-Sachs Disease (TSD), a fatal genetic disorder, do not show symptoms until they are six months old, but almost never survive past the age of five.
Tay-Sachs Disease was named for Warren Tay and Bernard Sachs, two doctors working independently. In 1881, Dr. Tay, an ophthalmologist, described a patient with a cherry red spot on the back of his eye; the presence of this red spot has become a clear signal for the diagnosis of TSD. Several years later, Dr. Sachs, a New York neurologist, described the cellular changes caused by TSD, observed the hereditary nature of the disease, and noted its predominance among Jews of Eastern European descent (1).
A rarer form of the disease known as Late-Onset Tay-Sachs exists, but this paper will focus on classic infantile TSD and explore its scientific and social implications.
Definition and symptoms.
TSD is defined as a genetic disorder that causes the progressive destruction of the central nervous system (2). TSD occurs in babies with the Tay-Sachs gene on chromosome 15 (1). All affected babies exhibit a red spot in the back of their eyes. TSD is caused by the absence of hexosaminidase A (Hex-A), an enzyme whose presence is necessary for the breaking down of acidic fatty materials known as gangliosides. In an unaffected child, gangliosides are made and quickly biodegraded as the brain develops. When a child is afflicted with TSD, ganglioside GM2 accumulates in the brain, distending cerebral nerve cells and forcing physical and mental deterioration (3).
Once the symptoms begin, they grow progressively worse. First, normal development slows, stops, and eventually reverses. Often the baby loses newly-acquired skills such as the ability to crawl, roll over, and interact with its environment. Second, the baby loses peripheral vision and exhibits an "abnormal startle response." Third, general mental function becomes clearly debilitated, and the baby experiences recurrent seizures. Often, children lose coordination, ability to swallow, and respiratory ease. Ultimately, the child becomes blind, deaf, paralyzed, mentally retarded, and completely unable to interact with or respond to his/her environment (1).
Risk factor.
Tay-Sachs Disease is considered extremely rare among the general population. Occurrences of TSD are not limited to, but definitely concentrated in, certain ethnic sub-populations. TSD is often considered a "Jewish disease," but French-Canadians who live near the St. Lawrence River and the Cajun population of Louisiana are also at high-risk. Still, most research on the groups affected most by TSD focuses on American Ashkenazi Jews. The frequency of TSD within the Jewish population is attributed to the "founder effect" in which "genetic disorders and mutations within a closely knit minority group are perpetuated over generations" (4).
The statistics on the frequency of TSD among Jews are startling. TSD potentially affects one in every 2,500 Ashkenazi Jewish newborns (5). Ashkenazi Jews are one hundred times more likely to have an affected child. Only about one in three hundred people in the general population (non-Jews and Sephardic Jews) are carriers of the TSD gene, compared to approximately one in thirty Ashkenazi Jews (6). Most people who are carriers are completely unaware of it, since they are perfectly healthy. The gene for Tay-Sachs can be passed down through many generations before anyone in the family line gives birth to a TSD-afflicted baby. If both parents are carriers, the baby has a fifty percent chance of being a carrier and only a twenty-five percent chance of being born with TSD. There is a twenty-five percent chance that the child of two carriers will be completely unaffected. If only one parent is a carrier, their child has a fifty percent chance of being a carrier and a fifty percent chance of being completely unaffected. A baby can only be born with TSD if both parents are carriers (7). Due to its recessive hereditary pattern, TSD is classified as autosomal recessive (8).
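The inheritance odds in the preceding paragraph follow from a standard Mendelian cross of an autosomal recessive gene, which can be enumerated directly. Below is a minimal Python sketch; the allele labels ("T" for the normal allele, "t" for the Tay-Sachs allele) are illustrative notation, not terms taken from the sources cited here.
# Enumerate the four equally likely allele combinations a child can inherit.
from itertools import product
from collections import Counter
def offspring_distribution(parent1, parent2):
    # Each parent contributes one of its two alleles with equal probability.
    genotypes = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
    total = sum(genotypes.values())
    return {g: count / total for g, count in genotypes.items()}
# Two carrier parents (Tt x Tt): 25% TT (unaffected), 50% Tt (carrier), 25% tt (affected).
print(offspring_distribution("Tt", "Tt"))
# One carrier and one non-carrier (Tt x TT): 50% TT, 50% Tt, and no affected children.
print(offspring_distribution("Tt", "TT"))
Running the cross for two carriers reproduces the 25/50/25 split described above, and the carrier-by-non-carrier cross shows why a baby can only be born with TSD when both parents carry the gene.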
Prevention and detection.
While there is no cure or proven treatment for TSD, it is a "preventable tragedy" (9). First, it is now recommended that couples within high-risk populations get tested to see whether or not they are carriers. This involves only a simple blood test whose results can be interpreted using either an enzyme assay (which checks the amount of Hex-A in the bloodstream) or DNA analysis (which checks for one of fifty known mutations in the Hex-A gene) (9). Once carrier status is determined, a couple may decide to pursue parenthood at their own discretion, bearing in mind that even when both parents are carriers, their child still has a seventy-five percent chance of being perfectly healthy.
Once the mother is pregnant, she has two options for prenatal diagnosis. The first, amniocentesis, involves removing a small amount of amniotic fluid during the sixteenth week of pregnancy (10). If there is an absence of Hex-A in the fluid, the fetus is affected by TSD, and the couple can choose to have a therapeutic abortion. Another, newer option is chorionic villus sampling (CVS), which is performed during the tenth week of pregnancy (11). This procedure involves removing a small amount of placenta, and test results are returned more quickly than with amniocentesis. Also, should the couple choose to have an abortion, CVS allows them more privacy and safer pregnancy termination (9). Genetic counseling is widely recommended to all couples who are members of high-risk populations, especially those who are determined carriers.
Social and sociological implications.
It is important to analyze the effects of Tay-Sachs Disease from a broader cultural perspective. Because TSD occurs mainly in clearly-defined populations, and also because of the profound moral issues raised by genetic screening, screening-based abortions, and alleged eugenics, a purely scientific study of the disease would be lacking.
A group called Dor Yeshorim (Hebrew for "the generation of the righteous") provides an illuminating example of how the effects of TSD raise moral, social, and even theological issues. Dor Yeshorim was founded in 1983 by groups of Orthodox Jews (led by Rabbi Joseph Eckstein, father of four children born with TSD) in New York and Israel. Rabbi Eckstein intended to do everything within his power to eliminate Tay-Sachs disease from the Jewish population. Through programs implemented by Dor Yeshorim, Orthodox Jewish high schoolers are tested to determine whether or not they are carriers. Rather than receiving the results directly (in an effort to curb stigmatization), they receive a six-digit identification number. Then, when two youths are considering marriage or even dating, they are encouraged to call a hotline. They enter their six-digit numbers, and the service deems them "compatible" or "incompatible" (if both are carriers). Eight thousand people were tested in 1993, and eighty-seven couples who were previously considering marriage decided against it based on their genetic incompatibility (12). A few years later, Dor Yeshorim expanded its testing services to Yeshiva University, sparking controversy. Originally, Dor Yeshorim was aimed at the Chasidic population, who still arrange marriages; however, arranged marriages are extremely rare at Yeshiva University, and thus the appropriateness of the testing was questioned there (13). Whether or not Dor Yeshorim is morally sound, its tactics have been effective; in 1995, the group released a statement which declared: "Today, with continual testing,
new cases of Tay-Sachs have been virtually eliminated from our community" (14).
Conclusion.
While Dor Yeshorim's position is a radical one, drastic steps must be taken to put an end to the devastation suffered by families who must cope with the hopeless misery of Tay-Sachs. There is still no cure, and no effective method of treatment. Research is being conducted that utilizes gene therapy to try to repair the mutated Tay-Sachs gene, but attempts have been largely unsuccessful (15). For now, carrier screening and prenatal testing are encouraged for all couples who may be at risk. TSD can also occur in babies who are not born to couples in high-risk populations, so testing and education must be expanded to put an end to Tay-Sachs disease for good.
Chemical Castration: The Benefits and Disadvantage Name: Katherine Date: 2002-09-30 17:05:50 Link to this Comment: 3041 |
Child molestation is a serious problem in the United States. The legal system is lenient with pedophiles, punishing them with insufficient prison sentences that are further abbreviated by the option of parole. Some child molesters are released back into society after serving as little as one fourth of their prison time (1). Recidivism is extremely high among child molesters; 75% are convicted more than once for sexually abusing young people (6). Pedophiles commit sexual assault for a variety of reasons. Some rape children because of similar instances of abuse in their own childhoods (1). Some view the act of molestation as a way to gain power over another individual (1). Some pedophiles act purely on sexual desires. No matter what causes these heinous criminals to molest children, their crimes are inexcusable. Unfortunately, using prison as the punishment for child molestation creates only a Band-Aid solution to the issue of sexual assault, and other resolutions need to be investigated.
Alternative options for the punishment of male pedophiles are being explored in the status quo. Scientists have observed the link between testosterone and aggression and concluded that high levels of testosterone correspond with increased violent and aggressive behavior in men (5). "It is the reason that stallions are high strung and impossible to train, the reason male dogs become vicious and start to bite people. It's why boys take chances and chase girls, why they drive too fast and deliberately start fights. In violent criminals, these tendencies are exaggerated and carried to extremes" (8). In an effort to stop male pedophiles, male child molesters in some states have the option of being chemically castrated. "Chemical castration is a term used to describe treatment with a drug called Depo-Provera that, when given to men, acts on the brain to inhibit hormones that stimulate the testicles to produce testosterone" (2). Depo-Provera is a common birth control drug containing a synthetic version of the female hormone progesterone. Advocates of chemical castration hope that injections of Depo-Provera will prevent men from molesting children.
However, some experts argue that Depo-Provera is ineffective and will not prevent molestation. Forced castration may have the adverse effect of angering a criminal, increasing his violent tendencies and leading to additional sexual abuse (2). Additionally, the effects of Depo-Provera are reversible. Therefore, unless injections are mandatory and monitored, pedophiles will not be "cured" by the drug therapy. The child molester will have renewed sexual fantasies and high levels of testosterone if the injections are discontinued (7). Joseph Frank Smith, a convicted child molester, became an advocate for chemical castration after undergoing the therapy in the 1980s. Smith stopped using the injections in 1989. In 1999, he was convicted of molesting a five-year-old girl and immediately returned to prison (3). Depo-Provera has also caused side effects in some men, "including depression, fatigue, diabetes, [and] blood clots" (2). Chemical castration may cause some detrimental effects in child molesters.
Regardless, Depo-Provera has been shown to inhibit the ability of pedophiles to assault children. The progesterone in Depo-Provera counteracts the biological tendencies that lead men to rape children (4). By lowering testosterone, Depo-Provera reduces sex drive (6). Males can still have sexual intercourse (7) but do not want to. Depo-Provera also decreases aggressive tendencies by reducing testosterone. "[T]he castrated criminal would be more docile and have a better opportunity to be rehabilitated, educated, and to become a worthwhile citizen" (1). Castration removes the biological and chemical tendencies that are intrinsically linked to the desire to rape in males.
Depo-Provera also reduces recidivism rates. When used as a mandatory condition of parole (6), chemical castration decreases the occurrence of repeat offenses from 75% (6) to 2% (1). Prison is less desirable because it serves no rehabilitative purpose for sexual offenders. Pedophiles who spend time festering in a prison cell are given extensive downtime to concoct new sordid sexual fantasies involving children. These horrific visions are translated into terrifying realities once the criminal comes back into contact with children following his inevitable release from prison (1). Prison simply produces sneakier criminals. Pedophiles do not want to be incarcerated again so they think of new ways to rape children that will avoid detection and future detention (6). Prison increases aggressive tendencies in male pedophiles while chemical castration addresses the root causes of sexual assault and decreases further sexual deviance.
Although chemical castration is not the perfect solution to inhibit child molestation, it discourages sexual assault better than incarceration. Injections of Depo-Provera decrease the aggressive tendencies that lead to rape in males. Castration also discourages sexual fantasies and eradicates sexual obsessions. Pedophiles are reduced to apathetic pacifists. Regulated chemical castration should be encouraged as an alternative to prison for male child molesters in order to stop recidivism and decrease instances of sexual assault.
1)Castration Works, an article by Susan Feinstein for 212.net regarding the implications of chemical castration on pedophiles.
2)Chemical Castration Law May Backfire, Experts Warn, an article off the ACLU Newswire from September 18, 1996.
3)Convict Who Had Chemical Castration Gets 40 Years For New Sexual Attack, the Roswell Daily Record Online, February 4, 1999.
4)Is Chemical Castration an Acceptable Punishment For Male Sex Offenders, by LaLaurine Hayes for the online database "Sex Crimes, Punishment and Therapy" constructed by students in a Psychology course at California State University Northridge.
5)High Testosterone Levels Linked to Crimes of Sex, Violence, Volume 1 No. 3, 1995, pg. 2.
6)Repeat Sexual Offenders Must Face Chemical Castration, an article prepared by Crystal Hutchinson, a student at Monroe Community College in New York State.
7)Chemical Castration: A Strange Cure for Rape, from the Kudzu Monthly, an e-zine popular among the Southern States.
8)Dr. Robert Girard, in a scientific study on factors that contribute to criminal conduct, in an article by Susan Feinstein chronicling the effects of chemical castration as posted on 212.net.
Bipolar Disorder and the Connection to Dyslexia Name: Meredith S Date: 2002-09-30 17:17:07 Link to this Comment: 3042 |
Dyslexia is affecting more and more children every year, and although most educators would agree that dyslexics are "not people who see backwards" (1), there is still no solid theory on why dyslexics cannot differentiate between the sounds "or" and "ro." At the same time, bipolar disorder is becoming better understood by scientists as more and more people, especially children, are diagnosed with it every year. These two seemingly different disorders, both lacking a cure, are often found within the same children, and yet no substantial research has been conducted, nor have educators been taught how to teach someone who is both dyslexic and bipolar.
The most universal diagnosis of bipolar disorder consists of massive mood swings, ranging from mania to severe depression, sometimes all within a few hours. Manic episodes often consist of long periods in which sufferers may feel elevated, think that they are invincible, have trouble focusing on one topic, and need little to no sleep. Depressive episodes can also be long, but instead of riling sufferers, they bring a state of hopelessness, increased apathy, decreased appetite, and "a drop in grades, or inability to concentrate" (2).
Bipolar disorder, especially when found in children and adolescents, is not just a phase that they can hope to outgrow. It is a biological phenomenon in which the brain overworks in some areas to compensate for others not working hard enough. Neurotransmitters are chemicals, such as serotonin and dopamine (also known as monoamines), that carry signals between brain cells (2). In bipolar patients, "40 percent have a loss of the serotonin 1a receptor, which may contribute to the atrophy of neurons, and may set off depression" (3). This loss of neurons affects the other parts of the brain that control understandings of rewards, possible dangers, and emotions. With fewer messenger cells, these areas of the brain are not as connected to each other, meaning that sufferers are not able to regulate themselves as well as a person without bipolar disorder because their brains are "wired differently" (4).
Bipolar disorder used to be thought of as an adults-only disorder. While only 1-2 percent of the adult population suffers from this disorder, it is now thought that up to one third, or 3.4 million, children may be exhibiting symptoms of bipolar disorder (3). Bipolar disorder by itself would seem bad enough, but "it is suspected that a significant number of children diagnosed in the United States with attention-deficit disorder with hyperactivity (ADHD) have early onset bipolar disorder instead of, or along with, ADHD" (2).
ADHD is, like most disorders, not completely understood, and although the symptoms are now recognized, the causes are still in dispute. Sufferers are often hyperactive, unruly, and unable to concentrate. Medications such as Ritalin are often used to combat the hyperactive symptoms and help sufferers return to a normal life. While the symptoms are often treated, their cause is not understood nearly as well. ADHD is thought to be the result of imbalances within and outside of the body. Congenital and biochemical factors are thought to be the main causes, although more and more research suggests that stress can also be a significant contributor to the outbreak of symptoms. Of the children and adults suffering from ADHD, approximately 50 percent also suffer from other learning disorders, the main one being dyslexia (5).
At one time, a dyslexic was dismissed as lazy or even dumb. Dyslexia affects one in 20 people (6) and is now known as one of the most common learning disorders. Thought to be congenital, it leaves dyslexics with problems "translating language to thought or thought to language" (1), meaning that they often have trouble reading or remembering how to spell a word, no matter how often they see it. One theory as to why dyslexics cannot make the connection between written and spoken language is that they have smaller magnocellular pathways, routes on which magnocells, or nerve cells found between the retina and the place where the right and left images are combined to form one image, carry the image to the brain. Since these pathways are smaller, some of the information may be "lost" (7).
But dyslexia is not purely a visual problem, and the answer to the riddle has yet to be found. One thing researchers do know is that stress, anxiety, and other factors can increase the impairment of this disorder (5). Men, women, and children cannot have their dyslexia cured, but they can learn to live with it. Unfortunately, dyslexia does not display many physical symptoms, so diagnosing the disorder can be difficult unless one knows quite a bit about it. As a result, many children are not diagnosed until later, and still have to deal with the stresses of being thought of as an underachiever or as having a lower IQ by fellow students and even teachers. When dyslexia is combined with ADHD or bipolar disorder, the levels of stress and anxiety on an individual skyrocket, making all of the conditions worse and forcing the student to have an even harder time with school and everyday life.
The possible link between dyslexia and bipolar disorder has not been investigated nearly as extensively as the two disorders in isolation. But a link may exist. If a majority of bipolar disorder cases also involve ADHD, and about half of all ADHD cases involve dyslexia or another learning disability, then a direct link between bipolar disorder and dyslexia may exist. At the moment, though, there is no ongoing research directly connecting the two.
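The chained reasoning above can be made concrete with a rough back-of-the-envelope calculation. The sketch below is only illustrative: the 0.6 and 0.5 figures stand in for "a majority" and "about half" and are not taken from the cited sources, and it assumes the ADHD overlap is the only pathway connecting the two disorders.

# A minimal sketch of the chained-prevalence reasoning above. The rates are
# illustrative placeholders, not figures from the cited studies.

def implied_cooccurrence(p_adhd_given_bipolar, p_dyslexia_given_adhd):
    """Estimate P(dyslexia | bipolar) under the strong assumption that the
    bipolar -> ADHD -> dyslexia chain is the only pathway linking the two."""
    return p_adhd_given_bipolar * p_dyslexia_given_adhd

# Hypothetical inputs: "a majority" of bipolar cases involve ADHD (~0.6),
# "about half" of ADHD cases involve dyslexia or another learning disability (~0.5).
estimate = implied_cooccurrence(0.6, 0.5)
print(f"Implied share of bipolar cases with dyslexia: {estimate:.0%}")  # ~30%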
Teachers are responsible for educating all children, including those with learning disabilities. A multisensory teaching approach, known as the Orton-Gillingham approach (8), has been shown to help dyslexics learn and master previously impossible tasks. But the Orton-Gillingham method can only help those with dyslexia alone and does not consider other disorders that may be at play within the same student. Although the multisensory approach can be easily adapted for each dyslexic, seeing as each case of dyslexia varies in severity, it has yet to be modified to compensate for ADHD or bipolar disorder. Until the day when scientists and teachers formulate a connection and a method of teaching that deals directly with a combination of learning disorders, sufferers are going to continue to struggle, in turn making their disorders even worse and giving them more power over their lives.
1) www.childpsychology.com - a database for articles and links for psychological disorders
3) Kluger, Jeffrey, "Young and Bipolar," Time Magazine 160 (August 19, 2002): 38-51
6) www.news.bbc.co.uk/1/hi/health/343139
Sum yourself up in a single cell Name: Sarah Tan Date: 2002-09-30 17:36:19 Link to this Comment: 3043 |
Somehow during high school, a friend of mine was watching me pack a travel bag, and based on the stream-of-consciousness thoughts I voiced and my overall level of high stress, she likened me to a paramecium. The comparison was based mainly on our hazy recollections from freshman year biology, which didn't include much about paramecia. We imagined paramecia swimming around at random, bumping into things, getting disoriented, and waving their cilia in a frenzy. For some time now, I've wanted to find out how accurate this image was, and therefore how accurate the comparison was, but I never had a pressing reason to look these things up. For this purpose, I was more interested in the behaviors of paramecia than in their physical structure, though it turns out that the latter cannot be fully disregarded in trying to understand the former.
One of the first things I found that had never occurred to me about paramecia is that they are three-dimensional. All the pictures I'd seen before of paramecia presented a view that looked two-dimensional, so that if they rotated, they'd just become a line of cilia. I realized that this couldn't reasonably be the case, because it would be impossible for them to always be right side up under the microscope. They travel by rotating lengthwise on an invisible axis and moving forward or backward depending on the way the cilia beat (4). Although it seems like an unproductive use of energy, this method in fact allows paramecia to move in straight lines despite being asymmetrical creatures. Another interesting characteristic of paramecium movement that came up in more than one source is their reaction to encountering a block in their path. When that happens, paramecia go backwards a bit at an angle, turn slightly, and try again. They repeat this trial-and-error process until it is successful (2). The paramecium's complexity has been discussed in terms of this navigation around obstructions, as well as its recognition of dangerous situations and apparent learning from past experience, all done by a single cell with no nervous system, neurons, or synapses (7).
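As a rough illustration of that trial-and-error avoiding reaction, here is a toy model in Python. It is my own sketch, not anything from the cited sources: the turn angle and the hypothetical obstacle are arbitrary, and the backward step is only noted in a comment rather than modeled.

import random

# Toy model of the avoiding reaction described above: when the path is blocked,
# the cell backs up slightly (not modeled here), turns a little, and tries again
# until it finds a clear heading. Step sizes are arbitrary, not measured values.

def avoid_obstacle(is_blocked, heading=0.0, turn_step=15.0, max_tries=100):
    """is_blocked(heading) -> True if the path at that heading is obstructed."""
    for attempt in range(1, max_tries + 1):
        if not is_blocked(heading):
            return heading, attempt            # clear path found
        heading = (heading + turn_step) % 360  # turn slightly, then retry
    return heading, max_tries

# Hypothetical obstacle: everything between 0 and 90 degrees is blocked.
blocked = lambda h: 0 <= h < 90
final_heading, tries = avoid_obstacle(blocked, heading=random.uniform(0, 89))
print(f"Escaped toward {final_heading:.0f} degrees after {tries} tries")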
The common notion that paramecia or other single-celled organisms are simple or more primitive than multi-cellular organisms is a misconception. Paramecia, as ciliates, are larger than the average unicellular eukaryote, and they are arguably the most complex unicellular organisms. Because single-celled organisms must perform all functions within a single cell, without the advantage that multi-cellular organisms have of specialized cells for specific functions, their structure may be much more complex than the cells of larger organisms (1). Such functions include movement, water balance, food capture, sensitivity to the environment, and possibly self-defense. Defense, however, is one function about which there seems to be some uncertainty. When they are disturbed, paramecia rapidly shoot out trichocysts, short thread-like structures (5). While most sources seem to agree that these are used for defense, at least one argues that the trichocysts of paramecia are seldom successful in warding off predators (3). Another source seems to describe trichocysts but calls them extrusomes (1), and the difference in what should be standard terminology is confusing.
Although paramecia are almost always described as cigar or slipper-shaped, they are not stuck in that form. They are able to change shape to squeeze through narrow passages and are therefore more versatile than I had previously realized (10). The reason they maintain their usual oval form, though, is their exterior membrane, called a pellicle, which is elastic enough for small changes but stiff enough to protect them (4). Understanding just how small they are was also enlightening, particularly when a video said that several hundred thousand paramecia could live in a single dewdrop (10).
So what about the comparison that this started out with? Up until this point, the paramecium seems rather sedate, which is not what I had hoped for. Nevertheless, I continued searching and turned up some video clips of paramecia in action under the microscope. Here I could watch the speed at which they swam around, and I finally found evidence for our initial assumptions. Given the 10,000-14,000 cilia on each cell's surface (3), watching the speed at which they can zoom around is just fun (6), but zooming in specifically on the cilia is fascinating (9). I think that in real time, the paramecium does fit the qualities that my friend meant when she described me as one, because it has to do independently, with one cell, what most other organisms do with many more. Additionally, the new information I learned in researching paramecia for this paper also suits my personality surprisingly well: dubious defense mechanisms, insisting on trying to run through brick walls, and using roundabout ways for everyday tasks that work well for them but perhaps not for many other similar organisms.
Even if this is not a topic of great interest to the general population, or even to anyone besides my friend and me who share this joke, I think that researching paramecia has been useful in helping to get it "less wrong." Neither of us had biology in mind when we started this, but double-checking the biological facts ensures that we are using the metaphor correctly, and we can now better explain the similarities than we could before. One problem that this method of research shows, however, is the ease with which information and data can be adopted and manipulated by someone who has a specific conclusion to prove. There are all kinds of ways to spin the supposed facts so that they show a predetermined result, and if the listener is not aware of the way in which the research was obtained, it is all too easy to be misled.
(1) Introduction to the ciliata
(2) Paramecium - www.101science.com
(5) Protist Images: Paramecium Caudatum
(6) Molecular Expressions Digital Media Gallery: Pond Life - Paramecium (Protozoa)
(7) How does a paramecium move and process information?
Raising Children Vegan Name: Chelsea W. Date: 2002-09-30 17:40:55 Link to this Comment: 3044 |
In recent years, the prevalence of vegetarian and even vegan diets has increased substantially, with a 1997 Roper Poll estimating the number of vegans in the United States to be between one-half million and two million (though it is worth noting that it is difficult to gather accurate statistics on the subject) (7). And many people are choosing to raise their children on these diets - I'm one of those children who was raised vegetarian.
"Vegetarian" is a broad term referring to diets without meat. Often, it refers to "lacto-ovo vegetarians": people who do not eat meat, but do eat both eggs and dairy. "Vegan" refers to those who do not use a wider range of animal products, generally considered to include meat, eggs, dairy products and sometimes honey ((5) ) (though some people, myself included, may adopt the label "vegan" and still eat honey). It is also sometimes used to refer more generally to a lifestyle aimed to reduce animal suffering, including, for example, not purchasing leather products.((5))
People make the choice to become vegan for a variety of reasons, commonly involving, though often not limited to, a concern for animal rights (5). Other reasons may relate to health, spirituality, ecology - or any number of other issues (5). And similar rationales would likely apply for people who wish to raise their children on a similar diet. Part of my interest in the subject stems from the likelihood that I will eventually decide to raise my own children vegan.
Even vegan advocacy groups, such as Vegan Outreach, are generally quick to acknowledge that merely removing certain foods from one's diet, without otherwise seeking to balance it, is unlikely to be healthy (1). Similarly, the health value of removing wheat, for example, from one's diet would be questionable if it were not replaced with other grains as a staple of the diet (there are, in fact, plenty of other, less popular grains teeming with nutrition). But there is fortunately a plethora of information available on how best to meet nutritional needs on a vegan diet. Here I'll specifically address some of the issues pertaining to the needs of young vegans (as it is interesting and worth noting that young vegans, just like all children, have nutritional needs related to, but distinct from, those of adults). For more general information on vegan nutrition or veganism in general, Vegan Outreach (1) is a good place to start.
Nutrition
Veganism is recognized to have certain nutritional advantages, though with other areas of potential deficiency to watch. The American Dietetic Association (ADA) gives as its position on vegetarianism at large "that appropriately planned vegetarian diets are healthful, are nutritionally adequate, and provide health benefits in the prevention and treatment of certain diseases" (8). And child-rearing "expert" Dr. Spock even ultimately endorsed vegan diets for children (3).
Breast-feeding is recommended for vegan infants as for all infants. But the mother should be careful to maintain sufficient nutrients in her own diet and thus in her breast milk (6). Vitamin B-12 and iron are noted as nutrients to particularly watch on this matter, and moderate exposure to sunlight should be allowed for in order to maintain vitamin D levels (6).
B-12 and iron continue to be nutrients to watch throughout development. It is important to ensure an adequate supply of vitamin B-12 in children's diets (even more so than in adults' diets, since adults who were raised eating meat may have stored B-12 in their bodies), and this is often done via supplements (2). Additionally, sufficient sources of protein must be present in the diet. And young children, more notably than adults or teenagers, should have substantial fat intake (calorie intake should not be restricted for children before at least age 2) because of the swift growth normal at that period of life (6). Calcium is also an important nutrient, especially during the teen years. Although it is often associated with dairy, calcium can be obtained from several other sources, including leafy green vegetables (such as kale) and fortified soy milk (6).
Although I did not encounter any studies about advantages of the vegan diet specific to children, there is a great deal of evidence of the overall health advantages of diets low in animal products. "Vegetarian diets," for example, "are associated with a reduced risk for obesity, coronary artery disease, hypertension, diabetes mellitus, colorectal cancer, lung cancer, and kidney disease" (1).
On a Practical Note
The potential "inconvenience" of a vegan diet is often brought up as a stumbling block. Yet, as veganism becomes more commonplace, so does the availability of vegan food. Given the typical contents of my cabinets at home, in fact, I would likely find it extraordinarily inconvenient to attempt to plan a week's worth of meals containing meat (my poor cooking skills left aside), and most restaurants (fast food typically excluded) are willing to alter menu items to suit vegans if there aren't options all ready on the menu. But, certainly, being vegan in the context of an outside world which is not, does present certain frustrations, especially in the context of travel to places where vegetarianism and veganism are less popular.
On Paying Attention to Nutrition
The above checklist of nutritional "do's and don'ts" raises the larger question of just how attentive parents might be expected to be (or endeavor to be) to the nutrition of their children - whether vegan or not. On this issue, Reed Mangels, Ph.D., R.D. makes an excellent point. "Of course it takes time and thought to feed vegan children," she writes. "Shouldn't feeding of any child require time and thought?" (6)
1)Vegan Outreach, a portion of the website of an organization called Vegan Outreach
2)an article on vegan children in the June 2001 issue of the Journal of the American Dietetic Association
3)an article on Dr. Spock's endorsement of a vegan diet for children in the April 22, 2001 edition of the Knight Ridder/Tribune Business News
4)Considerations in planning vegan diets: Children, an article in the June 2001 issue of the Journal of the American Dietetic Association
6) Wasserman, Debra. Simply Vegan. Baltimore: Vegetarian Resource Group, 1999. (Note: The nutrition section of this book is written by Reed Mangels, Ph.D., R.D. The most pertinent section of this can be found online at http://www.vrg.org/nutshell/kids.htm)
7) "Why Vegan?" Pittsburgh: Vegan Outreach, 1999. (Note: Substantial portions of this pamphlet are available online at http://www.veganoutreach.org/whyvegan/)
8)the American Dietetic Association (ADA) stating its position on vegetarian diets.
Sustainability in Action Name: Carrie Gri Date: 2002-09-30 19:02:50 Link to this Comment: 3047 |
Carrie Griffin
Biology 103
Prof. Paul Grobstein
September 30th, 2002
Sustainability in Action
Humanity could never have come this far had it not been for the bevy of natural resources available on this planet and our own ingenuity regarding their use. We've found shelter from forests, energy from oil, and food from sources that might strike a diner as peculiar with a second glance at the unprepared fish or spiny pineapple. And, as history has rolled on, we have become rather taken with all of our innovations, from our multi-colored vinyl siding to our canned tomatoes, and rather than focusing on the fulfillment of human needs, we're now seeking to satisfy human wants, a Sisyphean task.
Consequently, humans have polluted the air, depleted the soil's nutrients, rendered water unpotable, chopped the tropical rain forest, even heated the globe up a bit, all in pursuit of economic expediency (1).
Fortunately, during the 1960s, a burgeoning social conscience struck America and began to transform the conventional wisdom regarding a variety of issues, including the environment. As the American public grew increasingly aware of environmental concerns, environmentalism soon developed its own political and social agenda. At the core of the movement's goals rests the concept of sustainable development, an answer to the chronic economy versus ecology conflict. In the words of the 1987 Brundtland Report, Our Common Future, "sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs" (2). Sustainable development acknowledges the interconnectedness between ecology, economy, and a third factor: community. The principle asserts that by focusing on one of the smaller groupings in society, i.e. neighborhoods, cities, townships, the tensions between economic opportunities and ecological preservation can be treated on a local, and therefore better specified, scale. Or, as Minnesota's Office of Environmental Awareness concisely stated, "Sustainable development means development that maintains or enhances economic opportunity and community well-being while protecting and restoring the natural environment upon which people and economies depend" (3).
As a philosophy, sustainability has been embraced globally by countless individual countries and by the United Nations itself. Indeed, the UN cited sustainable development as the hallmark of its environmental creed at the 1992 United Nations Conference on Environment and Development held in Rio de Janeiro, Brazil. The conference ultimately produced Agenda 21, a document that affirms the UN's commitment to sustainability and offers a method for implementing the philosophy. Agenda 21 was re-examined recently at the 2002 Earth Summit in Johannesburg, where the ideals of sustainable development and the relationship between "the environment, poverty, and the use of natural resources" were again established as important for the upcoming century (4).
Certainly, the ideals of sustainability are often met with enthusiasm and pledges of commitment. Who can truly argue with a philosophy that, as Stanley Kuston and William Gibson, authors of The Ethic of Sustainability, assert, is "a call to ethical responsibility" (5)? The question now lies not in the ideal itself but in the practicalities and implementation of the philosophy.
And, ironically, the answer is implicit within the theory. Sustainable development requires local action and therefore depends upon individual behavior for its existence and sustenance. Furthermore, the movement has already started. The September 2002 issue of the Utne Reader highlights "Thirty Under Thirty," a list of thirty in-their-twenties youth (and some younger!) who have taken up the banner of activism for a variety of issues, including sustainability. The article describes the work of Malaika Edwards, a twenty-seven-year-old resident of Oakland, California who founded The People's Grocery, a "community-owned organic grocery store run exclusively by youth." Distressed by the lack of healthy wares offered in her neighborhood, and further urged by the growing population of unemployed youth, she conceived of a small market that could also, as she put it, "tackle issues of racism and globalization on a grassroots level" (6).
Edwards' story is one of many local tales that fulfill sustainability's credo. Ultimately, these regional actions can serve to create better normative behaviors for consumers. With these seemingly small acts, it could become customary for humans to ask questions weighing environmental viability against economic practicality, so that sustainable development becomes a part of any manufacturing procedure or any plans for construction. Sustainable development does not have to exist merely as an abstract principle; it only requires thoughtful consumption and decision-making on our parts.
Web Sources
1) World Scientist's Warning to Humanity
4) The United Nations
5) Kuston, Stanley and Gibson, William. "The Ethic of Sustainability"
6) Optiz, Maria. "Thirty Under Thirty: Young Movers and Shakers."
2) SD Gateway
3) Minnesota Office of Environmental Assessment
Non Web Sources
Utne Reader, Sept-Oct 2002.
Sexual Attraction Among Humans Name: Diana Fern Date: 2002-09-30 22:47:53 Link to this Comment: 3050 |
Sexual Attraction Among Humans
Being a heterosexual female in the twenty-first century, I pride myself on the fact that I take people at more than face value, that I appreciate human beings for their character rather than for their looks. I scoff at women who proclaim that they will not date a guy unless he has substantial material assets, a broad back, and good breeding. Yet why do I find myself making conversation with physically attractive males while blowing off the more unattractive ones? Why does my head whip around when I see a man in a Porsche? Why do my male friends all have the same prerequisites for the perfect female regardless of race and ethnicity: perky breasts, slim waist, and full lips? Despite most people's lofty notions of equality, and of beauty being in the eye of the beholder, we are all susceptible to certain physical and material traits that make some humans more desirable than others. Perhaps we cannot punish ourselves for our weakness when we see beautiful and successful people; part of the answer lies in the biology and evolution of humans. Males and females have different standards for a desirable mate, and we share many of these characteristics with other animals in the animal kingdom, yet these instincts are inherent for a reason: reproduction.
"As unromantic and pragmatic as it may seem, nature's programming of our brains to select out and respond to stimuli as sexually compelling or repelling simply makes good reproductive sense"(1) . Recent studies have indicated that certain physical characteristics stimulate a part of the brain called the hypothalamus, which is followed by sensations such as elevated heart rate, perspiration, and a general feeling of sexual arousal. So what visual queues instigate these feelings of sexual arousal in men? How does it differ from what women find attractive? "A preference for youth, however, is merely the most obviously of men's preferences linked to a woman's reproductive capacity"(2). The younger the female the better the capacity for reproduction, hence attributes that males find attractive and contingent on signs of youthfulness. "Our ancestors had access to two types of observable evidence of a woman's health and youth: features of physical appearance, such as full lips, clear skin, smooth skin, clear eyes, lustrous hair, and good muscle tone, and features of behavior, such as a bouncy, youthful gait, and animated facial expressions"(2) . Cross-cultural studies have found that men, despite coming from different countries find similar traits attractive in females. Men's preferences are biologically and evolutionarily hardwired to find signs of youth and health attractive in women in order to determine which females are best suited to carry on their gene, and legacy. Healthier and more youthful women are more likely to reproduce, and be able to take care of the children after birth, hence ensuring a perpetuation of the male's gene.
Scientists have also been establishing that scent plays an important role in deeming females attractive. At certain points during their menstrual cycle, women produce more or less estrogen. At certain times throughout the cycle, their scent can be more or less appealing to males. "A research team reports in the Aug. 30 NEURON that the brains of men and women respond differently to two putative pheromones, compounds related to the hormones testosterone and estrogen. When smelled, an estrogen-like compound triggers blood flow to the hypothalamus in men's brains but not women's, reports Ivanka Savic of the Karolinska Institute in Stockholm" (3).
Men are not the only ones subject to biological predispositions in deeming attraction. "Women are judicious, prudent, and discerning about the men they consent to mate with because they have so many valuable reproductive resources to offer" (2). Men produce sperm by the thousands, yet women produce about 400 eggs in their lifetime, and the trials of pregnancy and child rearing are long and arduous; hence their preferences and what they find sexually attractive in a male are based more on security and the longevity of relationships. Athletic prowess is an important attribute to most women, one that hearkens back to the beginning of man. An athletic and well-muscled male is more likely to be a good hunter and hence to provide for a family. A large and athletic male can also provide physical protection from other males.
I was speaking to one of my male friends the other day when he mentioned that when he was in a bar speaking to an attractive girl, he always lied about his profession, telling her he was either a lawyer, doctor, or investment banker. What do all of these professions have in common? Money. Women are attracted to a successful male because this is indicative of his ability to provide for a family. This is a desirable trait shared by females throughout the animal kingdom. "When biologist Reuven Yosef arbitrarily removed portions of some males' (Gray shrike, a bird that lives in the desert of Israel) caches and added edible objects to others, females shifted to the males with the larger bounties" (2). Yet a man must have more than just the resources to attract a female; he also has to be willing to share them. Women tend to be attracted to more generous men because this is indicative of how they will be treated in the future; a man cannot withhold his resources from a female and their offspring.
Sexual attraction does have biological and evolutionary components. Yet humans do have the ability to transcend the standardization of what is attractive. The factors that I touched upon can vary from person to person, yet they are all inherently a part of the human species. We are not fully beyond the basic drives of our biological and evolutionary makeup, yet not all of our desires for a sexual mate are purely physical and material; there is always the mysterious capacity to fall in love and maintain a lasting relationship with one other person.
1) The Evolutionary Theory of Sexual Attraction, a site posted by the University of Missouri, Kansas City.
2) Buss. The Evolution of Desire: Strategies of Human Mating. New York: HarperCollins, 1994.
3) Brain Scans Reveal Human Pheromones, a news source found through Encyclopedia Britannica using the search keyword "sexual attraction".
Do I Have Insomnia? Name: Maggie Sco Date: 2002-09-30 23:38:25 Link to this Comment: 3051 |
Do I Have Insomnia?
For about two and a half weeks now, I haven't been able to sleep properly. I feel tired at a relatively normal hour, around eleven or midnight, but when I go to bed I can't fall asleep. I lie awake for hours, and then when I do fall asleep I only sleep for an hour or so before waking up again. In search of a cure for my sleeplessness, I decided to research sleep disorders.
Sleep disorders are much more common than I had expected. According to the National Institute of Neurological Disorders and Stroke, about 60 million Americans per year suffer from some sort of sleeping problem. There are more than 70 different sleep disorders, generally classified into one of three categories: lack of sleep, disturbed sleep, and too much sleep (1). All three types of disorders are serious problems and can pose a grave risk to the sufferer's health, but because of my problem I have decided to focus my paper only on lack of sleep, or insomnia.
To understand why not getting enough sleep was affecting me so much, I needed to understand a little more about sleep. Sleep is a period of rest and relaxation during which physiological functions such as body temperature, blood pressure, and rate of breathing and heartbeat decrease (2). Sleep is essential for the normal functioning of the body's immune system and ability to fight disease and sickness, as well as for the normal functioning of the nervous system and a person's ability to function both physically and mentally (1). Sleep also helps our bodies restore and grow, and some tissues develop more rapidly during sleep. There is also a theory that while the deeper stages of sleep are physically restorative, rapid-eye movement (REM) sleep is psychically restorative. REM sleep also might incorporate new information into the brain and reactivate the sleeping brain (2). These are just a few of sleep's less obvious duties, not to mention that it refreshes us and makes us alert for the next day.
I always thought that insomnia was just not getting enough sleep. One interesting definition that I found described insomnia as the 'perception of poor-quality sleep' (3). This seems to indicate that it can almost be caused just by a person thinking that they aren't getting enough sleep. Insomnia can refer to difficulty falling asleep, trouble staying asleep, problems with not sleeping late enough, or feeling unrefreshed and tired after a night's sleep. Insomnia can cause such problems as sleepiness, fatigue, difficulty concentrating, and irritability.
Insomnia can be divided into three main categories: transient, intermittent, or chronic. Transient insomnia lasts from one night to four weeks. If transient insomnia returns periodically over months or years, it becomes intermittent. It is chronic insomnia when it continues almost nightly for several months (4). The transient and intermittent types often do not require more treatment than an improvement in sleep hygiene.
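Those duration-based categories can be read as a simple decision rule. The sketch below is just a restatement of the paragraph above, not clinical criteria; in particular, the twelve-week cutoff standing in for "several months" is my own assumption.

# A minimal sketch of the classification described above: up to four weeks is
# transient, recurring episodes over months or years are intermittent, and
# near-nightly sleeplessness for several months is chronic.

def classify_insomnia(duration_weeks, recurs_over_months=False, nearly_nightly=False):
    if nearly_nightly and duration_weeks >= 12:   # "several months" of nightly sleeplessness (assumed cutoff)
        return "chronic"
    if recurs_over_months:                        # episodes that come and go over months or years
        return "intermittent"
    if duration_weeks <= 4:                       # one night up to four weeks
        return "transient"
    return "unclassified"

print(classify_insomnia(2.5))                          # transient
print(classify_insomnia(1, recurs_over_months=True))   # intermittent
print(classify_insomnia(16, nearly_nightly=True))      # chronic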
There are many factors that can contribute to insomnia, and different issues trigger each type of insomnia. Transient and intermittent insomnia can be caused by something as simple as the sleeplessness that occurs just before a big test, and are very common and considered a normal stress reaction that will typically go away (5). Depression, internalized anger, anxiety and behavioral factors are the most common reasons for insomnia. The most frequent behaviors include consuming too much caffeine, alcohol or other substances, excessive napping, or stimulating activities such as smoking, exercising or watching television before bedtime (3). Insomnia can often be linked to mental illnesses or other diseases; for example, chronic insomnia is usually caused by depression (1). When a person is having sleep problems because of something else, it is called secondary insomnia. Environmental factors, such as discomfort or excessive light, and changes in a normal sleeping pattern, such as jet lag or moving to a new time zone, also cause transient insomnia (1). When none of these factors are contributing to a person's sleeplessness, they are considered to have primary insomnia, or insomnia that isn't caused by other obvious causes.
People who have insomnia tend to worry about the fact that they are not getting enough sleep, and sometimes their daytime behaviors contribute to increased lack of sleep. Worrying and stress will only increase insomnia, and habits developed to make up for a lack of sleep can delay the return of a normal sleep schedule. These behaviors include napping during the day, giving up on regular exercise, or drinking caffeinated beverages to promote staying awake or concentration (5). In order to regain normal sleeping patterns, insomniacs have to practice good sleep hygiene.
After learning about the causes for insomnia, I decided that I didn't have any of the main underlying causes such as alcoholism or depression, so I decided to research good sleep hygiene. Sleep hygiene consists of basic behaviors that promote sleep and try to change behaviors that might increase chances of insomnia. These habits include going to sleep and waking up at the same time, not taking naps during the day, avoiding caffeine, nicotine, and alcohol late in the day, getting regular exercise but not close to bedtime, not eating a heavy meal late in the day, not using your bed for anything other than sleep or sex, making your sleeping place comfortable, and making a routine to help relax and wind down before sleep, such as reading a book, listening to music, or taking a bath (1). Interestingly, while sleeping pills can be effective for transient or intermittent insomnia, they are not recommended and they may make chronic insomnia worse (1). The best way to cure insomnia is to use good sleep hygiene, and be aware of any underlying causes that might be causing it.
After learning about insomnia, I decided that I don't really have it. The only side effect that I have in common with the ones associated with insomnia is difficulty concentrating. I'm not irritable or sleepy during the day, and as far as I can tell I don't have any of the typical causes of insomnia. My sleep hygiene had been pretty good before I learned about it, but I did try to improve it as much as I could. The last two nights I have gotten six consecutive hours of sleep, and now I am feeling more tired during the day than when I was only getting three hours. But I do feel like my sleeplessness is declining, whatever the causes were.
The Female Praying Mantis: Sexual Predator or Misu Name: Michele Do Date: 2002-10-01 02:15:27 Link to this Comment: 3055 |
"Placing them in the same jar, the male, in alarm, endeavoured to escape. In a few minutes the female succeeded in grasping him. She first bit off his front tarsus, and consumed the tibia and femur. Next she gnawed out his left eye...it seems to be only by accident that a male ever escapes alive from the embraces of his partner" Leland Ossian Howard, Science, 1886. (7)
The praying mantis has historically been a popular subject of mythology and folklore. In France, people believed a praying mantis would point a lost child home. In Arabic and Turkish cultures, a mantis was thought to point toward Mecca. In Africa, the mantis was thought to bring good luck to whomever it landed on and even restore life to the dead. In the U.S. they were thought to blind men and kill horses. Europeans believed they were highly worshipful to God since they always seemed to be praying. In China, nothing cured bedwetting better than roasted mantis eggs. (7) The praying mantis is known for its unique look and very interesting aspects of behavior. Their bodies consist of three distinct regions: a moveable triangular head, abdomen, and thorax. The mantis is the only insect capable of moving its head from side to side like humans. Compound eyes give them good eyesight, but a mantis must move its head to center its vision optimally, also much like a human. Females usually have a heavier abdomen than males. Legs and wings are attached to the thorax, which is elongated to create a distinctive "neck". Its front legs are modified as graspers with strong spikes used for grabbing and holding prey. (2) To say the least, the mantis is a highly evolved curiosity with raptorial limbs that can regenerate when young, wings for flight, ears for hunting and evading predators, and mysterious behavior. With such highly evolved bodies for capturing and seizing prey, why are females infamous for their sexual cannibalism of males?
The mantis has an enormous appetite, eating up to sixteen crickets a day, but is not limited to just insects. They are carnivorous and cannibalistic, and only eat live prey in both nymph and adult stages. Although customarily they eat cockroach-type insects, they prefer soft-bodied insects like flies. They have been documented eating 21 species of insects, soft-shelled turtles, mice, frogs, birds, and newts. (2) Although the European mantis was introduced to the United States to eat insects that destroy farm crops, other species are known informally as "soothsayers," "devil's horses," "mule killers," and "camel crickets" since their saliva was mistakenly thought to poison farm livestock.
Because of the species' interesting sexual cannibalism, there have been many studies on the praying mantid's reproductive processes. Breeding season falls in the late summer in temperate climates. (5) The female secretes a pheromone to attract a male and show that she is receptive. The male then approaches her with caution. The most common courtship is when the male mantis approaches the female frontally, slowing down as it nears. This has also been described as a beautiful ritual dance in which the female's final pose signals that she is ready. The second most common courtship is when the male approaches the female from behind, speeding up as it nears. He then jumps on her back, they mate, and he flies away quickly. Most seldom is the courtship in which the male remains passive until approached by the female.
The actual mating response process has been described as an initial visual fixation on the female, followed by fluctuation of the antennae and a slow and deliberate approach. Abdominal flex displays with a flying leap on the back of the female are executed in order to mount her. The female lashes her antennae and there is rhythmic S-bending of the abdomen. During one experiment, mantids were observed in copulation for an average of six hours. The male flew away after mating. (6)
Although the praying mantis is known for its cannibalistic mating process, in actuality cannibalism only occurs 5-31% of the time. Especially in laboratory conditions of bright lights and confinement, the female is more likely to eat the male as a means of survival. "In nature, mating usually takes place under cover, so rather than leaning over the tank studying their every move, we left them alone and videotaped what happened. We were amazed at what we saw. Out of thirty matings, we didn't record one instance of cannibalism, and instead we saw an elaborate courtship display, with both sexes performing a ritual dance, stroking each other with their antennae before finally mating. It really was a lovely display". (7) There is one species, however, the Mantis religiosa, in which it is necessary that the head be removed for the mating to take effect properly. (5) Sexual cannibalism occurs most often if the female is hungry. But eating the head does cause the body to ejaculate faster. (3)
There are over 2000 species of praying mantids that display diverse shapes and sizes. They are camouflaged to blend into their environments from tropical flowers to fallen leaves. "And although they work around the same general lines- 'wait, seize, devour', behavior patterns between different species are as diverse as their body shape." (7) Some engage in sexual cannibalism more often than others. Those that do, it seems, are responsible for giving those that don't a bad reputation.
In our society, which loves gory tales of sex and violence, it seems that we have focused more on the fatal attraction aspect of the species than on trying to figure out exactly why they do it. After all, being eaten also benefits the male, since he serves as a kind of vitamin for his offspring so that they are strong enough to survive. And he gets to pass on his genes. The fact of the matter is that sexual cannibalism isn't that uncommon in nature. Among spiders, for example, male redbacks and orb-weavers fall prey to their lovers, not to mention the infamous black widow. Have scientists focused too much on the tales and myths of the deadly seductress? Have we misunderstood the praying mantis?
1) Praying Mantis, http://www.pansphoto.com/mantidae.htm
2) Praying Mantid Information, http://insected.arizona.edu/mantidinfo.htm
3) Sexual and Mate Selection, http://www.psy.tcu.edu/psy/Chapter%2013.rtf
4) The Wondrous Praying Mantis!, http://www-unix.oit.umass.edu/~abrams/mantis.html
5) The Praying Mantis, http://www.geocities.com/paraskits/index/praying_mantis/praying_mantis.html
6) The Praying Mantis, http://www.colostate.edu/Depts/Entomology/courses/en507/papers_1999/feldman.htm
7) You Give Love a Bad Name, http://www.scicom.hu.ic.ac.uk/students/features/caroline_mantis.html
Exercise and the "Runner's High": can it realy ma Name: Sarah Fray Date: 2002-10-01 11:07:07 Link to this Comment: 3060 |
Exercise and the Runner's high: can it really make you happy?
By: Sarah Frayne
The commonly referred-to "runner's high" is a euphoric, calm, and clear state reportedly reached after a long period of aerobic exercise. There is no concise single definition for the phenomenon because it is immeasurable; the concept is based solely on reports of personal experiences. Exercise is also said to produce a general boost in mood and happiness. This theory is the basis for numerous depression treatment programs that incorporate exercise. Many believe this mood change is a result of both mental and physical factors. Psychologically, exercise causes a boost in self-esteem, an improved self-image, confidence, and feelings of accomplishment, as well as a break from the other aspects of life (2). While these reasons to be happier during or after exercise are well accepted, the chemical processes behind the immediate "runner's high" and a lasting general mood change during and after exercise are greatly debated.
The first theory about the chemical cause of the "runner's high" was put forth in the 1970s. Jogging was popularized around the same time a new type of brain chemical was discovered. These chemicals, now called endorphins, were found to be very similar to morphine in chemical structure and pain-killing abilities (7). In fact, morphine attaches to the same receptors in the brain as endorphins. The scientists found the similarities so striking that they actually named the chemicals 'endorphins', meaning "morphine" and "made by the body" (1). These endorphins became the popular answer to anything that gave pleasure (they are also commonly associated with orgasms). The theory that endorphins caused the high during exercise was supported when early research found heightened levels of endorphins in the bloodstream during and after exercise (1).
Scientists found it hard to investigate the exact relationship between these new chemicals and the euphoric effects of exercising because of the variability involved in the qualitative nature of exercise difficulty and the intricacy of evaluating whether the endorphins were not only present but also responsible for the high. To this end, Virginia Grant, a psychologist, did experiments with rats comparing the behavior of rats addicted to morphine and rats that exercised. The experiment allowed rats to eat for one hour a day. Some rats were left in an empty cage the remaining 23 hours, while others were left in cages with wheels. Those left in the empty cages were able to eat enough in the eating hour to stay healthy, while those with the running wheels showed an inverse relationship between eating and running and eventually ran so much they died of starvation (1). It was concluded that exercise stimulated the same portion of the brain as addictive drugs. Any addictive drug causes a surge of dopamine in the brain, resulting in the building of the small proteins enkephalin, dynorphin, and substance P (1). Further, rats in a cage with a running wheel would run until these three chemicals were present in the brain. While research is still being conducted on the subject, this phenomenon in rats could help to explain the addiction to exercise sometimes seen in people with eating disorders. The dangerous combination of over-exercising and anorexia is strikingly similar to the rats' lowering of caloric intake the further they ran.
The experimentation with rats built a strong case that exercise is addictive; however, it failed to address the specific role of endorphins in the process. The fact that endorphins are present during exercise is not surprising: endorphins act as pain reducers and are released when the body is under stress. The mere presence of the chemical does not prove that it is the main factor causing the elation. Further, some scientists point out that the endorphins don't leave the bloodstream and therefore do not stimulate the receptors in the brain (1). Also, there are other chemicals within the brain capable of causing good feelings comparable to those during and after exercise. Serotonin is one such chemical. Similar to endorphins, serotonin is released into a portion of the brain where it activates receptors, causing heightened emotions and senses. The chemical also often causes a suppression of appetite. However, there has not been much research done on the role of serotonin in the exercise process (3).
The most recent findings in the search have led scientists to focus on a chemical called phenylethylamine, also found in chocolate (6). Phenylethylamine (PEA) had previously been found to relieve depression in two-thirds of depression cases. There are theories relating low levels of phenylethylamine to the presence of depression, making it a natural candidate for explaining the antidepressant effects of exercise. The chemical has also been found to cause heightened activity and attention in animals (5). It is also notable that phenylethylamine has been able to boost moods as quickly as amphetamines, but without side effects or the buildup of tolerance to the chemical (5).
Ellen Billet of Nottingham Trent University studied the levels of phenylethylamine in 20 young men before and after exercising on a treadmill at 70 percent of maximum heart-rate capacity. The men were asked to rate the level of exertion they felt, and were then tested for phenylethylamine. The rise in the level of the chemical was around 77 percent, with huge variances in levels between individuals (7).
It has become accepted in the scientific community that there is some sort of "runner's high," or general mood elation, associated with exercise on a physical level. The research into the processes behind this phenomenon has so far looked mainly at people's immediate chemical levels after exercise. However, how long these effects last matters greatly for depression treatment and for an overall healthy, happy lifestyle. Donna Kritz-Silverstein of UCSD found that exercise must be done on a regular basis to maintain the positive effects. She found that those who exercised had a lower Beck Depression Inventory (BDI) score, meaning they were generally in a better mood. Ten years later, those who had stopped exercising had BDI scores similar to those who had never exercised, while those who continued to exercise were able to maintain a low BDI (4).
The research on the chemical processes behind exercise's effects on mood, and on how long those effects last, is very new and ongoing. There are incredible implications for the treatment of depression and for the general ability of people to maintain happy lifestyles through exercise. The study of these antidepressant chemicals also helps to reveal the chemical properties of depression and mood. Further, the drug-like qualities of exercise open an avenue for investigating exercise addiction and the eating disorders with which it is often associated.
Websites
1)JS Online, a collection of articles by different people
2)International Association of Mind Body Professionals, a collection of articles about the mind and body and the interactions between the two
3)About.com Page, informational page about psychology with new discoveries, articles and general information
4)UniSci Home Page, science page with articles on numerous topics
5)Chocolate Information Page, a site which includes information about drugs, chocolate, and the chemicals behind these substances
6)BBC News Page, a page with the news in the UK
7)Cosmiverse Home Page, a page with science news and articles
8)Advanced Chemistry Development Home Page, a site with the latest on chemistry
Living Dead, Walking Life Name: Lydia Parn Date: 2002-10-01 15:43:41 Link to this Comment: 3076 |
It seems that everything around us is coming to an end. Walk down the aisle of a grocery store and you'll see cans of oranges with expiration dates stamped on the side to remind us of the transient nature of grocery goods. A CD doesn't play forever and a candle always burns out. Even fun has to end, for wild nights of hedonism are bound to turn once again into blue Mondays. So, if everything around us is reaching its grand finale, where does biology, the study of life, end?
Before advances in modern science made it possible to restore broken hearts and weary lungs to their original operative states, death was easy to notice. When the beat of the heart stopped, one was considered dead. Now, with technology developed to resurrect the dying, the once clean-cut line between life and death has been dulled, only to incite a fury of discussion (3). This debatable issue exists on a variety of levels for it is a grouping of diverse "philosophical, theological and scientific ideas about what is essential to embodied human existence"(2). However, before delving into a discussion of death, it is first important to think about what constitutes life.
Simply put, our bodies are made up of a collection of cells. If one of these cells were extracted from a multicellular organism and placed in a solution with the appropriate nutrients, it would endure with no great trouble. It would keep on performing the basic metabolic processes considered necessary to life: taking in nutrients, breaking them down to create energy, then using that energy to divide, expel wastes, and further develop. This sort of life could be considered metabolic. The next step up would be the level of tissues and organs. Tissues are basically collections of cells grouped together to carry out the same functions described above. Extracting muscle tissue, which is composed of cells whose purpose is to contract upon correct stimulation, placing it in a supporting solution, and artificially energizing it will cause it to contract. This sort of life can be considered organic life. Further grouping individual organs together, as in our own bodies, adds another plane of life to the framework. These examples illustrate the view that life is narrowly biological in nature and would further suggest the cause of death to be the malfunction of particular organizational structures. To state that the cessation of human life is a clear-cut biomedical process would be to refute the idea of consciousness: the soul, the spirit, the mind (3).
"I think, therefore I am" - Rene Descartes
Released in 1981, the Uniform Determination of Death Act (UDDA) was a landmark statement that specified two alternative criteria for determining death.
An individual who has sustained either (1) irreversible cessation of circulatory and respiratory functions, or (2) irreversible cessation of all functions of the entire brain, including the brain stem, is dead. A determination of death must be made in accordance with accepted medical standards (5).
The UDDA thus recognizes that death can be determined by the traditional cardiopulmonary criteria, yet also authorizes brain death to be declared for patients who fail to meet the traditional criteria because their cardiopulmonary functions are artificially maintained. Because the UDDA adopts either cardiopulmonary or neurological criteria for the determination of death, the act has been criticized by many as confused. Both neurological and cardiopulmonary criteria can serve as signs that an organism's capacity for life has been permanently lost. Since it is respiratory and cardiac activity, not brain activity, that can be artificially maintained, many claim that neurological criteria provide direct evidence of death, while cardiopulmonary criteria provide only indirect evidence. In the event that respiration and cardiac activity are artificially sustained, neurological criteria must be used to certify someone as dead (1).
Medical advances have made it possible to transplant organs and tissues, and the expansion of technological methods to artificially sustain respiratory and circulatory functions has made it crucial to reconsider our understanding of death and has encouraged the adoption of brain-related criteria for death. When somebody passes away, it is not the loss of physiological function that is missed, but the person who was sustained by those functions. The brain contains the physiological centers responsible for integrating the functions of the body's various other organ systems and tissues, so it is the death of the brain that results in the loss of integrated functioning (5). Consciousness and cognition reside in the cerebral portion of the brain, and by focusing on this, advocates of brain-based criteria do not dismiss the traditional view of death based on cardiopulmonary criteria; rather, they point to the profound difference between conceiving of human life "as a heart-centered reality and as a brain-centered reality" (2).
However, advocates of brain-based criteria tend to split into two schools of thought: "whole-brain" versus "higher-brain" criteria for a standard of death (4). According to advocates of whole-brain criteria, a person is brain dead only when the entire brain, including the stem, is dead. In application a few problems do arise, however, since patients who meet the standard clinical tests for brain death may still maintain some brain function, such as the secretion of neurohormones or coordinated activity within isolated nets of neurons. This has driven some to further refine neurological criteria for brain death based upon functional differences between the different parts of the brain. The brain stem is the elemental constituent that supports most vegetative functions essential for life - regulation of wake-sleep cycles, respiration, swallowing. "When the brain stem ceases to function, the person loses capacity for spontaneous circulatory and respiratory function as well as the capacity for consciousness" (2). The issue dividing advocates of whole-brain and brain-stem death criteria is essentially which brain structures and functions must be lost in order to certify that the body has lost the capacity for spontaneous regulation of its vital processes. What the two measures have in common, though, is that both reflect the concept of death as a loss of integrated functioning of the organism as a whole, body and soul.
The higher-brain formulation proves tricky, though, when put into practice. This is most easily illustrated by considering higher-brain death in the context of patients with a condition referred to as a "persistent vegetative state" (PVS). In such patients, all higher brain functions are lost; however, brain stem functions remain largely intact. With medical care, such as respirators and artificial nutrition, people in a PVS can live for many years (1). If the higher-brain criterion for death is employed, such patients would be considered dead. In situations such as this, care must be taken to distinguish between the question of when it is morally permissible to withhold treatment and allow a patient to die, and the question of when it is right to declare a patient dead. In the end, one's response to brain-death standards depends both on ethical judgments and on one's degree of trust in the medical profession itself.
1) Brain Death and Technological Change: Personal Identity, Neural Prostheses and Uploading, James J. Hughes, 1995.
2) The Determination of Death, May 1997.
3) Definition of and Criterion for Determining Death, Igor Jadrovski.
4) Report from the National Institute of Philosophy & Public Policy, Consciousness, and the Definition of Death, 1998.
5) Neurology: Brain Death Criteria, Carlos Eduardo Reis
Poor Man's Heroin Name: Brie Farle Date: 2002-10-02 13:49:02 Link to this Comment: 3084 |
A plaintiffs' group in Washington, D.C., has filed a $5.2 billion lawsuit against Purdue Pharma LP and Abbott Laboratories Inc., charging the drug companies with allegedly failing to warn patients that the painkiller OxyContin is dangerously addictive. Do you think they'll win?
" Oxy, oxies, oxycotton, OC s, killers, oceans, O's, oxycoffins, Hillbilly Heroin." Each of these words is another name for the drug, OxyContin, marketed by Purdue Pharma LP. Addiction and abuse of the drug, crime and fatal overdoses have all been reported as a result of OxyContin use. (1).
This drug was approved by the FDA in 1995, and is a 12-hour time-released form of oxycodone, an opium derivative, which is the same active ingredient in Percodan and Percocet. OxyContin is the longest lasting oxycodone on the market. Opiates provide pain relief by acting on opioid receptors in the spinal cord, brain, and possibly in the tissues directly. Opioids, natural or synthetic classes of drugs that act like morphine, are the most effective pain relievers available. (2).
Oxycodone has been around for decades and is taken for post-surgical pain, broken bones, arthritis, migraines, and back pain. Oxycodone is a central nervous system depressant. It appears to work by stimulating the opioid receptors found in the central nervous system, which activate responses ranging from analgesia to respiratory depression to euphoria. While Percocet and Percodan contain about five milligrams of oxycodone, OxyContin tablets contain oxycodone in amounts of 10, 20, 40, and 80 milligrams. (4). A 160-milligram tablet became available in July 2000. Thus, OxyContin is a high-potency painkiller, intended only for use by terminal cancer patients and chronic pain sufferers. People who take the drug repeatedly can develop a tolerance or resistance to the drug's effects. A cancer patient can take, on a regular basis, a dose of oxycodone that would be fatal in a person never exposed to oxycodone. Most individuals who abuse oxycodone seek to gain the euphoric effects, mitigate pain, and avoid the withdrawal symptoms associated with oxycodone or heroin abstinence. The strength, duration, and known dosage of OxyContin are the primary reasons the drug is attractive to abusers and legitimate prescribers alike.
Although OxyContin is designed to be swallowed whole, abusers have found other ways to ingest it. Abusers often chew the tablets, or crush them and snort the powder. Because oxycodone is water-soluble, crushed tablets can be dissolved in water and the solution injected. Both of these methods lead to the rapid release and absorption of oxycodone. Combining any use of OxyContin with alcohol is deadly. OxyContin and heroin have similar effects, so both appeal to the same abuser population. The powerful prescription pain reliever has become a hot new street drug. "It's the so-called poor man's heroin," says Capt. Michael Holsapple of the Kokomo Police Department. (5). A 40 mg tablet of OxyContin by prescription costs approximately $4, or $400 for a 100-tablet bottle in a retail pharmacy. Generally, OxyContin sells for between 50 cents and $1 per mg on the street. Therefore, the same 100-tablet bottle purchased for $400 at a pharmacy can sell for $2,000 to $4,000 illegally. How does this compare to the street price of heroin? One bag of heroin sells for about $40, according to 1998 findings in Ireland. (6,7). A bottle of OxyContin containing one hundred tablets is clearly more for the money. (4).
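The street markup follows directly from the figures quoted above. Below is a minimal arithmetic sketch of that calculation, assuming the 40 mg tablet size, the $400 pharmacy price for a 100-tablet bottle, and the 50-cent-to-$1-per-milligram street range cited in the sources; the variable names are introduced here only for illustration.

```python
# Sketch of the street-price arithmetic described above (figures taken from the cited sources).
TABLET_MG = 40                       # milligrams of oxycodone per tablet
TABLETS_PER_BOTTLE = 100             # tablets in the pharmacy bottle
PHARMACY_PRICE = 400                 # dollars for the 100-tablet bottle
STREET_PRICE_PER_MG = (0.50, 1.00)   # reported street range, dollars per milligram

total_mg = TABLET_MG * TABLETS_PER_BOTTLE                         # 4,000 mg per bottle
street_value = [rate * total_mg for rate in STREET_PRICE_PER_MG]  # bottle's value at street rates
markup = [value / PHARMACY_PRICE for value in street_value]       # ratio over the pharmacy price

print(f"Street value of one bottle: ${street_value[0]:,.0f} to ${street_value[1]:,.0f}")
print(f"Markup over the pharmacy price: {markup[0]:.0f}x to {markup[1]:.0f}x")
# Prints $2,000 to $4,000 -- a 5x to 10x markup, consistent with the figures in the text.
```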
Sometimes, OxyContin can be obtained easily in clinics. For a brief visit and the appropriate presenting complaint, patients may leave with a prescription for OxyContin. Many physicians are not formally trained to identify drug-seeking behavior. (4). In April 2002, the US Drug Enforcement Agency reported that OxyContin had been implicated as the direct cause or a main contributing factor in 146 deaths and a likely contributor in an additional 318 deaths. Based on their findings, only nine of the reported deaths involved injecting the drug and only one death was related to snorting. This indicates that even non-abusers may be adversely affected. It has been alleged that Purdue Pharma LP has marketed the drug excessively while underplaying how addictive it is. Reported warnings about the drug found on the Internet include:
1. This medicine can be habit-forming. You should not use more than the prescribed amount.
2. Whole oxycodone tablets may appear in your stool. This is no cause for worry because the medicine is absorbed when the tablet is still in your body.
3. If you are pregnant or breastfeeding, talk to your doctor before taking this medicine.
4. This medicine can cause dizziness or drowsiness. Be careful if driving a car or using machinery.
5. If you have taken this medicine for several weeks, ask your doctor before stopping, as you may need to take smaller and smaller doses before you stop the medicine completely. (5).
These precautions are not uncommon for any prescription pain reliever. However, Purdue Pharma LP has not included information regarding the drug's similarity to heroin, and has not stressed the severity of the complications. A recent newspaper article reported that OxyContin's sales, which exceeded $1 billion in the United States in the year 2000, are said to be the result of an aggressive marketing strategy to physicians, pharmacists and patients that misrepresented the appropriate uses of OxyContin and failed to adequately disclose and discuss the safety issues and possible adverse effects of OxyContin use (4).
Seven people who are former addicts or relatives of addicts filed the Washington D.C. lawsuit. In May, Purdue said it had met with officials from the DEA because of the agency's concerns about the drug's illegal diversion and abuse. Around the same time, Purdue Pharma said it had tried to reduce abuse of the drug by halting distribution of the 160 mg tablets. According to the lawsuit, the defendants made misrepresentations or failed to adequately and sufficiently warn individuals regarding the appropriate uses, risks, and safety of OxyContin. Specifically, the suit quotes a May 2000 U.S. Food and Drug Administration warning letter to Purdue Pharma ordering the company to cease use of an advertisement for the drug that appeared in a medical journal. A section from the warning letter is quoted that suggests the advertisement inaccurately represents the drug as a first-line treatment for osteoarthritis. The suit alleges inappropriate marketing of OxyContin and that the drug has been inappropriately prescribed and used, unnecessarily putting people at risk of addiction to OxyContin (4).
Should it be assumed that the general public is aware of the effects of opiates? Is it the responsibility of the physician to be suspicious of the warning labels on every newly marketed drug? Does the word "addiction" always prevent chronic pain sufferers from taking a miracle drug? And finally, will anyone, especially teens, ever stop experimenting with drugs? Your answers to the above questions were probably doubtful, but this does not mean that the D.C. lawsuit isn't worth fighting for. We should be personally careful, but we also need to emphasize our right to be thoroughly and accurately informed about what we put in our bodies.
2) Government Information about OxyContin, facts
3) About OxyContin, facts and information
4) OxyContin Addiction Help, facts and resources on where to get help for addiction
5) Yahoo Health, basic information
6) MapInc, article about the increase in heroin prices in Ireland
7) Oanda, monetary conversion site
The Letter B Name: Catherine Date: 2002-10-02 19:36:00 Link to this Comment: 3095 |
Kawasaki Disease - No not the motorcycle Name: Yarimee Gu Date: 2002-10-10 22:15:43 Link to this Comment: 3253 |
When hearing the word Kawasaki the first thing to come to my mind was always the motorcycle. This was until the day I came into contact with the disease itself. Although I was not directly affected, my younger brother was. He was diagnosed at the age of nine, when I myself was ten. Because of my age at the time I did not really understand the disease. All I knew was that my brother had a heart condition serious enough to send him to the hospital for a while and that he had to return for follow-up visits for up to three years after this. It was not until recently that I asked myself, what is Kawasaki Disease?
Kawasaki disease is a fairly recently recognized disease characterized by inflammation of arteries; the coronary arteries (those that supply blood to the heart muscle) are most at risk. Tomisaku Kawasaki released the first report on the condition in 1967, and it was only in the 1970s that recognition as a distinct disease came about. Since then, "Kawasaki disease (also known as KD) has become the leading cause of acquired heart disease among children in North America and Japan." (3)
The symptoms of KD include a very high or spiking fever (104°F or higher) that lasts a few days to about a week and does not respond to treatment; red lips or mouth; red eyes (similar to conjunctivitis) without mucus discharge; peeling of the top layer of the tongue (called "strawberry tongue" for its bright red, glossy look); swollen hands and feet that may also become red; and swollen lymph nodes. The following table shows the criteria used to diagnose KD, and a simple sketch of the decision rule follows the table. (2)
Table 1. CDC CRITERIA FOR DIAGNOSIS OF KAWASAKI DISEASE
Fever >5 days unresponsive to antibiotics, and at least four of the five following physical findings with no other more reasonable explanation for the observed clinical findings:
1. Bilateral conjunctival injection
2. Oral mucosal changes (erythema of lips or oropharynx, strawberry tongue, or drying or fissuring of the lips)
3. Peripheral extremity changes (edema, erythema, or generalized or periungual desquamation)
4. Rash
5. Cervical lymphadenopathy >1.5 cm in diameter
Centers for Disease Control (1980). Kawasaki disease - New York. Morbidity and Mortality Weekly Report, 29:61-63.
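Read purely as a decision rule, the table reduces to a fever check plus a count of physical findings. The sketch below is only an illustration of that logic; the finding labels are shorthand introduced here, the "no other more reasonable explanation" clause is left to clinical judgment, and nothing in it should be mistaken for a diagnostic tool.

```python
# Minimal sketch of the CDC decision rule in Table 1 (illustration only, not a diagnostic tool).
CDC_FINDINGS = {
    "bilateral conjunctival injection",
    "oral mucosal changes",
    "peripheral extremity changes",
    "rash",
    "cervical lymphadenopathy >1.5 cm",
}

def meets_cdc_criteria(fever_days, fever_unresponsive_to_antibiotics, observed_findings):
    """True when the tabulated rule is met: fever lasting more than five days that does not
    respond to antibiotics, plus at least four of the five physical findings."""
    prolonged_fever = fever_days > 5 and fever_unresponsive_to_antibiotics
    return prolonged_fever and len(set(observed_findings) & CDC_FINDINGS) >= 4

# Example: six days of unresponsive fever and four of the five findings satisfies the rule.
print(meets_cdc_criteria(6, True, {
    "bilateral conjunctival injection", "oral mucosal changes",
    "peripheral extremity changes", "rash",
}))  # True
```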
Other symptoms, which may or may not develop but often help in diagnosing the disease, are swelling of the joints and extremities, irritability, diarrhea, nausea, vomiting, a rash, abdominal pain, and swelling of the gall bladder. (2)
In most cases these symptoms disappear over a period of a couple of months, even if untreated. However, there can be lasting and extremely serious effects on the coronary arteries. Because these arteries become inflamed, they can be significantly damaged. This in turn can cause small sacs, called aneurysms, to form in the artery walls. These allow blood to pool, and platelets in the blood begin to gather. After a while they form a blood clot that slows or stops the blood from getting to the heart. If the flow of blood is stopped, the child can have a heart attack. Another complication is the scarring of the arterial walls that results from the healing (also known as the regression) of an aneurysm. This causes the walls to thicken, making the arteries narrower, which can lead to the same result as an aneurysm. Even after these aneurysms heal, the arterial wall will never be the same; however, long-term research has not been done to determine the effects of this later in life. (4)
Despite the fact that this is a newly recognized disease, extremely effective treatments for it have been developed over the years. Aspirin is used to thin the blood to lessen the chance of platelets forming blood clots. It is also used to help reduce the extreme fevers at the beginning of the disease and to prevent inflammation of the arteries. A product called gamma globulin is also used to treat KD. This is essentially antibodies from donated blood, which help to lower the inflammation of the coronary arteries and protect them from the damage it can cause.
Unfortunately, modern science has been unable to find a cause for KD, either microbial or infectious. As such, there is no way of preventing the disease or even of knowing who is more susceptible to it. What is known as of today is that it is a non-communicable disease, meaning that it is not contagious; you cannot catch KD by being near someone who has it. Also, Kawasaki is a children's illness: about 80 percent of the people with Kawasaki disease are under age 5. Most of those affected are boys, who develop the disease about 1.5 times as often as girls, and children of Asian descent. In the United States there have been reports of over 1,800 cases being diagnosed annually. (1)
Because of this, it is extremely important that research be conducted and information distributed about this disease. It is necessary to raise awareness and to gather more information in the hope of one day deciphering this disease and being able to do away with it.
1)AMERICAN HEART ORGANIZATION
2)KAWASAKI DISEASE FOUNDATION
3)THE AMERICAN ACADEMY OF PEDIATRICS
4)THE HOSPITAL FOR SICK CHILDREN
I Have PMS and a Handgun, Any Questions?: Demysti Name: Adrienne W Date: 2002-11-08 12:54:33 Link to this Comment: 3613 |
Most of us are familiar with PMS, the acronym that stands for Premenstrual Syndrome, perhaps largely through the jokes told about it. However, many women who suffer from PMS or PMDD will insist that these disorders are no laughing matter. PMDD, or premenstrual dysphoric disorder, is perhaps less well known, but its impact on a woman's life and health is equally, if not more, significant. Both PMS and PMDD refer to physical and mood-related changes that occur during the last one or two weeks of the menstrual cycle. What is the difference between the two disorders? For the most part, PMDD is simply a more acute manifestation of PMS, as its symptoms are more severe. Women with PMDD experience more severe mental symptoms than women with PMS, which is looked on as a more physical condition. Both are serious medical conditions that require treatment, even though their origins are still somewhat of a mystery.
The symptoms of PMS include bloating, weight gain, poor concentration, sleep disturbance, appetite change, and psychological discomfort (1). The main symptoms of PMDD, by definition, are actually the core symptoms of depression: irritability, anxiety, and mood swings. Other symptoms of PMDD include decreased interest in daily activity, difficulty concentrating, decreased energy, and sleep disturbances. Thus, although PMS also affects mental health, it does not interfere with daily functioning as much as PMDD does. Premenstrual dysphoric disorder, the severest form of PMS, interferes with a woman's quality of life, much like depression. For this reason, many doctors believe that it, like depression, should be looked at as a serious medical condition that requires treatment.
The impact PMDD has on a woman's life and the lives of those around her is not trivial, and should be taken more seriously by our society. According to a leading researcher in this field of study, Dr. Jean Endicott: "Many women report that their PMDD symptoms have caused seriously impaired relationships with relatives, friends, or co-workers, as well as with spouses or partners. Often, relationships have been lost because others say they can no longer 'put up with' some of the recurrent behaviors" (4). Of course, any disorder that interferes with the quality of one's life should be taken seriously. Unfortunately, in our society these disorders are looked upon as a joke more than anything. PMS is not covered in the medical curriculum; doctors who wish to seek more information and research on the subject must do so independently. Perhaps it is for this reason that the few researchers in the field remain unclear on the causes of PMS and PMDD.
There are, however, quite a few hypotheses about the causes of these disorders. Hormones must play a role, because women report the disappearance of symptoms after their ovaries are removed. There is also evidence that the brain chemical serotonin is a factor in the more severe PMDD. Studies attempting to link PMS and PMDD to genetics have also been conducted. In one study, daughters of mothers with PMDD were more likely to have it themselves. Also, 93% of identical twins in the study shared PMDD, a higher percentage than among the fraternal twins in the study (44%) (4). Another leading researcher in the field, Dr. Susan Thys-Jacobs, hypothesized that calcium deficiencies are the cause of PMS and PMDD, and her study supported this hypothesis. However, there is still much to be uncovered in the mystery of these disorders. Dr. Thys-Jacobs is currently testing her theory in an NIH-funded study.
Given the similarities of PMDD to depression, some doctors prescribe antidepressants to patients that suffer from PMDD. Many researchers believe that one of the causes of PMDD is a low level of serotonin, which is also a cause of depression. For this reason, SSRIs, antidepressants such as Prozac, Paxil, and Zoloft that increase serotonin levels in the brain, are considered to be effective treatments for PMDD by some doctors (2). However, there is some controversy in the medical community as to whether medication is a necessary and/or appropriate treatment. According to the PMS Project, an organization committed to the advancement of PMS and PMDD research, studies show that the most successful treatment of these disorders is a change of lifestyle and nutrition. The organization also argues that PMS and PMDD are too complex and have too many diverse symptoms to be treated with a single drug effectively (3).
What, then, would be a more natural treatment that fits within the parameters of lifestyle and diet change? Dietary change includes the elimination of all caffeine and a low-carbohydrate diet that especially avoids simple, refined sugars (1). Vitamin supplements are also recommended for sufferers of PMS and PMDD: calcium, vitamin B6, vitamin E, and tryptophan, a precursor of serotonin, have been shown to ease symptoms in some women. Lifestyle changes include regular exercise and therapy. It has also been found that hormonal therapy, such as oral contraceptives with estrogen and progesterone, may be used to decrease the symptoms of PMS and PMDD.
Although the causes of these disorders are still unknown, women do have treatment options that have been proven to help ease symptoms. The problem, however, is that our society does not treat PMS and PMDD as serious disorders; and when they are treated seriously, the assumption is often that women should simply be medicated and silenced. Hopefully, more women will take the initiative to demystify these disorders and help themselves, because help is available. Perhaps more interested members of the medical community will also conduct more extensive research to advance treatment. Either way, it is important for women to understand more about these disorders so they can help themselves.
1)Explains the differences between PMS and PMDD
2)Gives examples of the causes and treatments of PMDD and PMS
3) Official site of the PMS project
4) A comprehensive explanation of PMDD
Are you SAD: The reality of Seasonal Affective Di Name: Kathryn Ba Date: 2002-11-08 13:47:49 Link to this Comment: 3617 |
The winter blues. Cabin fever. These terms bring to mind the glum feeling that overcomes many people during the winter months. Does this seasonal depression have any validity, or do we just get antsy when the temperature turns from scorching to frigid? About twenty years ago, Herbert E. Kern noted in himself regular seasonal emotional cycles, which he hypothesized might be related to seasonal variations in environmental light. He then learned from the findings of Alfred J. Lewy et al. that bright environmental light could suppress the nocturnal secretion of melatonin by the pineal gland in humans. In 1980-1981, Dr. Norman Rosenthal admitted Kern to his psychiatric unit and treated his symptoms of depression with bright light. Amazingly, the treatment worked. The follow-up study, in which the original results were replicated, led to the description of Seasonal Affective Disorder (SAD) in 1984 (1).
In North America, SAD affects four women for every one man, with an overall incidence ranging from 2% to 10% and with more people affected at higher latitudes. The frequency of SAD in North America was found to be double that in Europe, "suggesting that climate, culture, and genetics may be more important factors" (2). The differences in the epidemiology of SAD may cause one to pause. How can gender and cultural differences be accounted for in relation to SAD, and do gender differences arising from culture explain both the epidemiology and the etiology of the disease? Have populations in which women hold traditional gender roles, as opposed to many American and European women who are rapidly blurring gender boundaries, been studied?
General cultural differences between the United States and Europe are another important point to consider. Do the two cultures place different emphases on certain events, such as the stresses associated with the winter holiday season, which in turn lead to a larger prevalence of SAD in America? Are post-holiday blues the result of our culture, and do they lead to SAD? Do periods of economic decline contribute to the overall stress of one region, which then leads to more incidents of SAD? These questions, among many others, may cause one to wonder what causes this disease: the environment, biology, or culture?
The etiology of SAD remains a topic of great debate. Rosenthal notes that "winter changes often involve energy conservation...many SAD symptoms can be seen as conserving-overeating, oversleeping and low sexual ability...Seasonal adaptations are adaptive in some circumstances, but not in humans, who must function at the same level all year" (3). One might wonder if his explanation is a bit shortsighted. Humans are not so different from their fellow mammals, birds, fish, reptiles, etc. when it comes to conserving energy in the winter. Perhaps the necessity to conserve energy is not as obvious as it once was, before modern technology provided Gore-Tex jackets, Polar-fleece gloves, or SmartWool socks. Before these luxuries, humans probably considered the importance, and indeed necessity, of keeping warm and conserving energy during the winter months. This may have taken the form of eating more food to add more fat to one's body, remaining in bed or next to a fire, and participating in as little physical exertion as possible. Not only is Rosenthal's explanation insufficient for these reasons, he also does not take into account other factors, such as other biological influences or a genetic component.
Another theory states that excessive or inadequate levels of neurotransmitters, such as serotonin, may cause depression. Interestingly, serotonin is known to decline in the autumn and throughout winter, a fact that might allow for correction by appropriate medications. Disturbed circadian rhythms have also been pointed to as the cause of SAD. At night, circadian rhythms lower body temperature and trigger the production of melatonin, a hormone that enhances sleep. If these are not functioning properly, it is theorized, one might experience symptoms associated with SAD (2).
A genetic factor might also provide an explanation. In a study of monozygotic and dizygotic twins, seasonality was shown to be a heritable trait (4). This area of research leads to interesting questions. For example, further research might attempt to determine what specific genetic factors are responsible for SAD. Also, one might ask if SAD is the result of genetic differences and environmental influences. A study that examines monozygotic twins and dizygotic twins both separated at birth and raised in different environments, and data regarding the frequency of both twins having SAD, would be useful to determine what the effects culture and genetics may have on this disease.
It is plausible that environmental, biological, and cultural factors combine to determine the occurrence of SAD among certain populations. For example, do people who live in northern latitudes have a greater chance of having SAD than people who live in southern latitudes, merely as a result of geographic and environmental differences? Do people inherit a SAD-related gene from ancestors who made physical adaptations in order to survive in a northern climate? If physical adaptations did occur, did those adaptations lead to cultural differences, which in turn increased the likelihood of SAD? As of now, the answers to these questions remain speculative, and despite the general success of treating SAD, its etiology remains elusive.
The standard US manual of psychiatric diagnoses, the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), lists SAD as a subtype of major clinical depression. It is called a "specifier" because it refers to the seasonal depressive episodes that can occur within major depression and bipolar disorders. Specifically, the criteria for SAD are as follows (a schematic sketch of how they combine appears after the list):
A. Regular temporal relationship between the onset of major depressive episodes and a particular time of the year (unrelated to obvious season-related psychological stressors).
B. Full remission (or a change from depression to mania or hypomania) also occurs at a characteristic time of the year.
C. Two major depressive episodes meeting criteria A and B in the last two years and no non-seasonal episodes in the same period.
D. Seasonal major depressive episodes substantially outnumber the non-seasonal episodes over the individual's lifetime (5).
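Taken together, the four criteria amount to a rule over a patient's history of depressive episodes. The sketch below is only a schematic of how they combine; the episode record format and the twofold reading of "substantially outnumber" are assumptions introduced here, and the DSM leaves such judgments to the clinician.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    year: int
    seasonal_onset: bool      # onset at the characteristic time of year (criterion A)
    seasonal_remission: bool  # full remission at the characteristic time of year (criterion B)

def meets_seasonal_specifier(episodes, current_year):
    """Schematic combination of criteria A-D; the thresholds are illustrative assumptions."""
    def is_seasonal(e):
        return e.seasonal_onset and e.seasonal_remission
    seasonal = [e for e in episodes if is_seasonal(e)]
    nonseasonal = [e for e in episodes if not is_seasonal(e)]
    recent = [e for e in episodes if current_year - e.year <= 2]
    # Criterion C: at least two seasonal episodes in the last two years, none non-seasonal in that window.
    c = sum(is_seasonal(e) for e in recent) >= 2 and all(is_seasonal(e) for e in recent)
    # Criterion D: seasonal episodes "substantially outnumber" non-seasonal ones (read here as 2x).
    d = len(seasonal) > 2 * len(nonseasonal)
    return c and d

history = [Episode(1999, True, True), Episode(2000, True, True),
           Episode(2001, True, True), Episode(1996, False, False)]
print(meets_seasonal_specifier(history, 2002))  # True under these illustrative assumptions
```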
SAD is classified into two distinct types: fall-onset SAD, also called winter depression, and spring-onset SAD. Winter depression usually begins in late autumn and lasts until the spring or summer. Symptoms of this type of SAD include increased sleep, increased appetite, craving for carbohydrates, weight gain, irritability, and interpersonal conflict. The symptoms of spring-onset depression include insomnia, weight loss, and poor appetite, and typically begin in late spring or early summer and end in early fall (2). Patients with SAD report that their symptoms improve at lower latitudes and worsen if they travel to an area with greater winter cloud cover (4).
What can be done to help SAD patients? The most widely prescribed treatment is the use of a light box, a device that emits fluorescent light of approximately 2,500 to 10,000 lux. Lux is defined as "a unit of illumination intensity that corrects for the photopic spectral sensitivity of the human eye." Bright sunshine can be over 100,000 lux, a brightly lit office is less than 500 lux, and an indoor light at night is only 100 lux (4). This treatment, which is 60% to 90% effective, rarely produces side effects. When side effects do occur, they may include photophobia, headache, fatigue, irritability, hypomania, insomnia, and possible retinal damage. A typical treatment involves shining the light at a downward angle, aimed toward the eyes, for a period of 10 to 90 minutes daily, depending on the severity of the SAD (5). The use of a light box is not effective for all SAD patients; for those cases selective serotonin reuptake inhibitors (SSRIs) are prescribed. These medications are generally most effective when used in combination with light therapy. For practical reasons, some patients choose not to use light therapy because of the large time commitment. For this reason, the treatment of SAD must be considered on an individual basis (4).
The questions raised in this essay point to the necessity of considering differences in the epidemiology of SAD in terms of culture, biology, and environmental influences. It may be that none of these factors is the cause of SAD, but it is clear that these factors are thoroughly related and sometimes difficult to distinguish. For this reason, the etiology of this disease may be found by looking at these factors as a unit, instead of their individual parts.
1)Two decades of research into SAD and its treatments: A Retrospective, an article written by Dr. Rosenthal
2)Seasonal Affective Disorder: Autumn Onset, Winter Gloom, a clinical review of SAD
3)Modern Solutions to Ancient Seasonal Swings, from the November 2000 edition of Psychology Today
4)SAD: Diagnosis and Management, an article written by Raymond W. Lam, with general information about SAD
5)Seasonal Affective Disorders, an article
Yoga: Stress Reduction and General Well-Being Name: Mahjabeen Date: 2002-11-10 12:53:10 Link to this Comment: 3633 |
As the last few weeks of the semester approach, Bryn Mawr finds itself submerged in the stress of finishing syllabuses, writing papers, meeting deadlines, begging for extensions, and taking exams. The keyword here is stress. Stress is perhaps the most utilized word at Bryn Mawr, and as a junior with more than my share of work, I have also managed to accumulate more than my regular share of stress.
Nevertheless, with every problem comes a solution. There are many ways to manage and reduce stress, and one such technique is practicing Yoga.
Yoga is an ancient, Indian art and science that seeks to promote individual health and well-being through physical and mental exercise and deep relaxation. Although known to be at least 5,000 years old, Yoga is not a religion and fits well with any individual's religious or spiritual practice. Anyone of any age, religion, health or life condition can practice Yoga and derive its benefits.
Unique and multifaceted, yoga has been passed on to us by the ancient sages of India; early references to yoga are found in the spiritual texts of the Vedas, the Upanishads, and the Bhagavad-Gita. Patanjali's Yoga Sutras (the Eightfold Path) are still widely studied and practiced today. The Sutras form the basis of much of the modern yoga movement. (1).
The three major cultural branches of Yoga are Hindu Yoga, Buddhist Yoga, and Jaina Yoga. Within each of these great spiritual cultures, Yoga has assumed various forms.
Yoga is the practice of putting the body into different postures while maintaining controlled breathing. It is considered to be a discipline that challenges and calms the body, the mind, and the spirit. Preliminary studies suggest that yoga may be beneficial in the treatment of some chronic conditions such as asthma, anxiety, and stress, among others. (2).
By focusing on the breath entering and leaving your body, you are performing an exercise in concentration. If your mind wanders to other things, your focus on the breath will bring your concentration back. Research confirms that consciously directed breathing can have the following benefits: reduced stress, sound sleep, clear sinuses, smoking cessation, improved sports performances, relief from constipation and headaches, reduced allergy and asthma symptoms, relief from menstrual cramps, lower blood pressure, and emotional calmness. (3).
According to Dean Ornish, in his book, Reversing Heart Disease, "almost all of these (stress reduction) techniques ultimately derive from yoga." Yoga integrates the concepts of stretching, controlled breathing, imagery, meditation, and physical movement.
Yoga is thought of by many as a way of life. It is practiced not only for stress management but also for good physical and mental health and to live in a more meaningful way. Yoga is a system of healing and self-transformation based in wholeness and unity. The word yoga itself means to "yoke" -- to bring together. It aims to integrate the diverse processes with which we understand the world and ourselves. It touches the physical, psychological, spiritual, and mental realms that we inhabit. Yoga recognizes that without integration of these, spiritual freedom and awareness, or what the yogis call "liberation," cannot occur. (4).
Yoga's numerous health benefits, its potential for personal and spiritual transformation, and its accessibility make it a practical choice for anyone seeking physical, psychological, and spiritual integration. Interest in Yoga is surging throughout the world. Among the many different Yoga styles, Hatha yoga is the most familiar to Westerners. It is the path of health, using breathing techniques and posture exercises to improve mental and physical harmony.
During an experiment in biology lab concerning the measurement of heart rate, my partners and I experimented with yoga breathing as a technique to decrease heart rate and bring about relaxation. Our results did show a decrease in heart rate from the norm, and we concluded that if yoga were practiced in a calm setting without a time constraint (neither of which was available to us in a noisy laboratory), there would have been a significant decrease in the practicing individual's heart rate. Moreover, from personal experience I can vouch that Yoga is indeed effective not only for stress reduction but also for an individual's general well-being.
All forms of Yoga teach methods of concentration and contemplation to control the mind, subdue the primitive consciousness, and bring the physical body under control of the will. In Hatha yoga, slow stretching of the muscles is taught, along with breathing in certain rhythmical patterns. The body positions, or asanas, for exercise and meditation can be learned, with some practice, by most people. These positions are thought to clear the mind and create energy and a state of relaxation for the individual. Hatha Yoga is basically the style of Yoga practiced by most Westerners, not only for relaxation and stress reduction but also for the mitigation of pain during certain illnesses. Yoga is also widely recommended for pregnant and nursing women as well as those reaching menopause. (5).
In Britain, there is widespread practice of Yoga in the workplace. Employers who fund exercise programs for their employees are beginning to rule in favor of Yoga instead of the regular gym membership. Research shows an individual who is relaxed and at the peak of his mental and physical health will also perform better in the workplace. (6).
Yoga is so popular in today's world that it is increasingly described as a religion. Is Yoga a religion? Your guess is as good as mine. Since Yoga comes from Hindu, Buddhist, and Jain scriptures, certain aspects of these religions are supposedly integrated into Yoga, such as the ideas of karma and reincarnation and the notion of there being many deities in addition to the one ultimate Reality. However, most Yoga gurus deny that Yoga is a religion and go on to say that Yoga does not teach the idea of reincarnation or even impose karmic beliefs.
Yoga is one of the orthodox philosophies of India. While it is not a religion, it is theistic, that is, it teaches the existence of a Supreme Intelligence or Being. However, to practice the techniques of yoga successfully you do not need to believe in such a being. Because yoga is a spiritual rather than a religious practice, it does not interfere with any religion. In fact, many people find that it enhances their own personal religious beliefs. (7).
How can Yoga enrich the religious or spiritual life of a practicing Christian or Jew? The answer is the same as for any practicing Hindu, Buddhist, or Jaina. Yoga aids all those who seek to practice the art of self-transcendence and self-transformation, regardless of their persuasion, by balancing the nervous system and stilling the mind through its various exercises (from posture to breath control to meditation). Yoga's heritage is comprehensive enough so that anyone can find just the right techniques that will not conflict with his or her personal beliefs. (8).
More than that, yogic postures calm the nervous system and create sufficient space in the psyche to explore breathing control. They put the individual in touch with his or her body's life force and open up spiritual aspects of his or her being.
Yoga should not be looked at as a religion or an exercise; it is more a system of well-integrated techniques and mind frames designed to alleviate stress and bring about universal harmony throughout one's body, thus infusing feel-good vibes in mind and body. In a world where most good things come with side effects, Yoga brings a refreshingly different perspective.
(1)Self Discovery: Mind and Spirit
(2)Stress Reduction Techniques and Therapies
(3)Kripalu Yoga, A Way to Better Health
(4)Self Discovery: Mind and Spirit
(5)How to do Meditation and Yoga to Reduce Stress
"Follow Your Heart": Emotions and "Rational" Thoug Name: Laura Bang Date: 2002-11-10 13:24:08 Link to this Comment: 3634 |
Even with many definitions, from Aristotle's 4th century BC definition -- "the emotions are all those feelings that so change men as to affect their judgments, and that are also attended by pain or pleasure" (7) -- to Merriam-Webster's 20th century AD definition -- "the affective aspect of consciousness; feeling"(2) -- emotion, and what causes emotion, can be rather difficult to define, especially in non-scientific terms. Defining the difference between a "true" smile and a "false" smile is almost impossible to put into words, yet most people readily admit that they can distinguish between the two. (4) So what is it that defines emotion?
Scientists are still trying to understand just what causes us to have emotions, but recent researchers have discovered the center of "emotions" in the brain. "A region at the front of the brain's right hemisphere, the prefrontal cortex, plays a critical role in how the human brain processes emotions," says a 2001 University of Iowa report. (6) Scientists monitored single brain cells (neurons) in the right prefrontal cortex and found
"that these cells responded remarkably rapidly to unpleasant images, which included pictures
of mutilations and scenes of war. Happy or neutral pictures did not cause the same rapid
response from the neurons." (6)
The scientists speculated that the rapid reaction of neurons to "unpleasant images" might be related to the results of other studies, which have shown that the brain is capable of responding very quickly to "potentially dangerous or threatening kinds of stimuli." (6) The study is not conclusive, however, as the experiment was performed on only one patient who had epilepsy, but the experimenters stated that "the tissue being studied was essentially normal, healthy prefrontal cortex." (6)
Another interesting aspect of studies of emotion is the differences between the right and left hemispheres of the brain. Left-handed people, who are right-brain dominant, tend to be more emotionally and artistically oriented, but left-handed people are a minority of the population. Studies have shown that the left hemisphere of the brain is responsible for "logical thinking, analysis, and accuracy," whereas the right hemisphere is responsible for "aesthetics, feeling, and creativity." (8) The right brain also dominates in producing and recognizing facial and vocal expressions. (3) Unfortunately, most schools emphasize "left-brain modes of thinking, while downplaying the right-brain ones," (8) and society in general emphasizes rational thought over emotional thought. (1) "The classic assumption is that emotion wreaks havoc on human rationality..." (1) It has been argued, however, that emotions actually contribute to and aid rational thought, rather than being purely irrational thought. (5)
In one study, a businessman, Elliot, suffered from a brain tumor that partially damaged his brain, specifically his prefrontal cortex, the emotional center of the brain. As a result, Elliot "lost the ability to experience emotion; and without emotion, rationality was lost and decision-making was a dangerous game of chance." (1) Without emotions, he could no longer analyze the experiences he had lived through, which left him with nothing to tell him whether a decision would be good or bad. Elliot's lack of emotional response to anything he experienced led to a lack of understanding of what is good and what is bad. This case seems to emphasize the importance of emotions in "rational" decision-making. Emotions "are fundamental building blocks out of which an intelligent and fulfilling life can be constructed." (1)
Since emotions have been observed to be such an important part of who we are, it is worthwhile to wonder where emotions come from. Why do we have emotions, and how are we able to tell the difference between so many subtly different facial expressions that convey different emotions?
Language is a very important part of what defines humanity and how we interact with and understand each other, and facial expressions play an important role in interpreting what another person is feeling: someone might say that they are okay, but their facial expressions might indicate that they are lying. The importance of facial expressions is easily seen "when we converse on an important subject with any person whose face is concealed." (4)
How do we recognize emotions? When you see someone who is happy, do you pause to thoroughly analyze the person's features before determining that the person is indeed happy? Most people are not aware -- at least not consciously aware -- of performing any sort of in-depth analysis to determine what emotion someone else is feeling, so does that mean that emotions are innate? The discovery of an emotional center in the brain would seem to support this idea.
When Charles Darwin studied emotion in humans and animals in the latter half of the nineteenth century, he hypothesized that emotions are innate, but that humans learned them before they became imbedded in our nature -- that is, after years of practicing and having to learn emotions as part of communication skills, emotions became innate through the process of evolution. (4) Further support of the idea that emotions are innate comes from observing infants and young children, who are definitely capable of conveying emotions, but have not had enough time to actually learn the emotions for themselves.
"I attended to this point in my first-born infant, who could not have learnt anything by associating
with other children, and I was convinced that he understood a smile and received pleasure from
seeing one, answering it by another, at much too early an age to have learnt anything by experience. ...
When five months old, he seemed to understand a compassionate expression and tone of voice. When a
few days over six months old, his nurse pretended to cry, and I saw that his face instantly assumed a
melancholy expression ... [T]his child never [saw] a grown-up person crying, and I should doubt whether
at so early an age he could have reasoned on the subject. Therefore it seems to me that an innate feeling
must have told him that the pretended crying of his nurse expressed grief; and this through the instinct
of sympathy excited grief in him." (4)
This demonstrates the importance of emotional facial expressions in communication: they are a child's first language, the first way a child may communicate with the others around him. (4)
Scientists still have a lot more research to do before we can truly understand our emotions, but it is clear that emotions are an important part of who we are. Emotions are more than just whims or "following your heart"; emotions are a part of how we think "rationally," as seen in the case of Elliot, the man who lost his emotions. Therefore, it is ridiculous that society frowns on those who think too "emotionally" rather than "rationally" -- they are not two separate ways of thinking, but rather they are interconnected, so that we need both in order to make decisions about ourselves and the world around us. Emotions, and the facial expressions that go with them, are the most truthful aspects of humans: "They reveal the thoughts and intentions of others more truly than do words, which may be falsified." (4) Emotions are the intangible and indefinable elements that make us who we are.
"The joy, and gratitude, and ecstasy! They are all indescribable alike."
~ Charles Dickens (9)
References:
1) "Emotion, Rationality, and Human Potential," John T. Cacioppo (University of Chicago); from Fathom: the source for online learning
2) Merriam-Webster OnLine Dictionary: "emotion"
3) "Emotion and the Human Brain" by Leslie Brothers, MIT Encyclopedia of Cognitive Science
4) "The Expression of the Emotions in Man and Animals" by Charles Darwin (1872); Courtesy of "The Human Nature Review" edited by Ian Pitchford and Robert M. Young
5) "Emotions" by Keith Oatley, MIT Encyclopedia of Cognitive Science
6) "UI study investigates human emotion processing at the level of individual brain cells" (Week of January 8, 2001), University of Iowa Health Care News
7) Aristotle's Rhetoric, Book II, Chapter 1, Translated in 1954 by W. Rhys Roberts; written by Aristotle in 350 B.C.
8) "Right Brain vs. Left Brain"
9) Dickens, Charles. "A Christmas Carol", from The Christmas Books. First published in 1843.
How Do We Know What We Know? Tacit Knowledge Defin Name: Diana La F Date: 2002-11-10 15:07:00 Link to this Comment: 3635 |
When I asked a certain professor for help in defining tacit knowledge, he stated that it is "the knowledge that we have without knowing we know it" and that "once we know we know it, it becomes harder to know how we know what we know." WHAT?!?! Needless to say, this confused me to no end and only created more questions. The more I researched, the fuzzier the idea of tacit knowledge became to me.
Tacit knowledge is the knowledge that people have that cannot be readily or easily written down, usually because it is based in skills (1). It is silent knowledge that emerges only when a person is doing something that requires such knowledge or when they are reminded of it (2). Whatever governs this knowledge is not conscious. This covers a surprising amount of the knowledge that most people have, such as attention, recognition, retrieval of information, perception, and motor control. These are known skills, but they are not easily explained (3). This is not a knowledge that can be explained through a system or an outline in a book (4).
The person credited with the theory of tacit knowledge is Michael Polanyi (1891-1976). Polanyi was a chemist, born in Budapest, who became a philosopher later in his life (5). Polanyi thought that humans are always "knowing" and are constantly shifting between tacit knowledge and focal knowledge, that this in itself is a tacit skill and is used to blend new information with old so that we can better understand it. Put more simply, people categorize the world in order to make sense of it. This is something that everyone does, whether they realize it or not, and it cannot be replaced by another method. Taken in this context, it may be better to define knowledge itself as a method of knowing. Each person will have the reality of their world shaped by their experience. In this context, all knowledge is rooted in the tacit (6).
This is all well and good, but what does it mean? The problem with understanding tacit knowledge is that it is nearly impossible to grasp it and think on it directly (7). In order to help someone understand tacit knowledge, all one can do is give them opportunities to teach it to themselves (8). This is most easily done through examples. This paper is made up of small characters based on the Phoenician alphabet. While reading this, were you even aware of the characters? Probably not. You probably skimmed over the words, heedless of how they were composed, understanding only what the grouping of letters meant. But how do you know the meaning of the words? You didn't have to think about them; you just saw the words and somehow understood what they meant. This is a tacit knowledge (6). In the same way, language can be considered a tacit knowledge. How do you know whatever language you speak most often? Do you put any conscious thought into how you say something, or do you just know how to say what you want to convey (9)? Here's another example. What's wrong with the following sentence:
The girl throws ball the.
There is a grammatical rule involved with the exact reason why the above sentence is incorrect. Were you even slightly conscious that you had learned this rule (8)? Tacit knowledge goes beyond rules and meanings that we have learned somewhere along the way but have pushed back beyond our consciousness. When you see your friend you can recognize their face. Yet how do you do this? How do you recognize and differentiate between two people? Many people have brown eyes, dark hair, short hair, etc. How can you tell the difference, and in a split second at that (6)?
Tacit knowledge is not easily understood. The more I researched, the harder it became for me to explain, even to myself, what tacit knowledge was. I later figured out the reason for this: I understood tacit knowledge tacitly. Without the aid of examples of what tacit knowledge was, I would still be utterly confused as to the meaning of the phrase. Once it is understood, the explaining seems to come easily. Explaining it in a manner that helps others understand it better is almost impossible, though. They must learn it through experiences they themselves have had if they are to understand it at all.
7)Tacit knowledge - riding a bike - John Seely Brown
8)Tacit knowledge and implicit learning
The Essential Nutrients: Are You Getting Them? Name: Anastasia Date: 2002-11-10 21:26:42 Link to this Comment: 3639 |
After returning home from the hospital, he wondered how it happened. A heart attack at thirty-six was a surprise to him as well as to his entire family. Luckily the signs and symptoms were caught early enough to save his life, but it was discovered that he had a disorder known as hypertensive heart disease. Unfortunately, however, doctors were unable to explain why he had such high blood pressure. At any given moment a second myocardial infarction could occur, and could in fact be fatal if not caught early enough. His heart condition led to depression and thoughts of suicide. He had lived a regular life, eating healthy foods, exercising regularly, and working in a profession that he enjoyed. This disease seemed to just creep up on him and change his life forever. As he researched the disease more and more and sought medical assistance, he found that it all could have been prevented if only he had added a few small capsules to his diet. Doctors told him that vitamins and dietary supplements could have saved him from becoming part of another statistic.
Vitamins are organic molecules required in the diet in amounts that are small compared with the relatively large quantities of essential amino acids and fatty acids animals need. Tiny amounts of vitamins, ranging from .01 to 100 mg per day, may be enough, depending on the vitamin. Although the requirements for vitamins seem modest, these molecules are essential in a nutritionally adequate diet, and deficiencies can cause severe problems. In fact, the first vitamin to be identified, thiamine, was discovered as a result of the search for the cause of a disease called beriberi, whose symptoms include loss of appetite, fatigue, and nervous disorders. Beriberi came to attention when it struck soldiers and prisoners in the Dutch East Indies during the nineteenth century. The dietary staple for these men was polished rice, which had the hulls removed in order to increase storage life. The men, and even the chickens, that ate this diet developed the disease. It was found that supplementing their diets with unpolished rice could prevent beriberi altogether. Scientists later isolated the active ingredient of rice hulls. Since it belongs to the chemical family known as amines, the compound was named "vitamine," a vital amine.
Along with the thirteen vitamins that are essential to human beings, many scientists now believe there are dietary supplements that are vital for the success of life, such as Co-enzyme Q10. Co-enzyme Q10 is believed by many to relieve ailments and promote good health as well as a feeling of well-being (1). It is found in every cell in the body and acts as a catalyst in the production of ATP, which is used as energy for cellular function. If levels of Co-enzyme Q10 drop within the body, then that person's energy levels will drop as well. Taken as a dietary supplement, Co-enzyme Q10 helps guard against possible deficiencies. It helps fight against aging by replenishing the supply of the enzyme as the liver's ability to synthesize it decreases.
Co-enzyme Q10 improves cardiac function by providing energy to the heart. It contains properties that are beneficial in preventing cellular damage during a heart attack (9). The enzyme has also been used to treat other cardiac disorders such as angina, hypertension and congestive heart failure. Incidentally, it is also helpful during chemotherapy because it provides additional enzymes while the body's supplies are being destroyed by the chemotherapeutic agents (1). In addition to these benefits, it has also been noted that Co-enzyme Q10 is effective in causing a regression of gum disease, boosts the immune system, and can greatly benefit the obese (2).
Lastly, it has been noted that Co-enzyme Q10 is the most vital fuel for mitochondria. There are roughly 60,000,000,000 cells in the body, and within those cells there are some 100,000,000,000,000 mitochondria, microscopic organelles thought to be descended from ancient bacteria. On this view, premature disease and sickness are attributed to poor mitochondrial health or a low Q10 supply to the mitochondria. They can potentially live one hundred years if they are supplied with proper nutrients such as Q10, hydrogen, phosphates, oxygen, vitamins and minerals (3). Inefficient energy production within cells can cause approximately ninety percent of all mutative damage to cell infrastructures. Q10, if taken daily, will not stop this natural destruction completely, but will help to slow the process.
Along with this revolutionary supplement, researchers have said that introducing fish oils into the daily diet can also be beneficial. There are good fats and there are bad fats in the foods that humans consume. Artificially produced trans-fatty acids are bad in any amount, and saturated fats from animal products should be kept to a minimum (4). The beneficial fats, or rather oils, since they are liquid at room temperature, are those that contain the essential fatty acids, which are polyunsaturated. They are grouped into two families, the omega-6 EFAs and the omega-3 EFAs. Minor differences in their molecular structure make the two families act very differently in the body. While the metabolic products of omega-6 acids promote inflammation, blood clotting, and tumor growth, the omega-3 acids act in entirely the opposite way. Although both the omega-6 acids and omega-3 acids are needed, it is becoming increasingly clear that an excess of omega-6 fatty acids can have dire consequences (4). Many scientists believe that a major reason for the high incidence of heart disease, hypertension, diabetes, obesity, premature aging, and some forms of cancer is the imbalance between the intake of omega-6 and omega-3 fatty acids. In the past, diets included a ratio of omega-6 to omega-3 of about 1:1. An enormous change in dietary habits over the last few centuries has shifted this ratio to something closer to 20:1, which is a huge problem (7).
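To make the ratio concrete, here is a minimal sketch, in Python, of how such a dietary ratio could be computed from a day's intake; the gram figures are hypothetical illustrations, not values from the sources cited above.

def omega_ratio(omega6_g, omega3_g):
    """Return the omega-6 : omega-3 ratio for one day's intake, in grams."""
    return omega6_g / omega3_g

# Hypothetical ancestral-style diet: roughly equal intakes of the two families.
print(omega_ratio(omega6_g=3.0, omega3_g=3.0))    # 1.0  -> about 1:1

# Hypothetical modern diet: heavy in seed-oil omega-6, light in fish omega-3.
print(omega_ratio(omega6_g=20.0, omega3_g=1.0))   # 20.0 -> about 20:1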
Several studies have associated low levels of omega-3 fatty acids with depression. Other studies have shown that countries with high levels of fish consumption have fewer cases of depression. Researchers at Harvard Medical School have even gone as far as to use fish oil supplementation to treat bipolar disorder, and British researchers report encouraging results in the treatment of schizophrenia (4). It has even been noted that fish oils prevent and may help to ameliorate or reverse atherosclerosis, angina, heart attack, congestive heart failure, arrhythmias, stroke, and peripheral vascular disease. Fish oils help maintain the elasticity of artery walls, prevent blood clotting, reduce blood pressure and stabilize heart rhythm (5). Supplementing with fish oils has been found to be safe even for periods as long as seven years, and no significant adverse effects have been reported in hundreds of clinical trials using as much as 18 grams a day of fish oils (6).
Now there is also considerable evidence that the consumption of fish oils can delay or reduce tumor development in breast cancer. Studies have shown that a high blood level of omega-3 fatty acids combined with a low level of omega-6 acids reduces the risk of developing breast cancer (8). Daily supplementation of as little as 2.5 grams of fish oils has been found effective in preventing any progression in benign polyps or even colon cancer. Greek researchers report that fish oil supplementation improves survival and quality of life in terminally ill cancer patients.
Heart disease and cancers are killers that affect many human beings. They could possibly be prevented by taking supplements such as Co-enzyme Q10 or adding more omega-3 fatty acids to a daily diet. If started early enough, supplementation could change a person's life. It seems like a small price to pay to save a life.
1)Co-enzyme Q10, Information provided by Alaron Products Ltd
2)Co-enzyme Q10, Information provided by Symmetry
3)Q10 Stable Co Enzyme Australia, Co-Enzyme.com
4)Fish Oils: The Essential Nutrients, Hans R. Larsen
5) Simopoulos, Artemis. Omega-3 fatty acids in health and disease and in growth and development. American Journal of Clinical Nutrition, Vol. 54, 1991, pp. 438-63
6) Pepping, Joseph. Omega-3 essential fatty acids. American Journal of Health-System Pharmacy, Vol. 56, April 15, 1999, pp. 719-24
7) Connor, William E. Importance of n-3 fatty acids in health and disease. American Journal of Clinical Nutrition, Vol. 71 (suppl), January 2000, pp. 171S-75S
The Mystery of Morality: Can Biology Help? Name: Anne Sulli Date: 2002-11-10 22:14:55 Link to this Comment: 3640 |
What is morality? This ambiguous yet powerful concept has puzzled mankind for centuries, never lending itself to a concise and solitary definition. The concept of morality assumes different meaning and value for various individuals, at times becoming synonymous with religion, sympathy, virtue, or other equally ambiguous terms. In recent years, scientists have acquired a unique voice in the ongoing debate of human morality. Biologists often turn to the past, reaching for the origins of morality, to elucidate its mystery. Their arguments incite heated debate, particularly with religious thinkers who link morality with God and salvation. Indeed, a scientific explanation for morality not only threatens the authority of religion; it also forces humans to reevaluate their self-image as a species. Yet it must be asked how, and to what extent, biology helps us understand what morality is and how it has evolved. Can one fully explain man's inclination to moral sentiment with science? The sheer duration and intensity of the debate regarding morality suggests that there are no clear answers. The following exploration will show how a biological vantage point may be useful to understanding morality. Such a lens, however, is limited and unable to fully expose this mystery.
Central to the morality debate is the disagreement over whether ethical behavior is man's invention or an intrinsic human quality. The latter belief is consistent with the idea of a law-giving God and the notion of natural rights (1). A biological exploration may not allay this dispute, but it can, by accounting for morality's origins, encourage a greater understanding of what morality is. From a biological perspective, moral aptitude is like any other mental trait: the product of competitive natural selection (1). Is this to say that moral beings were more likely to survive and therefore "chosen" by nature to thrive? Such a statement is difficult to prove. Rather, it is more likely that moral behavior is merely a part of a larger system 'tested' by time and nature's selection process. Morality, therefore, may not be an adaptive feature itself, but one associated with another trait (or traits) preferred by natural selection (3). Such a quality is said to be a pleiotropic trait (3). The necessary trait for the development of morality is a higher intelligence (3). Moral aptitude is a product of intelligence just like any other intellectual ability (literature, art, and technology, for example), faculties which may not be adaptive themselves (3).
Indeed, an increased human intelligence provides the necessary conditions for moral conduct. An essential and primary ingredient for moral judgment, for instance, is the ability to predict the consequences of one's actions (3). When isolated, certain actions cannot be deemed moral or immoral behavior. The act of pulling a trigger (not an inherently unethical action) is the classic example (3). Only when one is able to anticipate the outcomes of his or her actions can such behavior be declared moral or not (3). This ability is perhaps born alongside the evolution of the erect posture in human beings. As man's posture evolved, his limbs changed from simple appendages used for movement to organs of operation (3). Man, for example, can now create tools to aid his existence. The ability to perceive tools as a future aid, however, must precede the act of tool-making. Along with the physical ability to create tools sprouts the intellectual power to anticipate the future, to relate means to ends (1). An increased intelligence, therefore, indirectly augmented man's capacity for moral judgment.
If morality is to be understood as a product of higher intelligence, the question now turns to the motives which inspire ethical behavior. For if moral judgment is an intellectual process rather than an innate tendency, our motives for behaving ethically cannot be purely altruistic. Intelligence, for example, provides the ability to maneuver the conflict between cooperation and defection (1). The most classic example of this situation is the noted Prisoner's Dilemma, which seems to show that even criminals act under honor and moral principle (4). When two criminals are arrested together, neither will "rat" on the other; they will, instead, accept punishment together (4). This seemingly altruistic behavior is actually the consequence of an intellectual process weighing the benefits and drawbacks of both possibilities, cooperation and defection. The prisoners decide that because both members are capable of "ratting" on the other with hopes of securing immunity, it is safer and mutually beneficial to preserve their alliance (4).
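The weighing of benefits and drawbacks described above can be made concrete with a small payoff table. The sketch below, in Python, uses illustrative sentence lengths that are my own assumptions rather than figures from the cited source; it simply shows why mutual silence is the jointly cheapest outcome the essay describes.

# A minimal sketch of the Prisoner's Dilemma scenario described above.
# Payoffs are years in prison (lower is better); the numbers are
# illustrative assumptions, not values from the cited source.
payoffs = {
    ("silent", "silent"): (1, 1),   # both cooperate: light sentences
    ("silent", "rat"):    (10, 0),  # the one who talks goes free
    ("rat", "silent"):    (0, 10),
    ("rat", "rat"):       (5, 5),   # both defect: moderate sentences
}

def joint_cost(choice_a, choice_b):
    """Total prison years served by the pair for a given pair of choices."""
    years_a, years_b = payoffs[(choice_a, choice_b)]
    return years_a + years_b

for pair, years in payoffs.items():
    print(pair, years, "joint cost:", joint_cost(*pair))
# Mutual silence gives the lowest joint cost (2 years versus 10),
# which is the mutually beneficial alliance the essay describes.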
Such a scenario can be imagined in other situations, for intellectual activity is always at work. From this perspective, morality is a mental process that measures the potential benefits of one's actions. Individual profit, then, is the primary consideration, although it may often be disguised as an ethical code (1). This tendency is present even in animals. Vampire bats, for example, drink blood at night for sustenance and often feed those bats that could not acquire food for themselves (4). The obvious payoff is that the "altruistic" bat may in turn be assisted in the future. Cooperation, therefore, is mutually beneficial and ensures the survival of the species (4). Personal interest as the driving force for ethical behavior may be found even in religious morality (5). For why might a person live under the moral guidelines of a particular religion? For many, it may be to secure a personal reward in the afterlife (5).
When an intellectual or mental process is intertwined with moral judgment, motives for ethical behavior can clearly be viewed as calculated and selfish. Morality, however, cannot be reduced to the simple measuring of gains and losses. It is undeniable that ethical behavior is often enacted even when a foreseeable advantage to such conduct is absent. The compassionate treatment of others, particularly when there is no perceivable reward, is indeed a puzzling issue, one which religious thinkers often attribute to the existence of a loving God (4). Yet putting oneself at risk for the sake of another is not unique to human beings; animals also exhibit such behavior (3). When a flock of zebras is attacked, for example, they will each scramble to protect the young within the group, endangering their own lives (3). Humans also react with such instincts, suggesting that an intellectual process is not always involved in moral judgment. The scientific argument, therefore, that morality is the indirect result of a higher intelligence may not provide a complete explanation. Only some instances of moral behavior can be attributed to this theory.
A biological explanation of morality is insufficient in other areas as well. As previously noted, evolution provides a heightened intelligence which sets the foundation for morality (4). An important distinction, however, must be identified: that between a human's capacity for moral judgment and the ethical norms accepted by society (3). While the former is indeed influenced by biology, the latter is most likely a product of social and cultural elements (3). Although it appears that natural selection may favor certain moral codes (the ban on incest and the restriction of divorce, for instance, are moral codes that contribute to successful reproduction), it does not, in fact, favor all ethical norms (3). The models discussed earlier, which involve risking one's own life for the sake of another, are clearly not in keeping with natural selection. Moreover, biology cannot justify such codes because our moral standards are both constantly changing and widely varied amongst different cultures (3). Finally, the same heightened intelligence which makes ethical behavior possible would also grant humans the power to accept or reject moral norms (4). Biology clearly accounts only for the development of man's capacity for moral behavior, not the moral codes he has come to accept. Francisco J. Ayala of the University of California likens the distinction to a human's biological capacity for language versus his use of a particular language (3). While biology provides humans with the capability to use language, natural selection does not prefer any specific language over another (3).
What, then, can be concluded about morality? Each voice in this debate (What is morality? Is it an innate quality or a social construction? From where does it originate?) provides unique and interesting insight. Yet each argument is limited, providing only a fragment of understanding of the larger puzzle. The biological perspective is one such voice: it demystifies some of the enigma yet will never suffice as a solitary explanation. Biology shows that one's capacity for ethical behavior is a product of evolution, but its explanation cannot extend much further. To truly gain a heightened understanding of this ambiguous and highly charged concept, it is most useful to consider not only biological factors, but social, cultural, and psychological influences as well. An interdisciplinary exploration, a union rather than a separation of various fields, is indispensable.
1)The Biological Basis of Morality
2)Biology Intersects Religion and Morality
3)The Difference of Being Human
5)The Basis of Morality: Scientific Vs. Religious Explanations
Cook Your Meat, Please! Name: Joanna Fer Date: 2002-11-10 22:14:57 Link to this Comment: 3641 |
My Two-Year Old is a Punk Rocker?? Name: Margaret H Date: 2002-11-10 23:32:10 Link to this Comment: 3643 |
Although slightly uncommon, it is still possible for toddlers to exhibit actions similar to what one may find at a rock concert: head banging. For a multitude of different reasons, head banging can develop into a habit for young children and can even last for a few years. This striking behavior usually does not result in any permanent injury or damage to the child. Rarely does a child banging her head against the wall, crib, pillow, or other object signify a serious condition or disease. However, there are a few reasons behind head banging other than the child's future career in the music business.
Up to 20% of healthy children can be found head banging (1). The behavior begins sometime within the first year and can last for a few years afterwards. However, most sources recommend seeing a doctor if the behavior continues past the age of 4. Head banging can occur at several different times: at sleepy times, during tantrums, and even throughout the night (1). Children can also just randomly start, without any apparent provocation or reason. Because children do not have to be in a certain location for the head banging to start, heads can be hit against any type of material. Walls, cribs, pillows, and floors are the most common surfaces. At times, children can wake up with a headache, develop nasal problems, or have an ear infection as a result of the repeated banging (4). Other consequences include a temporary bald spot in the location of the banging (2). Toddlers' heads have adapted for the normal bumps and bruises associated with learning to walk and climb, thereby preventing more serious head trauma (1).
Although head banging usually is not considered very serious or worthy of medical attention, it does have a clinical classification as a Movement Disorder or Rhythmic Disorder. Movements classified under this disorder seem to occur especially during the transition between wakefulness and sleep, as well as between the different stages of sleep (4). The disorder includes other behaviors such as head rocking, body rocking, folding, and shuttling (4). Experts speculate that the behavior stems from a need for rhythmic stimulation: to help fall asleep, during a tantrum, or when under-stimulated or even over-stimulated. Because children are constantly rocked in utero, once outside the womb they still look for similar rhythmic movements (1). Children's propensity toward jumping rope, swinging, bumper cars, and dancing can be attributed to this theory (1). Other explanations behind the head banging are the rhythmic sensation it produces, the visual movement it can provide, release of inner tensions, or boredom and frustration when the child cannot sleep (2). Ear infections or teething can be added causes for the excessive movement (1). Most experts encourage parents to ignore the behavior, as it will subside in a few years (2).
In a very few cases, ignoring Rhythmic Disorder can be a mistake. The excessive head banging and body rocking can be an early sign of Autism, a neurological disorder (5). Children who are thought to have Autism, however, exhibit other symptoms that typical head bangers do not. Behavior such as rocking, nail biting, self-biting, hitting one's own body, handshaking, or waving, in addition to the head banging, can be a sign of Autism (5). Autism inhibits a person's "ability to communicate, form relationships with others, and respond appropriately to the environment" (6). Because symptoms of Autism are relatively easy to recognize and usually include more than one behavioral symptom, most doctors and parents are able to quickly decipher whether their child has a serious condition.
Medical experts also rarely equate Rhythmic Disorder with psychological disorders (4). Since there is little threat of the behavior signifying something significant, ignoring it really is the best thing to do. Because ignoring the head banging can sometimes be difficult, a plethora of suggestions exist on the web. One site indicates that music therapy, hypnotism, motion-sickness medications, tranquilizers, or stimulants would help both the child and the parent (4). More conservative approaches include placing a metronome by the child's bed (3). Hopefully, the child will recognize a strong beat and will not feel the need to duplicate it. Parents can move the bed away from the wall, add cushioning to the crib, or carpet the floor in order to decrease the noise. One drug, Naltrexone, has had some success in treating children with Rhythmic Disorder, although only preliminary research has been completed at this time (5).
Little is known about the causes behind Rhythmic Disorder. A few studies have indicated that the head banging stimulates the vestibular system in the inner ear, which controls balance (3). Another, unrelated study shows that children who exhibit this kind of behavior were more advanced than their peers (1). The few reports and studies of Rhythmic Disorder published often illustrate that not much is known about this behavior. As with most areas of health, this disorder requires further study. However, the available information shows that the disorder does not indicate a serious problem. In the case of Autism, other symptoms persist and doctors are able to diagnose the condition with ease. So maybe your head-banging child may grow up to worship Nine Inch Nails, but the possibility of her continuing a healthy maturation process is even more likely.
1) Dr. Greene.com : Caring for the Next Future , Featured Article, "Head Banging."
2) PlanetPsych.com A World of Information , "Head Banging by Children" by James Windell.
3) American Academy of Pediatrics website , "Guide to Your Child's Symptoms: Rocking/Head Banging."
4) Kid's Help for Parents website , Sleep Problems
5) MEDLINE Plus Health Information , Stereotypic Movement Disorder.
6) National Institute of Mental Health website , "What is Autism?".
Magic Seeds Name: Erin Myers Date: 2002-11-11 00:02:36 Link to this Comment: 3644 |
INTRODUCTION
Just before its end, the Clinton Administration implemented rules for federally funded human stem cell research allowing embryonic cell research on otherwise discarded sources. Upon inauguration, the Bush Administration immediately put a hold on federally funded human stem cell research until a compromise was reached in August 2001. The controversy over human stem cell research springs from the origin of embryonic stem cells.
Within one day after fertilization, an embryo, which until this point is simply a fertilized egg, begins to cleave, or divide from one cell to two, from two cells to four, and so on. When the embryo reaches 34-64 cells it is considered a blastocyst. It is four or five day old embryos, blastocysts of about 150 cells, which are implanted into a uterus during in vitro implantation.4 Many embryos are made and kept frozen as backups in case implantation is unsuccessful. When a family decides they no longer need the embryos, they can opt to dispose of them, put them up for adoption, or donate them for research.
Embryonic stem cells come from blastocysts made in a laboratory for in vitro fertilization that are donated for research with the informed consent of the donor. Blastocysts have three components: the trophectoderm, an outer layer of cells that form a sphere; the blastocoel, a fluid filled cavity; and the inner cell mass, a cluster of cells on the interior that may ultimately grow into a fetus.1 It is from the inner cell mass that stem cells are harvested. The inner cell mass is extracted from the blastocyst and cultured on a Petri dish. Here the controversy arises. The embryo is no longer viable without the inner cell mass. For those who consider a blastocyst to be a living human being, this extraction is tantamount to the death of a human. This issue has given rise to a new platform for the anti-abortion vehicle.
The controversy and its resulting restrictions are hindering the exploration of what may be the future of medicine. There is evidence to suspect that stem cells may be used to treat -- even cure -- AIDS, Parkinson's disease, Multiple Sclerosis, heart disease, cancers, diabetes, Alzheimer's, genetic diseases, and a host of other diseases.
MAGIC SEEDS
All stem cells have three promising characteristics: they are capable of proliferation, they are unspecialized, and they can be differentiated. Proliferation is the ability of certain cells to replicate themselves repeatedly, indefinitely in some cases. Within six months, thirty stem cells can divide into millions of cells.1 Stem cells are unspecialized; they are not committed to becoming a certain type of cell. They can be differentiated under certain protocols, tissue recipes scientists have identified,1 to become any of the 220 kinds of cells in the human body. Herein lie the great possibilities. Scientists may be able to produce cells that can replace damaged or sick cells in a patient with an injury or degenerative disease.6
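A rough back-of-the-envelope sketch, in Python, of the proliferation claim above; the one-doubling-per-week rate is an assumption used only for illustration, not a figure from the cited primer.

import math

start_cells = 30
target = 1_000_000
# Number of doublings needed for 30 cells to exceed one million cells.
doublings = math.ceil(math.log2(target / start_cells))
print(doublings)                      # 16 doublings
print(start_cells * 2 ** doublings)   # 1,966,080 cells
# At an assumed rate of one doubling per week, 16 doublings fit
# comfortably within the six months mentioned above.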
A large portion of the political debate is devoted to alternatives to embryonic stem cells. Somatic stem cells, also referred to as "adult stem cells," are morally acceptable to the embryonic stem cell research opposition. Somatic stem cells come from select sources in fetal or adult human bodies. They exist in very small quantities in the umbilical cord of a fetus, bone marrow, the brain, peripheral blood, blood vessels, skeletal muscle, skin, and the liver. The number of somatic stem cells decreases with maturity. They exist to help repair their source, should injury occur. The majority of somatic stem cells are less transdifferentiable than embryonic stem cells. For the most part, they can only be coaxed into cells that are associated with their source. For instance, hematopoietic stem cells, harvested from blood vessels and peripheral blood, give rise to all the types of blood cells; and bone marrow stromal cells give rise to bone cells, cartilage cells, fat cells, and other kinds of connective tissue cells.1 Another disadvantage of somatic stem cell research is the difficulty of producing large quantities of somatic stem cells. Scientists also fear that somatic stem cells may lose their potency over time.12
A new study identifies a somatic stem cell that can "differentiate into pretty much everything that an embryonic stem cell can differentiate into." Catherine Verfaillie of the University of Minnesota found these cells in the bone marrow of adults and dubbed them multipotent adult progenitor cells (MAPCs). The study claims that MAPCs have the same potential as embryonic stem cells. These cells seem to grow indefinitely in culture, as do embryonic stem cells. Encouragingly, unlike embryonic stem cells, MAPCs do not seem to form cancerous masses when injected into adults. Skeptics think the scientists' stem cell selection process creates MAPCs rather than isolating cells that exist on their own; they think the scientists have simply found a way to produce cells that can behave this way.8
Stem cell therapy testing in rodents is yielding exciting results. Mouse adult stem cells injected into the muscle of a damaged mouse heart have helped regenerate the heart muscle. In another experiment, human adult bone marrow stem cells injected into the bloodstream of a rat similarly induced new blood vessel formation in the damaged heart muscle and proliferation of existing cells. Petri dish experiments also have promising applications. Parkinson's disease, a neurodegenerative disorder that affects more than 2% of the population over 65 years of age, is caused by a progressive degeneration and loss of dopamine-producing neurons. Scientists in several laboratories have been successful in inducing embryonic stem cells to differentiate into cells with many of the functions of the dopamine neurons needed to relieve the symptoms of Parkinson's disease.1
There are many more implications for stem cell research. In addition to cell therapy, human stem cells may also be used to test drugs. Animal cancer cell lines are already used to screen potential anti-tumor drugs.1 The possibilities are endless.
CONCLUSION
There are over 200,000 embryos left over from in vitro fertilization3 attempts but only about 6 existing embryonic stem cell lines2 available for federally funded research under Bush Administration regulations. United States scientists pioneered this field of research. Federal funding could speed the development of therapies and keep the
The anti-abortion opposition believes that life begins at fertilization and that life should not be compromised even if it is to save the lives of many. The conservative Family Research Council goes so far as to say that every frozen embryo deserves "an opportunity to be born."12 About 15% of pregnancies end in miscarriage, most of them in the embryo stage before the woman even knows she is pregnant.7 If every frozen embryo were given the opportunity to be born, and the 85% that statistically survive to become fetuses were born, there would be 170,000 more babies in the world. This is dramatically more than 15 times the number of babies born in the
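As a quick check, the arithmetic behind that 170,000 figure can be written out in a couple of lines of Python, using only the numbers already cited in the paragraph above.

frozen_embryos = 200_000       # embryos left over from in vitro fertilization, per the essay
survival_rate = 0.85           # share that statistically survive past the embryo stage
print(int(frozen_embryos * survival_rate))   # 170,000 additional babies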
The potential for stem cell therapy is too great to deny federal funding to new embryonic stem cell research. The current strict regulation slows progress and inhibits vital research, restricting federally funded research to six embryonic stem cell sources. There should be supervision to ensure research does not lead to utilitarian purposes, but the number of embryos used should not be limited. Scientists would not need to harvest stem cells indefinitely. At some point, they would have a wide enough variety and would be able to stop.9 Extracting "magic seeds" from a cluster of cells could end disease, but we will never know if we are not allowed to try.
WWW Sources
1) Stem Cells: A Primer, from National Institutes of Health. For more information visit the International Journal of Cell Differentiation and Proliferation.
America's Secret Disease Name: Brenda Zer Date: 2002-11-11 00:14:10 Link to this Comment: 3647 |
America has a serious health problem - one that usually escapes the notice of most people. Over 15 million Americans are affected by asthma (10 million adults and 5 million children). (3) This chronic respiratory disease can even be life threatening, if not kept in careful check. So, what exactly is asthma and why do so few people seem to know about it?
While asthma is chronic (meaning that it is always with you), its symptoms are not always detectable. Sometimes these symptoms are dormant, waiting for the right irritant to trigger them. Symptoms of asthma include shortness of breath, wheezing, coughing and a "tight" feeling in the chest region. (5) Coughing can be either a dry cough or a wet cough (phlegm comes up the airways during rough coughing). The three main components of asthma are inflammation, muscular contraction and increased mucus production. While these symptoms are usually under control, if they should be aggravated, the person may be in danger of suffocating. (1)
During a severe bout of asthma (commonly known as an asthma attack, or episode) the lining of a person's bronchioles becomes inflamed. The inflammation causes a build-up of fluid and cell clots; this swells the tissue and contributes to the blockage. The muscles around the bronchioles involuntarily constrict, causing a further decrease in bronchiole diameter. In addition to all this, an increase in the production of mucus floods the lungs. (1) If the person cannot clear their airways, they can die either of asphyxiation or of carbon dioxide poisoning (sometimes fresh O2 is allowed into the system, but CO2 cannot escape, causing a massive build-up of carbon dioxide in the system). Persons with asthma have a decreased lung capacity, which makes everyday breathing hard (my father, for instance, has only 50% of his lung capacity; before my maternal grandfather died, he had only 27% capacity [he also had emphysema]).
In 1999, more than 4,500 people died of asthma or asthma-related conditions. Between 400,000 and 500,000 people are hospitalized each year because of asthma. (3) This makes asthma the third-ranked cause of hospitalization for children under the age of fifteen. (3) Approximately one in every thirteen school-age children has asthma. (4) Children are most susceptible to asthma because they breathe more air, eat more food and drink more liquid in proportion to their body size than an adult does. As their bodies are still developing, this leaves them more vulnerable to environmental exposures and diseases than adults. (1)
Although children are more likely to develop asthma, there are many different types that adults can have. The form of asthma that is least well recognized for what it is, is exercise-induced asthma. (1) While most people think that it is normal to breathe heavily after exercising, it is not normal to wheeze or cough either during or immediately after exercising. So, while some people may not exhibit signs of asthma every day, it may still be present. Jogging in cool or cold weather sometimes causes bronchospasms. In general, cool or cold air is bad for persons with asthma, while warm, moist air is beneficial and can help relieve symptoms. Swimming is often recommended for people with asthma who still wish to exercise, as the slow, rhythmic breathing is good for the respiratory system. (1)
Other than air temperature, there are many things that can trigger asthma. To start with, many asthmatics are also allergic to many substances, and many asthma attacks are caused by allergic reactions. That is why the two most common triggers for asthma are allergies and irritants. Outdoor environmental irritants are such things as cold air, cigarette smoke, commercial chemicals, perfume, and paint or gasoline fumes. Indoor environmental irritants can include second-hand smoke, cockroaches, dust mites, molds and pets with fur or feathers. As Americans spend 90% of their time indoors, it is important to keep residences and work places as clean as possible. (4)
For the outdoors, studies have shown that air pollution is causing the number of people worldwide with asthma to rise significantly. The groups of people who are affected the most are inner-city residents and persons living in highly industrial areas. Air pollution is a prime factor in asthma-related illnesses and deaths.
While taking medications (like albuterol sulfate solution, cromolyn sodium solution, albuterol inhalers, or various other bronchodilators or pills) can help reduce asthmatic symptoms, asthma has no known cure (although gene therapy is being considered as a possible treatment). (5) People living in low-income areas often cannot afford the prescription drugs needed to fight asthma, so their rate of respiratory problems is the highest of any demographic group.
More than half of all asthma patients spend 18% or more of their total family income on asthma-related expenses. Over $4 billion per year is spent on the hospitalization of asthma patients. (1) Many of my asthmatic friends have been hospitalized more than once and my older brother was hospitalized for his asthma when he was a child.
Although it is not possible to cure asthma at this time, with habitual use of medication it is possible to send the asthma into remission. This is what happens with many children. They appear to "grow out" of their asthma, but in reality, it is only lying dormant inside them.
Many people are affected by asthma and do not even know it. Although it is not a prolific killer, if untreated, asthma can kill. Most people that live in cities should pay attention to the air quality reports for their neighborhood. Persons living in a high pollution area are more susceptible than others are. (4) Certified doctors or allergists can run a simple test to determine your lung capacity.
As asthma affects a large portion of our population, the reaction that asthmatic people receive is surprising. They are often ridiculed for not being able to "keep up" with others (this happens especially to children during recess or play-time). While this is a common respiratory disease, some people still find it hard not to look down upon people who use inhalers or nebulizers as being "weak". This creates an entire series of social groupings. Who would have thought that such a common, yet unknown disease could cause so much physical and mental harm?
1)Sniffles & Sneezes: Allergy & Asthma care and prevention, a site dedicated to treating allergies and asthma.
2)Allergy Asthma Technology, Ltd. , a pharmaceutical website.
3)Center for Disease Control, their website about asthma, with great links.
4)Environmental Protection Agency, the EPA's website about the causes of asthma.
5)American Lung Association, ALA's comprehensive webpage of asthma and a series of links.
Colloidal Silver: Miracle Elixir or Plague of the Name: Christine Date: 2002-11-11 00:16:06 Link to this Comment: 3648 |
If someone told you that they were in possession of something that could cure any illness almost instantaneously, would you believe it? If you were told that there was a small risk of permanently changing the color of your skin, would you be willing to run the risk and take it anyway? The name of this elixir is colloidal silver, and it is concocted by suspending microscopic silver particles in liquid. Colloidal silver has been claimed to be effective against hundreds of conditions and diseases, including cancer, AIDS, parasites, acne, enlarged prostate, pneumonia, and a myriad of others. However, long-term use of this silver can lead to a condition called argyria, in which a buildup of silver salts deposited on the eyes, skin, and internal organs turns the skin a permanent metallic ashen-gray, giving the individual a permanent death pallor. Does colloidal silver really carry the medicinal cure-all properties it is claimed to have, or is it just risk without benefit?
Silver has been used for hundreds of years as both a medicine and a preservative by many cultures around the world. The Greeks used silver vessels to help keep water and other liquids fresh. Pioneers put silver coins in their wooden water casks to keep the water free from the growth of bacteria, algae, and other organisms, and placed silver dollars in milk to keep it fresh. In 1901, a Prussian chemist named Hille, together with Albert Barnes, discovered a method of preparing a true colloid by combining a vegetable product with a silver compound and patented it as Argyrol, the only non-toxic antibiotic available at the time. Another scientist, Crede, advocated the use of colloidal silver to fight bacterial infections because colloidal silver is non-toxic and carries germicidal properties, and through his work he introduced colloidal silver into medicine (1). The colloidal state proved to be the most effective means to fight infections because it demonstrated a high level of activity at very low concentrations, and also because it lacked the caustic properties of silver salts. By the mid 1930s there were more than four dozen silver compounds on the market, although there was wide variation in their effectiveness and safety. The first reason for the vast differences is that the compound was available in three forms: oral, topical, or injection. Second, some were true colloids and some were not, with some containing 30% silver by weight and others hardly any. Third, the freshness of the colloid, the time elapsed since manufacture, had a lot to do with the effectiveness of the compound (2).
In the 1940s, the use of colloidal silver in the medical field began to taper off, mainly due to the advent of modern antibiotics but also for three other reasons. The first was the high cost: even in the depression era of the late 1930s, colloidal silver was reported to have sold for as much as $200 per ounce (in present-day dollars). The second reason was that many of the silver products available at the time contained toxic forms of silver salts or very large particles of silver, a limitation of the technology of the time. The third reason is that in 1938 the federal Food and Drug Administration established that from that point forward, only those "drugs" which met FDA standards could be marketed for medicinal purposes (1). In 1999, the FDA banned the use of colloidal silver or silver salts in over-the-counter products. Silver products can be and are sold as "dietary supplements" in health stores only if they make no health claims, but many advertisers ignore this restriction and still promote the benefits of colloidal silver.
Prolonged contact with or overuse of colloidal silver can result in argyria, which produces a "gray to gray-black staining of skin and mucous membranes produced by silver deposition" (3). The normal human body contains about 1 milligram of silver, and the smallest amount of ingested silver reported to cause argyria ranges from 4-5 grams to 20-40 grams. The silver is deposited on the face and diffused all over the skin, and when the individual is out in the sun the silver darkens as a result of being oxidized by strong sunlight, producing the silver/blue/gray complexion (4). There are a few physical signs that suggest the onset of this condition: the first is a gray-brown staining of the gums, later progressing to involve the skin. The color is usually slate-gray, slightly metallic, or blue-gray and may appear after a few months of silver treatments. The second sign is that the hyperpigmentation is most apparent on sun-exposed areas of skin, especially the face and hands. There are different theories to explain why the blue-gray pigmentation favors sun-exposed sites, but no definite explanations. The third sign is hyperpigmentation of the nail beds. The fourth sign is a blue discoloration of the viscera, which is apparent during abdominal surgery (3). While the majority of individuals using colloidal silver will never develop argyria, some are at higher risk than others. The Environmental Protection Agency suggests that people with low vitamin E and selenium levels are more susceptible to argyria, as are individuals with slower metabolisms, whose natural eliminative systems work more slowly and can be more easily overwhelmed (4). Cases of argyria were most prevalent when silver medications were commonly used, in the 1930s and 1940s, and have since become a rare occurrence. The famous "Blue Man," who was exploited in the Barnum and Bailey Circus sideshow, had a classic case of argyria. The most recent case of argyria is Stan Jones, Montana's Libertarian candidate for Senate. He started taking colloidal silver in 1999 for fear that there would be a shortage of antibiotics due to Y2K disruptions. People ask him two questions: whether his blue-gray skin is permanent, and whether he is dead. His usual response is that he is practicing for Halloween (6).
Advocates of colloidal silver believe there is a call for urgent action to find more natural alternatives to antibiotics due to the increasing difficulty of treating infections. Colloidal silver is argued to be the best alternative, safe for pets, children, plants, and all multi-celled organisms. From his own bacteriological experiments, Dr. Henry Crooks supported the use of colloidal silver, claiming that all known disease-causing organisms die within six minutes of contact with silver. Medical promoters of colloidal silver allege that its presence near a virus, fungus, bacterium or any single-celled pathogen disables the pathogen's oxygen metabolism enzyme, its chemical lung, so to speak. Within a few minutes, the pathogen suffocates and dies and is cleared out of the body by the immune, lymphatic and elimination systems (5). People in the medical field who oppose the use of colloidal silver argue that just because a product effectively kills bacteria in a laboratory culture does not mean it is as effective in the human body. Products that kill bacteria are actually more likely to cause argyria because they contain more silver ions that are free to deposit in the user's skin.
There are compelling arguments both for and against the use of colloidal silver as an alternative to antibiotics. However, very little research has been done to test the effectiveness of silver in the human body in fighting infections, and the risk of argyria increases as the number of people using silver grows while the amount of information known remains constant. If I had an infection of some kind and had the option of taking an antibiotic or drinking colloidal silver, I would choose the antibiotic. For starters, there is more information about the drug: many scientists have performed experiments with it and know a lot about its effects in the human body. Not much is known about colloidal silver, and the risk of taking too much and permanently changing the color of my skin outweighs any benefits the silver may contain. As more research and testing is done on the effectiveness of colloidal silver, it may be discovered that it is a wonderful alternative to antibiotics, but until it is proven safe and effective with low risks involved, I believe that people should stick to the safe side and take something they know will be effective and will not make them look permanently dead. Furthermore, I find it a little disconcerting that colloidal silver is claimed to kill all bacteria within six minutes. I would worry whether it was doing some other damage along the way, and I would worry about other possible long-term effects. I believe that too many people are jumping on the bandwagon concerning colloidal silver; I think an extra measure of caution is necessary due to the evident health risks involved. In conclusion, I see a need for further research to be performed regarding colloidal silver, its usage protocol and its clinical issues.
1) IPS site on colloidal silver
2) A Brief History of Silver and Silver Colloids in Medicine
Sex and Advertising: An "Organic" Experience Name: Heather D Date: 2002-11-11 00:24:06 Link to this Comment: 3650 |
Whenever you turn on the television, it is there. When you are in the doctor's office staring into a magazine, it is staring right back at you. In fact, in today's society, it is assaulting you in sight and sound, no matter where you are or what you are doing. Yes, I am talking about advertising. It is what drives our consumer culture onward. Ads are everywhere, pitching an extensive array of useless products in an equally extensive variety of ways. Advertisers play on several different tactics to get people interested in their products; they use humor, self-esteem, peer pressure and many other things, but the tactic that is most popular and most effective is using sex in advertising. Why is this ploy so effective? Simply because it plays upon the biological needs of every single human being.
No matter what the product, from shampoo to beer, the tactics are all the same. Get a beautiful person in there and maybe, just maybe, the audience will be tricked into thinking that they can be, or have, that beautiful person. Seems simple, right? Wrong. The use of these models and how they are posed is actually a very precise art, starting with human biology and the nature of sex. For the sake of brevity in this paper, let us just look at print advertising. On the cover of almost every magazine in the grocery store there is a gorgeous model or actor staring at you with that ever so seductive "come hither" stare. What most people do not realize is that, for the most part, the appeal of that look is generated on a computer.
Graphic artists have become the Picassos of this technological age; splicing and stretching, they can turn any ordinary woman into a goddess. How do they do this? They simply play upon our biological instincts for procreation. By showing women in a false state of arousal, advertisers are able to associate their products with pleasure and instinctual survival. When a woman is in the early stages of arousal, blood flows to key erogenous areas of the body, namely her breasts and lips. To achieve this effect in print, artists add extra curves and shadows to a woman's breasts and make her lips darker and fuller. Then they enlarge her pupils (another sign of sexual arousal) and lighten the whites of her eyes (because why would her blood be there if it had more important places to be?). Also, in about 65% of all print ads, women are shown with open mouths, which men read as a very sensual, sexual gesture of submission (1). Advertisers also do these things to make the women look "healthy."
Why does this work so well in advertising? According to Richard F. Taflinger, PhD, "Sex is the second strongest of the psychological appeals, right behind self-preservation. Its strength is biological and instinctive, the genetic imperative of reproduction," (2). He also points out, though, that gender plays a huge role in the effectiveness of the advertising.
The biological imperative of the male is to impregnate as many women as possible in order to carry on the species. Richard F. Taflinger accounts for this by saying, "Genetically, it is the most practical course of action. The more females with which a male mates, the greater number of offspring containing his genes are possible. In addition, the cost of sex in terms of time and energy is considerably lower for the male than the female," (3). Showing a woman in a state of arousal gives a man the "good to go" signal, so advertising is ultimately easier and more effective on men. They are receptive to the immediacy of the image, and to the immediacy of the advertising campaign itself.
Women, on the other hand, have a different biological imperative, which makes it more difficult to sell to them. Women instinctively think in the long term and look for that in a sexual partner. Other factors besides health and accessibility come into play. They naturally look for someone who can provide for their offspring, so factors of wealth, power and intelligence quickly come into play and spoil the immediacy of the ad. So women are far more prone to be attracted to images of romantic attachment than to sexual imagery. (4) Also, showing a man in the early stages of arousal (as advertisers do with women) is actually counter-productive, because women see that as an aggressive, threatening gesture. In today's society women do not want to be threatened, and are more prone to wait and then make their choice of mate.
All these sexual signals displayed by advertisers play upon our most basic, primitive instincts. Though we may laugh at the idea of associating toothpaste with sex, it often is associated, and it sells. I guess the question that follows is not really "why is this so effective on us?" but rather, "what does this do to us?" Does this type of advertising have any sort of psychological or physiological effect on us? Advertisers are playing with instincts which have been formed over a span of millions of years, so it does not seem likely that they can change our most primal ways of thinking about the opposite sex and about sex in general. However, there may lie a danger in the fact that as people become more used to advertising and more adept at deciphering the codes it sends them, the ways in which they react to these instincts may change. When we begin to associate the act of sex with gum, there is something intrinsically wrong with that. Unfortunately, with our capitalistic, commercialistic society being what it is, we will continually have to come to terms with the fact that washing your hair is an orgasmic, I mean organic, experience.
1)"Sexual Images of Women to Sell Products 'Facism' and 'Bodyism'", an article with some statistics about the use of women in advertising
2) "You and Me, Babe: Sex and Advertising", An article by Richard F. Taflinger, PhD on the use of sex in advertising
3) "Biological Basis of Sex Appeal", an article by Richard F. Taflinger, PhD
4)"The Evolutionary Theory of Sexual Attraction", an article by Jan Norman on "The Human Sexuality Web"
The Science Behind Raw Food Name: Virginia C Date: 2002-11-11 00:34:38 Link to this Comment: 3651 |
In the past few years, a new dietary trend has become popular. Raw foodism has hit the US, with a strong base and an ever-growing popularity. Raw foodists claim that a raw-food diet (which some define as including as little as 70% uncooked food, while others insist on exclusively [100%] raw food) can boost overall health, increase energy, improve disposition and physical appearance, and even cure many (sometimes terminal) diseases and ailments. However, the scientific community outside of the raw food community doesn't seem to see this diet in the same light as its followers do. What is the science behind the raw food diet, and how much of what its advocates believe is true?
Raw foodists base their practices on the theory that cooking food kills it, destroying its nutritional value (one source claims that cooking destroys between 30 and 85% of a food's nutritional value (9)) and making it unhealthy and harder to metabolize. Some raw foodists claim that all raw foods contain large amounts of enzymes, which are fundamental to human health and to the digestion and metabolization of food, and which are destroyed when food is heated above 116 degrees Fahrenheit (8). One article even claims that cancer, heart disease and diabetes are all directly linked to the consumption of cooked foods (6). Another more specifically targets a chemical called acrylamide, which is found in plastics, is known to be carcinogenic, and was recently discovered to be present at high levels in many baked and fried foods (7), while raw (and boiled) foods showed no traces of the chemical. Yet another article goes further and points out that, aside from the dangers of acrylamide in many cooked starchy foods, meat cooked at high temperatures has been shown to be contaminated by heterocyclic amines, or HCAs, which are also known to be carcinogenic. All in all, the raw food community online has provided many links to scientific articles backing up their theories and practices.
Given all of these interesting scientific pro-raw foodism stances, I remain somewhat skeptical. This is in part because, when I was not navigating specifically from pro-raw foodism sites, I was unable to find many articles in favor of raw foodism, and none that appeared in specifically scientific publications. This makes me question the credibility of these sources, simply because the scientific community at large seemed more skeptical and disapproving than anything else of the raw food movement. However, the reasoning behind anti-raw foodism was not always any more convincing than the pro case.
The majority of the scientific articles stating that raw foods are dangerous refer to animal-borne diseases, such as E. coli and Salmonella. Since most raw foodists are also vegetarians (or even vegans), this tends not to apply. However, vegetarian foods such as unpasteurized milk and juice can harbor harmful bacteria (3), (4). Furthermore, studies have shown that even raw salad greens such as lettuce and spinach can harbor harmful bacteria due to irrigation and fertilization (2). Therefore, there is clearly a safety issue surrounding these raw foods, in that they must be free of the harmful bacteria that typical sterilizing processes such as cooking would normally kill. One site in favor of raw foodism includes the caveat that "The only concern here is if you are eating traditionally raised meat which is frequently contaminated with bacteria. You will want to make sure you cook that food." So we can see that, despite the pro-raw stance, exceptions are made in order to facilitate overall dietary healthiness.
Overall, I simply did not find any current articles praising the raw-foodism diet outside of that community itself. This selective pro-raw foodism made me believe that, despite the diet's possible benefits, there couldn't be such a strong difference if no one in the scientific community at large had noticed the effects. It may be that there are serious scientific articles out there by non-members of the raw food community, and that I was simply unsuccessful in finding them. However, I searched through every seemingly relevant biology database of journal articles, magazines and studies that I could get my hands on, and the results were consistently 0 articles found for the search "raw food." The only remotely "hard science" article I found, while good, was linked to by only one particular raw food website (5).
This makes me think that, since the larger scientific community has not yet got wind of this trend, it can't possibly be as big a deal as its advocates claim. Until the raw food movement undergoes serious critical and objective analysis, I am reluctant to believe that the many claims that the body metabolizes raw foods faster or better, or that raw foods can cure diseases, are more than mere speculations and ideals of the pro-raw foodism movement. One site even claims that "a raw food diet creates major improvements [sic] in health. The reasons are not known, but the experience is unmistakable" (10). This very claim, that 'the reasons are not known,' is what I suspect to be the case behind most of the raw foodism claims. However, this is not to say that said claims are definitely false, only that they should undergo more rigorous scientific investigation.
1)NY Times Online
2)Bugs Dress Salad, an article from the online journal nature.com
3)Eating Well: Food Safety, an article from the AARP's online index of articles
4)Labeling Raw and Undercooked Foods, an article on public health from King County, WA
5)Raw Foods vs. Cooked Foods Looking at the Science, a good scientific article that I found on beyondveg.com
6)Raw Food Q & A, from the rawfoodlife.com website
7)Could these foods be giving us cancer?, from The Guardian
8)The Living and Raw Foods FAQ, from Living and Raw Foods website
9)Healing Powers of Raw Food and Juice part 1, from Shirley's Wellness Café website
10)A Raw Food Diet, from Nov55 website, a "science and science criticism" site
VeriChip Name: Diana DiMu Date: 2002-11-11 00:40:10 Link to this Comment: 3653 |
Helpful Tracking Device or too "Big Brother"?
You've heard about it, the possibility of implanting a microchip into a human body as a tracking device, but is this really just limited to science fiction? Sound too much like George Orwell? Not anymore. Using a Global Positioning System by means of products like VeriChip may help save missing children or the elderly, but is it a violation of privacy? Do the positives of such products justify the negatives of their use? By examining the uses of products such as VeriChip I hope to gain a better understanding of its intended use and the benefits it will provide, while taking into consideration the possible negative outcomes of its widespread use. Will such products provide safety and security at too great a cost? Are such products against one's constitutional rights, no matter how good the intentions of their creators?
What is it?
Applied Digital Solutions, a Florida-based company, has been in the testing and production stages of microchip products called VeriChip and Digital Angel. VeriChip is a miniaturized, implantable identification device with the potential to be used for security, financial, health, identification or other purposes. It is an encapsulated microchip the size of a grain of rice that contains a unique verification number. The microchip is energized and activated when passed over by a specific VeriChip scanner. Previously, the chip used radio frequency to energize itself and transmit a signal carrying the verification number. (1) More recent tests have developed a chip that will use satellites to transmit signals globally. The newer product, Digital Angel, proposes to integrate wireless Internet technology with global positioning to transmit information directly to the Internet. The microchip is inserted under the fleshy part of the skin, typically on the upper arm. The chip and inserter are pre-assembled and sterilized for safety, and the procedure reportedly causes little discomfort. Once implanted, the microchip is virtually undetectable and indestructible. It has a special polyethylene sheath that helps skin bond to it to keep it in place. The chip has no battery and thus no chemicals, and its expected life is up to twenty years. Contact with the body will enable the device to read body temperature, pulse, and even blood sugar content. (2) Research is being done to produce a micro battery that will generate energy through heat or movement. Currently, a Global VeriChip Subscription is $9.95 a month as a form of universal identification. The information can be kept up to date by using the Applied Digital Solutions website or calling a secure support center. (2) Some products currently being manufactured by Applied Digital Solutions and other companies are not implants but are worn in the form of wristwatches or badges. (3)
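To make the passive-chip idea above concrete, here is a minimal sketch in Python of how such a system could be organized: an implant that stores nothing but a verification number and answers only when a reader energizes it, with all personal data kept in a separate registry keyed by that number. This is an illustration only, not Applied Digital Solutions' actual design or API; the class names, the sample ID, and the record fields are all hypothetical.

class PassiveTag:
    # Hypothetical stand-in for the implant: no battery, no personal data,
    # just a stored verification number.
    def __init__(self, verification_number):
        self._id = verification_number

    def respond(self, energized):
        # The tag stays dormant until a scanner supplies power.
        return self._id if energized else None


class Registry:
    # Hypothetical stand-in for the operator's secure database.
    def __init__(self):
        self._records = {}

    def enroll(self, verification_number, record):
        self._records[verification_number] = record

    def lookup(self, verification_number):
        return self._records.get(verification_number)


# Usage example: a scanner pass energizes the tag, reads the number,
# and the operator looks it up in the registry.
registry = Registry()
registry.enroll("VC-0042", {"name": "J. Doe", "allergies": ["penicillin"]})
tag = PassiveTag("VC-0042")

tag_id = tag.respond(energized=True)
print(registry.lookup(tag_id))  # {'name': 'J. Doe', 'allergies': ['penicillin']}

The point of the split is that the chip itself carries no sensitive data; everything of value sits behind whoever controls the registry, which is exactly where the privacy questions discussed below arise.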
Uses and Benefits?
Some people have already begun using VeriChip as a means of providing identification and personal medical information. Through the use of such a microchip, medical records could be saved and carried with at-risk patients for emergency response. Such products could help track down abducted children or lost adults with Alzheimer's disease. Microchips would also help find lost pets or keep track of endangered wildlife, as well as find lost or stolen property. (3) VeriChip could also be used as a means of security: heightened airport security, authorization for access to government buildings, laboratories, correctional facilities and the like. After September 11th, many feel a personal identification record would be beneficial in the event of another terrorist attack. (4) Using VeriChip could also help track convicted criminals or possible terrorists and deter future attacks. Not limited to health and security issues, the future of VeriChip and Digital Angel could lead to implantable mobile phones and access to information found on personal computers and the Internet, such as email. (8)
Problems and Risks?
VeriChip and the future Digital Angel still need approval from federal health regulatory agencies to make sure there are no adverse effects on the wearer; however, there is already much controversy about their use. The biggest concern about the use of VeriChip and other similar microchip tracking devices is the invasion of the user's privacy. People fear that third parties could gain access to the information on the Internet through resale or hacking. Groups like telemarketing companies could use such information for advertising. (3) Many have posed the possibility that if you were able to track down your own child through the use of a microchip, what would prevent other people from doing the same? How many false alarms would the police have to deal with from over-protective parents who thought their children were missing? (7) Parents may deem VeriChip's use in the best interest of their children, but it may eventually lead to even more intense invasions of privacy, creating a society of parents who constantly survey their children. Despite the possibility of more easily tracking down abducted children, kidnappers and molesters alike could potentially disable or remove such microchips. (6) The idea that VeriChip will increase security and prevent terrorist attacks such as September 11th raises difficult ethical questions. Would all prisoners on parole be forced to use VeriChip? How would you implant criminals and terrorists? If the government began implanting United States citizens with microchips that held their social security numbers, what would happen to tourists, students, or even foreign dignitaries? (7) Currently, the microchips in use and in production are passive chips, dormant until activated by a scanner. Future chips like Digital Angel will be active chips, beaming out information all the time. This leads to the problem of creating a continuous power source, as well as developing a chip that is small enough yet still sensitive enough to receive signals from satellites thousands of miles out in space. (4) With the possibility of implantable mobile phones and personal computers comes the possibility of contracting viruses. (8) The risks of such possibilities are currently unknown; therefore possible solutions do not even exist. Most people fear an invasion of privacy as the greatest fault of implanting microchips. A recent CNN poll found that 76% of Americans said they would not want a device like VeriChip implanted in their children, while 24% suggested they would. (3)
While companies like Applied Digital Solutions have good intentions, I feel that at this stage in development there are still many ethical questions that will prevent the widespread use of products like VeriChip and Digital Angel. Although saving children and the elderly from kidnapping and sickness are admirable causes, the encroachment on privacy by such devices makes me feel that the negatives greatly outweigh the positive intentions. I feel the devices, which may start out favorably, have a lot of potential to be corrupted by outside parties, criminals, or yes, even over-protective parents. I think the widespread use of implantable microchips for security could be extremely beneficial but could also lead to higher and newer forms of prejudice against people who do or do not use them. I feel many Americans would be strongly opposed to the possibility of the government having full knowledge of their whereabouts at all times. The use of VeriChip would be extremely useful in hospitals, but how long would it take before hospitals would invest money in the specific scanners needed for widespread use? Currently VeriChip costs $9.95 a month for standard identification purposes, but with increased technology, would its price go up or down? Would people who decide they do not want VeriChip, or who cannot afford it, face prejudice from people or companies that do use such technology? What about the possible viruses or effects that might be caused by microchip use? Would such products affect your body? Your thinking? While I feel there are certainly many Americans who would condone the use of VeriChip and similar products, for the time being I would rather take my chances with safety in exchange for a little more freedom.
Additional Information:
1) http://www.adsx.com/prodservpart/verichip.html, VeriChip Corporation website, part of Applied Digital Solutions.
2) http://www.adsx.com/faq/verichipfaq.html, VeriChip Frequently Asked Questions.
3) http://www.space.com/businesstechnology/technology/human_tracker_000814.html, States News Service article by Alex Canizares on Space.com website.
4) http://abcnews.go.com/sections/scitech/DailyNews/chipimplant020225.html, article by Paul Eng on ABC News website.
5) http://news.bbc.co.uk/1/hi/sci/tech/1869457.stm, article by Jane Wakefield on BBC News website.
6) http://www.futurecompany.co.za/2000/09/15/gillmor.htm, Can Parents Love too Much? by Dan Gillmor on Future Company website.
7) http://www.thehawkeye.com/columns/Saar/Saar_0728.html, opinion piece by Bob Saar on The Hawk Eye Newspaper website.
8) http://www.nytimes.com/2002/11/10/technology/10SLAS.html, Voices in your Head? Check that chip in your Arm by Matt Richtel New York Times Online.
9) http://home.wanadoo.nl/henryv/biochipnieuws_eng.html, Bio-Chip Technology in the News (Links to other articles).
Why Does Pizza Taste So Good? Name: Amanda Mac Date: 2002-11-11 00:43:53 Link to this Comment: 3655 |
Throughout most of life, humans are taught to disobey their taste buds by eating foods such as Brussels sprouts, celery, and liver. How is it that our sense of taste works? Why is it that the things that are so unhealthy for our bodies taste so good? Shouldn't we expect that the foods that are most useful to our bodies are the foods that taste the best? And why is it that those unhealthy foods are enjoyed by practically everyone? Is this sense a biological trait of mammals, and if so, is it hereditary?
First, in order to understand the scientific reasons for tasty foods it is necessary to understand the workings of our taste buds. The sense of taste is mediated by both gustatory receptors and olfactory receptors (1); when food or beverages enter our mouth they contact the tongue and palate, and volatiles rise into our nasal cavities, so that our sense of taste is made up of both smell and taste. Taste is based upon groups of cells (taste buds) which detect oral concentrations of many small molecules, through receptors within the cells, and relay taste information to centers in the brainstem (See Image 1). (2) Taste buds are microscopic onion-shaped bunches of cells buried in the epidermal cell layer of the papillae. Little pores in the cells, called gustatory pores, allow the receptors to contact the tastes in our mouths. The average adult has about 10,000 taste buds. These taste buds are most predominant on little knobs of epithelium on the tongue called papillae. Papillae are little bumps on the top of the tongue that increase the surface area for the taste buds. The papillae also aid in the mechanical handling of food in the mouth. (3) There are four types of papillae (See Image 2). The most abundant of the papillae are the filiform, but these contain no taste buds. Fungiform papillae are located on the front of the tongue and are the most noticeable to the human eye. Foliate papillae are the series of folds on the rear edges of the tongue. Lastly, circumvallate papillae are the large bumps on the back of the tongue. (4)
Humans discern four types of taste: saltiness, sourness, sweetness and bitterness. (5) Notably, though, scientists have suggested that there is another category, umami, the sensation induced by glutamate, an amino acid that composes proteins in meat, fish and legumes and is also included in MSG. Until recently, it was thought that fat did not have a specific taste, but rather provided texture in food. (6) Richard Mattes, professor of foods and nutrition at Purdue, proved differently: he showed that humans can taste fat, which helps explain why fatty foods taste so much better than fat-free foods. So, when someone says "this fat-free cookie tastes like cardboard," it is due to the lack of tasty fat in it. Here, though, is where the slogan "Eat everything in moderation" comes to mind. The reason for unhealthiness is not so much that we eat fatty foods, but that we do not eat them in moderation, for they are certainly somewhat healthy.
Furthermore, it is true that the foods that are most tasty are useful to our bodies' systems. Glutamate is the major fast excitatory neurotransmitter in the brain. In fact, it is believed that 70% of the fast excitatory CNS synapses employ glutamate as a transmitter. Therefore, glutamate is an essential nutrient for our bodies, particularly our brains. (7)
Also, humans and most mammals share the same basic taste bud structure; practically all mammals have a sense of taste. However, people have been categorized into three different types of tasters: super-tasters, medium tasters and non-tasters. Scientists have found that the distinction lies in the number of taste buds on the tongue; the fewer the taste buds, the less the sensitivity to taste. (4) The difference is due to age, whether or not someone smokes, and heredity. Children born of non-taster parents will most likely also be non-tasters.
So, indeed there are biological reasons for the tastiness of pizza and also reasons for differences in particular tastes. I love spicy food, so I am most likely a non-taster, because spiciness does not affect me the way it would affect a super-taster. And lastly, it is important to listen to your body's cravings, because there is a high chance that your body lacks a particular nutrient that you are craving. Eat what tastes good. Eat pizza.
1) Campbell, Neil and Jane Reece. Biology. 6th edition. Pearson Education Inc; San Francisco, 2002. p.1074.
2)Physiology of Taste, abundant resource on taste buds
3)Mythos Anatomy homepage, on Mythos Anatomy website
4)"A Taste Illusion: Taste Sensation Localized by Touch", article written by Lina M. Bartoshuk
5)Scientific American homepage, on the Scientific American website
6)Cosmiverse homepage, Purdue University's journal
7)Glutamate as a Neurotransmitter, Glutamate information
Lasers: the most effective option for tattoo remov Name: Emily Sene Date: 2002-11-11 01:00:34 Link to this Comment: 3656 |
Everyone makes decisions that they regret later in life. Some people make bad financial decisions, others bad relationship decisions, and most make bad fashion decisions. If you're lucky, the mistakes you make have only temporary repercussions. If you're not lucky, one of your mistakes was getting a tattoo. Not much could be done for those who outgrew their body art; the only procedures available caused so much trauma to the skin that they left scars as large and offensive as the original design. That is, until laser therapy.
In order to understand how tattoo removal works, it is necessary to know something about the tattooing process. A needle attached to a hand-held "gun" is used to inject the desired pigment. It vibrates several hundred times per minute and reaches a depth of about a millimeter. The ink must penetrate past the top layer of skin, the epidermis, because its cells divide and die very rapidly. The dermis, or second layer, is much more stable, so the design lasts with only minor fading and dispersion. The ink is insoluble and will not be absorbed into the body. Typically, a scab forms over the design and the wound heals within three weeks. (1)
The most popular procedure for removal is laser surgery, but several other methods are still available. Salabrasion has been practiced for centuries and is somewhat antiquated. It involves numbing the area with a local anesthetic then applying a solution of tap water and table salt. An abrading surface, which can be as basic as a wooden block wrapped in gauze, is used to scrape the area until it turns a deep red color. Then a dressing is applied. Essentially, this is a primitive dermabrasion. (6) It works by shaving off the epidermis and then the areas of the dermis containing pigment. Dermabrasion is a more modern version of salabrasion. A rotary abrasive instrument is used to peel off the pigmented skin. It is called cryosurgery when the area is frozen with a special solution prior to abrasion. (2) Since these are rather traumatic procedures, bleeding is likely to occur. Scarring is significant and a virtual certainty. Until the development of laser surgery, dermabrasion was the most effective and convenient means of tattoo removal.
Tissue expansion is a less common procedure. A balloon is placed under the dermis of the patient and slowly inflated. This stretches the skin and forces cells to divide more rapidly. Then, the tattoo is cut out and the new skin is used to cover the excised flesh. If it is performed properly, tissue expansion only leaves a linear scar. (3)
Perhaps the most invasive option is staged excision. First, the area is numbed with local anesthetic. Then, a dermatologic surgeon uses a scalpel to cut into the skin and actually remove the pigmented sections. The area is closed with stitches and scars wherever an incision was made. This technique works best on small tattoos. (3) For a larger area, a skin graft is necessary. (2)
Since the scarring and pain associated with each of these procedures is often more offensive than the tattoo itself, the option of laser surgery is extremely desirable. As early as the 1960's, scientists began exploring the medical uses of lasers to correct birthmarks such as port-wine stain. Eventually, researchers determined lasers are effective in tattoo removal because heat generated from the beam breaks pigments in the cells of the dermis into small particles which can be absorbed by the body's immune system. The epidermis is "transparent," meaning that the laser travels through it and focuses on the exact level of the pigment. This chars the ink and it breaks down. The tattoo subsequently fades as immune cells attack the foreign particles. Patients liken the sensation of the laser to having hot grease splattered onto the skin, or being snapped with a rubber band. (2) One man reported that his skin smelled like pork chops following the procedure. (7)
The first lasers used for tattoo removal were the Argon and the CO2. They broke down the ink, but at the cost of the other layers of skin. Just as with abrasion and excision therapies, scarring was left in place of the design. (8) Only three lasers have been proven effective in breaking down ink without damaging the surrounding skin: the Q-switched Ruby, the Q-switched Alexandrite, and most recently the Q-switched Nd:YAG. They are referred to as "Q-switched" because of the short, high-energy pulses of light used in the procedure. (2) The Q-switched Ruby is the most commonly used laser for tattoo removal. However, the Q-switched Nd:YAG has recently been found to be more effective on colored tattoos and darker pigmented skin. Its beam penetrates deeper, which increases the amount of damage done to the epidermis. As a result, the surface layers of skin sometimes retain a permanent "frosted" appearance. (6) Research is still being conducted on which lasers are most effective, although it is generally acknowledged that a combination of all three Q-switched beams is necessary in most cases. (10)
The color of the ink and the quality of the tattoo also play a role in laser removal. Black ink absorbs all laser wavelengths, which makes it the easiest to treat. Blue is also fairly easy, while green and yellow are the hardest. (5) If a tattoo is done by an amateur artist, the ink particles are larger and may be spread across several layers of the dermis. This increases the amount of exposure to the beam needed to produce the charring effect. A professional artist will typically have better control over the tattoo "gun" and distribute the ink more evenly and with more precision. (2) No matter what condition the tattoo is in, laser removal is a bloodless, low-risk alternative. It is usually performed in several sessions on an outpatient basis.
Redness and swelling are common immediately following the procedure and the site may scab. (2) Side-effects are generally mild. There is a possibility of hyperpigmentation, an abundance of color in the skin at the treatment site, hypopigmentation, a lack of color at the site, or lack of pigment removal. The chance of permanent scarring is only 5 percent. (5)
Typically, having a tattoo removed is more expensive than getting one put on. The cost will range from several hundred dollars to several thousand based on the size, location, pigment color, and number of visits required. Medical insurance will not usually cover the expense because it is considered a cosmetic procedure. (2) However, there are a number of programs available to those who want to get rid of gang tattoos for free. (9)
The advent of laser removal in recent years has made more invasive techniques virtually obsolete. It is more effective, less painful, and results in less long-term skin damage. There is still some controversy surrounding which beams and wavelengths are most effective in different situations, but laser removal is widely regarded as the most expedient procedure.
1)www.howstuffworks.com, How Tattoo Removal Works
2)www.howstuffworks.com, How Tattoos Work
3)no-tattoo.com, Tattoo Removal, The Things You Did as a Kid, Cleaning up the Mistakes
4)www.topdocs.com, FAQ on Tattoo Removal
5)American Academy of Dermatology, Tattoo Removal Made Easier With New Laser Therapies
6)www.patient-info.com, Article on Tattoo Removal
7)www.thesite.org, Health and Fitness article on Tattoo Removal
8)Skin Laser Center, Article on Tattoo Removal
9)free tattoo removal programs offered to gang members
10)MedScape from WebMD, Abstracts on Tattoo Removal (note: I had to set up an account on MedScape to view these articles so the link might not work on all computers)
PMS- the Premenstrual Syndrome Name: Roseanne M Date: 2002-11-11 03:19:24 Link to this Comment: 3657 |
"What is wrong with you? Why are you acting this way?"
"Are you ok? Why are you crying all of a sudden?"
"What? Rosie, I think you ate enough already. You're still hungry?"
Have you ever had comments like these said to you that you couldn't really answer? This actually happens to me once a month: these sudden outbursts of anger, depression, and of course, the munchies. In some cases it is more severe than in others, but the same symptoms definitely appear at a certain time every month, and this is what society now calls 'PMS.' I always wanted to know what PMS, or premenstrual syndrome, was defined as exactly. I was curious because I was the one affected so much by it, or so the magazines 'Cosmo' and 'Glamour' taught me. I often hurt and offended the people I care for most, although my actions felt uncontrollable, and felt extremely guilty about what I had done. After what was said and done, and the emotional distress I caused myself, I felt that something had to change. Therefore I was curious to know if there was any way I could lessen the degree of my PMS through the research and study for this web paper.
Premenstrual syndrome is defined as 'a series of physical and emotional symptoms that occur in the luteal phase of the menstrual cycle, which is the two week time frame between ovulation and menstruation.' (1) It is a disorder characterized by hormonal changes that trigger symptoms in women; an estimated 40 million women suffer from PMS, and over 150 symptoms have been attributed to it. The symptoms vary for each individual, lasting for about 10 days. Symptoms are characteristically both physical and emotional, including 'physical symptoms as headache, migraine, fluid retention, fatigue, constipation, painful joints, backache, abdominal cramping, heart palpitations and weight gain. Emotional and behavioral changes may include anxiety, depression, irritability, panic attacks, tension, lack of co-ordination, decreased work or social performance and altered libido.' (2)
PMS was first described in 1931 by an American neurologist, and the description has remained essentially the same ever since. However, the cause of PMS is still unknown. The general consensus is that migraine and depression stem from neurochemical changes within the brain. Female hormones also play an important role: a hormonal imbalance (that is, a deficiency in progesterone and an excess of estrogen) results in fluid retention, since estrogen holds fluid, causing women to gain up to 5 pounds premenstrually. (3)
According to theorists and doctors, in order to manage PMS it is recommended to a) eat 6 small meals a day at 3-hour intervals, high in complex carbohydrates and low in simple sugars, which helps balance blood sugar highs and lows; b) consume little or no caffeine, alcohol, salt, fat, or simple sugar to reduce bloating, fatigue, depression and tension; c) take daily supplemental vitamins and minerals to reduce irritability, fluid retention, joint aches, breast tenderness, anxiety, depression and fatigue; and d) exercise 3 times a week for 20-30 minutes to reduce stress and tension. These are daily recommendations by doctors to reduce the degree of PMS without the help of medication.
However, in the cases where women need medication for severe PMS (about 5 of the 40 million), they have three options: 'a) taking tricyclics (Elavil, Triavil, Sinequan) b) taking tranquilizers (Valium, Ativan, Xanax) and c) taking serotonin.' (2) However, after a few cycles of the above medications, patients became forgetful, sleepy, and less communicative. Another form of treatment is a dose of 100 mg of danazol twice daily. Danazol prevents the rise and fall of estrogen levels. Although improvement occurred with danazol treatment (an 80% success rate), menstrual changes and nausea were frequent side effects. After several cycles, some patients' hormones were so well controlled that they were able to discontinue this medication.
Although there is still much to learn about PMS, after this research I can say that nutritional and lifestyle changes are the best way to lessen the degree of PMS. I think it is best to avoid medical treatment, but only if your PMS is not that severe. We do live in a society where we demand 'quick fixes' and expect that a pill will cure our every dissatisfaction; however, there is no instant cure for PMS. It is far too complex, involving too many diverse symptoms and factors, to be treated with one single medication. Again, in conclusion to this research paper, I would reiterate the importance of daily nutritional and lifestyle changes for a lesser degree of PMS.
WWW Sources:
1) Understanding PMS, a comprehensive PMS website
2) Medical Treatment of PMS- Premenstrual Syndrome, many experiments and results of drugs for PMS
3) What is Premenstrual Syndrome?, a concise description of what PMS is exactly
Cocaine:scaring the crap out of America for Decade Name: Diana Fern Date: 2002-11-11 03:23:20 Link to this Comment: 3658 |
Cocaine has been present in American drug culture for the past three decades and has seen a rise in popularity in the new millennium. From Grateful Dead songs to investment bankers to recent movies such as "Traffic" and "Blow," cocaine has permeated the fabric of American society. So what is it that has made cocaine so popular and sought after? And are the biological measures that the American government is taking toward eliminating cocaine ethically sound?
The indigenous peoples of South America have used cocaine for hundreds of years. Indians chewed coca leaves in order to alleviate feelings of fatigue as well as feelings of hunger. Cocaine derives from the coca plant found in Colombia, Peru, and Bolivia. In the modern world cocaine predominantly takes the form of a white powder and is snorted through the nostril, where it is absorbed into the mucous membrane. Cocaine is produced from coca leaves, sulfuric acid, and hydrochloric acid (HCl), and is then purified with water, ammonium hydroxide, and ether. Its chemical formula is C17H21NO4 (1). Cocaine can also be smoked or injected.
This mixture of chemicals creates a multitude of pleasurable effects, which is why it is so habit forming. These pleasurable effects include euphoria, garrulousness, increased motor activity, lack of fatigue and hunger, and heightened sexual interest. Cocaine stimulates the nucleus accumbens, where a large amount of dopamine is released by neurons. Cocaine inhibits the reuptake of dopamine, so dopamine accumulates in the brain. This accumulation of dopamine is what causes the euphoric feeling that cocaine users describe.
Cocaine also has many adverse effects because of its highly addictive nature. It has been associated with cardiovascular problems, respiratory effects including chest pain and respiratory failure, strokes, seizures, and gastrointestinal complications. (2) Memories of cocaine use can be a powerful draw for former cocaine users to fall into relapse. This memory association is attributed to the hippocampus, an area of the brain that assists in recalling memories. (3) Although cocaine causes many health consequences, the harm done by the government's efforts to prevent cocaine from surfacing in the United States far outweighs these effects.
The United States government has in recent years been focusing on eliminating cocaine at its source, the coca plant. Spraying herbicides over areas of presumed coca growth has had severe implications for residents of Colombia and other countries bordering the Amazon. In June of 2002, 7,000 hectares of food crops were damaged by aerial herbicide sprayed by the Colombian government in a U.S.-sponsored sweep of the area. 4,000 people and 178,000 animals were found with major skin, respiratory and digestive problems due to the herbicide. (4)
The medical issues that have arisen from the use of chemicals in the war on cocaine have led to new research into methods of eliminating cocaine that could be just as detrimental as herbicides. The United States and Colombian governments have decided to test a fungus called Fusarium that could kill coca plants. Yet introducing a new species into an area that has already been ravaged ecologically could have unfavorable effects on the environment. Biodiversity could plummet as species of plants and animals unaccustomed to the new fungus die off as a result of a non-native organism being thrust into the ecosystem. The species Fusarium oxysporum is known to cause disease and could endanger the well-being of the humans residing in these areas. (5) The rainforests of this area are already at risk, and government meddling will only worsen the problem.
Although cocaine is potentially lethal and has affected the lives of thousands of addicts, the people who use cocaine do so of their own free will. The people who reside in the South American countryside do not choose to have new species of lethal fungi introduced, or to have their food crops destroyed. By trying to create a moral infrastructure in the United States, government-funded eradication projects have affected the lives of campesinos in Colombia and Bolivia in a direct and harmful way. The war on drugs has gone on for decades, and what does America have to show for it? Cocaine is a reality, yet we have a right to choose. The life of someone in Colombia is not worth less than a statement America chooses to make.
1)"cocaine" Encyclopedia Britannica,
2) "Research Report Series - Cocaine Abuse and Addiction"
3) Netting, J. "Memory May Draw Addicts Back to Cocaine" in Encyclopedia Britannica. Vol. 159, Issue 19. Science News Inc; New York, 2001. p. 292.
4) "Drugs war's true cost" in Encyclopedia Britannica. Vol. 32. Ecologist. June 2000. p. 10.
5) Vargas, "Biowarfare in Colombia?" NACLA Report on the Americas. Vol. 34, Issue 2. Oct 2000. p. 20.
Emotions and How They Inhibit Me From Living Name: Laura Silv Date: 2002-11-11 10:19:34 Link to this Comment: 3660 |
One issue that has been raised a lot in class and in the online forum in the last couple of weeks is the question of emotions: why they happen, what affects them, and what exactly controls them. No one in the class was able to answer these questions to the complete satisfaction of everyone else, so I thought I would try to pursue this topic a little further. Specifically, I would like to explore the effect that premenstrual symptoms have on women's emotions during their twenty-eight day cycle.
Premenstrual symptoms, or PMS for short, cause all sorts of problems, as any woman can tell you. The British National Association of Premenstrual Syndrome's website (http://www.pms.org.uk) briefly describes the symptoms that one may experience over the four-week cycle. Some of the more common physical characteristics of PMS include bloating and cramping in the stomach, backaches, weight gain due to fluid retention, skin problems and headaches. The more emotional symptoms include aggression, fatigue, anxiousness, a feeling of being misunderstood, intense sensitivity, mood swings, depression and, most common (for me, anyway), a feeling that one simply doesn't want to get out of bed. In fact, more than 150 symptoms have been attributed to PMS.
While at least one of the many symptoms of PMS is experienced by nearly all women, about 20% of women experience these symptoms in a much more severe manner that affects their ability to get through simple everyday tasks. Women who experience this form of extreme PMS may suffer from Premenstrual Dysphoric Disorder (PMDD), which was recently added to the American Psychiatric Association's list of mental illnesses. This illness can be described as "PMS intensified by about a thousand", according to a friend who suffers from it. A more complete explanation of the differences between the two can be found at www.conquerpms.com.
Although it is not entirely clear exactly what causes premenstrual symptoms, the reigning theory is that they are caused by the rise and fall of estrogen levels within a woman's body over the course of a month. Estrogen levels begin to rise slowly just after menstruation ends and reach their peak two weeks later, in the middle of the cycle. Estrogen then falls sharply, only to rise slowly and fall again just before menstruation begins. Because estrogen holds fluid, higher amounts of estrogen bring fluid retention with them. Estrogen also increases brain chemicals and activity, both of which fall again as estrogen lessens. This flux can affect mood, causing the emotional symptoms described above. Estrogen also carries with it a sense of vulnerability that is lost again when estrogen falls, leaving women feeling more alert and aggressive.
Endorphins, which are released in the body through exercise, are also commonly believed to affect PMS by relieving some of the physical pain. One can also relieve the intensity of symptoms by changing one's diet to include less sugar, caffeine or alcohol and more fruits and vegetables. Starch especially is thought to lessen the intensity of cramps. A longer list of causes can be found at the website for the Women's Health Channel (http://www.womenshealthchannel.com).
Many women opt for medical treatment of PMS, and with a slew of drugs available, from over-the-counter to prescription medicines, this is typically the most common response. The most common over-the-counter drugs are Pamprin and Midol, which are available in any drug store. Treatments prescribed by doctors usually depend on the age and maturity of the patient's body. For women in their early 20's or younger, a strong pain reliever is usually the more common response unless the woman in question is sexually active, in which case birth control is prescribed in order to kill two birds with one stone, as it were. Birth control is also prescribed by doctors for women whose ages range from the early 20's until the mid-30's. As women begin to exhibit symptoms of menopause, hormone therapy is usually prescribed to make the transition easier and estrogen levels more even over the menstrual cycle.
The most difficult thing to deal with about PMS is the emotional distress that it puts one through. Besides the physical pain that makes one wish that they distributed hysterectomies at birth, the emotional pain can cause relationships to suffer and have long-lasting effects such as depression. On a more short-term basis, irritability and acute sensitivity can blow any harmless comment out of proportion. When left untreated, even by over-the-counter medicines, this emotional roller coaster can affect personal relationships with friends, family and co-workers. These symptoms are especially noticeable during menopause. Hormone therapy can help with these emotional trials and make one's menstrual cycle or the menopause phase a little easier, not only for the sufferer but also for all those in close contact with her.
Laughing Matters Name: Maggie Sco Date: 2002-11-11 10:20:02 Link to this Comment: 3661 |
We all like to laugh, and generally it makes us feel better. Laughter is a common physiological phenomenon that researchers are just beginning to study. What exactly happens when we laugh? What makes us laugh? Is it true that laughter is contagious? Is laughter healthy?
When we laugh, the brain pressures us to simultaneously make gestures and sounds. Fifteen facial muscles contract, the larynx becomes half-closed so that we breathe irregularly, which can make us gasp for air, and sometimes the tear ducts become activated (1). Nerve signals sent to the brain trigger electrical impulses that set off chemical reactions. These reactions release natural tranquilizers, pain relievers and endorphins (2).
There are three different theories of what people find humorous. The incongruity theory holds that we laugh when our logical expectations don't match up with the end of the situation or the joke. The relief theory holds that we laugh when tension has built up and we need a release of emotion; this is commonly seen in movies in what we refer to as 'comic relief' (1). The relief theory also takes into account laughing at forbidden thoughts (6). The third is called the superiority theory: we laugh at someone else's mistakes because we feel superior to them (1). While what people find humorous can be divided into these three generic categories, many factors affect a person's sense of humor, which is why we don't all laugh at the same things. The main factor seems to be a person's age (1). We have all seen young children laugh at jokes that they don't "get" just because they understand the format for riddles (4). There is always a certain amount of intelligence involved in understanding a joke, no matter how basic or stupid the joke may seem (1). So the older a person gets, the more she learns, and her sense of humor will usually become more mature.
However, laughter also occurs in situations not necessarily considered to be typically humorous. Psychologist and neuroscientist Robert Provine, from the University of Maryland, studied over 1,200 "laughter episodes" and determined that 80% of laughter isn't based around humor (3). We laugh from being nervous, excited, tense, happy or because someone else is laughing (4). The listener isn't just laughing in response to the speaker, either. Provine found that in most conversations, speakers laugh 46% more than listeners do (3). I think the fact that speakers laugh more than listeners implies a kind of nervousness and need for acceptance on the speaker's part. They subconsciously think that if they laugh, the people listening to them will also laugh, and the listeners laughing makes the speaker feel more comfortable.
Conversationalists who think that if they laugh they will also make their audience laugh may not be too far off. It is widely accepted that laughter makes people laugh, even if they do not know the original context that caused laughter. The ability of laughter to cause laughter indicates that humans might have "auditory 'feature detectors'--neural circuits that respond exclusively to this species-typical vocalization" (3). These detectors trigger the neural circuits that generate laughter. A laugh generator that is initiated by a laugh detector may be why laughter is contagious (3). So people who are laughing with someone else may not be able to control themselves, even if they do not know what caused the original laugh.
What we consider normal, healthy laughter doesn't come in different forms. Laughter is rigidly structured the same way as any animal call. All types of laughter should be a series of short vowel-like syllables such as 'ha-ha-ha' or 'tee-hee-hee' that are about 210 milliseconds apart (3). When it doesn't follow that structure, laughter usually sounds unnatural or disturbing. Laughter that sounds like 'haa-haaa-haaaaa', that gets louder instead of quieter, or that interrupts the structure of a sentence are all examples of odd laugh forms (5). I realized that many of the examples of 'unhealthy' laughter are what we use in our society to depict villains. Since laughter is structured like animal calls, it is almost as though when we hear something that doesn't follow those patterns, we instinctively know that it is menacing or unnatural.
We often laugh because we're happy, but laughing can also make us happy - and healthy. Laughter releases endorphins, neurotransmitters that have pain-relieving properties similar to morphine and are probably connected to euphoric feelings, appetite modulation, and the release of sex hormones (7). Studies have shown that laughter boosts the immune system in a variety of ways. Laughter increases the number of T cells, which attack viruses, foreign cells and cancer cells, and the amount of gamma interferon, a protein that fights diseases (8). It increases B-cells, which make disease-destroying antibodies (1). Levels of immunoglobulin A, an antibody that fights upper respiratory tract infections, and of immunoglobulins G and M, which help fight other infections, all rise due to laughing (8). Laughing also reduces the amount of stress hormones, some of which suppress the immune system (1). So when you feel better after laughing, you really are happier and healthier.
Laughing is also a full body workout. Some researchers estimate that laughing 100 times is as much of a workout as 15 minutes on an exercise bike (1). This raises the question of exactly what type of laughing they mean: the kind where your stomach hurts by the time you are finished, or any type of laughing? Also, the average adult only laughs seventeen times a day, so it would take a little more than five days to get the equivalent of 15 minutes on an exercise bike through laughing. Laughing exercises the cardiovascular system by lowering blood pressure and increasing heart rate, which any aerobic exercise will do (6). It probably improves coordination of brain functions, which increases alertness and memory, and helps clear the respiratory tract, much as coughing does (8). Laughter increases blood oxygen and strengthens internal muscles by tightening and releasing them (6). One doctor says that 20 seconds of laughing works the heart as hard as three minutes of hard rowing (8). My friends who are rowers say that this is practically impossible, but the fact that research indicates that laughing gives you that much of a workout means it must be good for you, even if not to such an extent.
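These equivalences are easy to sanity-check with some quick arithmetic. The short Python sketch below is only a back-of-the-envelope illustration using the figures quoted above (the 100-laughs, seventeen-laughs-per-day, and rowing numbers are the sources' estimates, not measured data):

# Back-of-the-envelope check of the laughter-as-exercise figures quoted above.
LAUGHS_PER_BIKE_SESSION = 100   # laughing 100 times ~ 15 minutes on an exercise bike (1)
AVERAGE_LAUGHS_PER_DAY = 17     # the average adult laughs about seventeen times a day

days_per_bike_session = LAUGHS_PER_BIKE_SESSION / AVERAGE_LAUGHS_PER_DAY
print(f"Days of average laughing per 15-minute bike ride: {days_per_bike_session:.1f}")  # ~5.9 days

# The rowing claim: 20 seconds of laughing ~ 3 minutes of hard rowing (8)
rowing_to_laughing_ratio = (3 * 60) / 20
print(f"Implied intensity ratio (rowing seconds per laughing second): {rowing_to_laughing_ratio:.0f}x")  # 9x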
Laughter is a very complex physical process. There are theories on how to classify what we find humorous, which in turn makes us laugh. But even if these categories are correct, there are other things that cause laughter. Any extreme emotion can make people laugh, which is sometimes why we laugh in what are considered socially inappropriate moments (like funerals or car accidents). Someone else laughing also triggers laughter, so it really is contagious. There is a great deal of research that indicates that laughter is healthy for you in a variety of ways, such as boosting the immune system and reducing stress. So if you feel like you're getting sick or you don't have much energy, stop worrying about going to the gym or the health center. You just need to find funnier friends.
1)How Stuff Works, "How Laughter Works".
2)Body Manifestations, by Dr. Sarfaraz K Niazi, 2/9/94.
3)American Scientist, Jan-Feb 1996. "Laughter", by Robert Provine.
4)"The Best Medicine", by Raj Kaushik, from The Halifax Herald Limited, 1/20/02.
5)Nature Science Update, "A Serious Article about Laughter", by Sara Abdulla.
6)Laughing Out Loud to Good Health
7)Bartleby.com, using the Columbia Encyclopedia as a reference.
8)MDA Publications, Quest, Volume 3, Number 4, Fall 1996. "Is Laughter the Best Medicine?" by Carol Sowell.
Illegal Drugs Name: Jennifer R Date: 2002-11-11 11:04:24 Link to this Comment: 3663 |
To Botox or Not to Botox Name: Brie Farle Date: 2002-11-11 12:27:34 Link to this Comment: 3664 |
TO BOTOX OR NOT TO BOTOX
Where does our quest for perfection end? Self-improvement, especially in the area of personal appearance, is applauded in American culture. In April 2002, the FDA announced the approval of Botulinum Toxin Type A to temporarily improve the appearance of moderate to severe frown lines between the eyebrows (glabellar lines), a medical condition that is not serious. Botulinum Toxin Type A, better known as Botox, was first approved in December 1989 to treat eye muscle disorders (blepharospasm and strabismus) and in December 2000 to treat cervical dystonia, a neurological movement disorder causing severe neck and shoulder contractions. (1).
Botox is a protein produced by the bacterium Clostridium botulinum. In medical settings, it is used as an injectable form of sterile, purified botulinum toxin. In 1895, Emile P. Van Ermengem first isolated the Botulinum microbe. He discovered that this bacterium produced a toxin, and understood that this was what caused disease. However, it wasn't until 1946 that the toxin was isolated in a crystal form by Edward J. Schantz. In the early 70's, Dr. Alan Scott began investigating the use of botulinum toxin injections to treat crossed eyes (also called strabismus). Clinical studies for this purpose were initiated in 1977. (6).
Shortly after these studies, Dr. Jean Carruthers, a Canadian ophthalmologist, noted a marked decrease in the appearance of frown lines on a patient who was receiving botulinum toxin injections to relieve twitching of the eye (blepharospasm). Soon after, Dr. Carruthers teamed up with her husband, Dr. Alastair Carruthers, a Canadian dermatologist, to use the botulinum toxin to treat frown lines and crow's feet. The results of these treatments were published in 1989, laying the foundation for a revolution in cosmetic surgery. (6).
Small doses of toxin are injected into the affected muscles and block the release of the chemical acetylcholine that would otherwise signal the muscle to contract. Thus, the toxin paralyzes the injected muscle. Botox worked so well to help medical conditions, it was tested as a cosmetic procedure. (1).
"In placebo-controlled, multi-center, randomized clinical trials involving a total of 405 patients with moderate to severe glabellar lines who were injected with Botox Cosmetic, data from both the investigators' and the patients' ratings of the improvement of the frown lines were evaluated. After 30 days, the great majority of investigators and patients rated frown lines as improved or nonexistent. Very few patients in the placebo group saw similar improvement." (1).
Within a few hours to a couple of days after the botulinum toxin is injected into the affected muscle(s), the spasms or contractions are reduced or eliminated altogether. The effects of the treatment are not permanent, reportedly lasting anywhere from three to eight months. By injecting the toxin directly into a certain muscle or muscle group, the risk of it spreading to other areas of the body is greatly diminished. (2).
When Botox is injected into the muscles surrounding the eyes, those muscles cannot "scrunch up" for a period of time. The wrinkles in that area, often referred to as "crow's-feet," temporarily go away. (2).
(For before and after pictures, see (7))
Most of the patients in the study were women, under the age of 50. The most common side effects were headache, respiratory infection, flu syndrome, blepharoptosis (droopy eyelids) and nausea. Less frequent adverse reactions (less than 3% of patients) included pain in the face, redness at the injection site and muscle weakness. (1).
In June 2002, the American Headache Society released findings of 13 studies that indicate Botox rid a number of patients of severe headaches. (3).
One particular project suggests that people plagued with headaches who also had Botox injections for cosmetic reasons suffered from fewer migraines, experienced a reduction in the disabling effects of migraines and used less pain medication. (3).
The headache and Botox connection began emerging in 1992 when a California physician noted his patients who got Botox injections said they were having fewer headaches. (3).
"The biggest advantage to Botox is its lack of side effects, especially compared to other medications," Dr. William Ondo of the Baylor College of Medicine said in an AHS press release. "It really is extremely safe and appears to be very effective for some people." (3).
Researchers think Botox blocks sensory nerves that relay pain messages to the brain in order to relax muscles, making them less sensitive to pain. (3).
"Scrunching" your eyebrows in a concerned or angered expression relays these messages and may cause headaches.
"More than half of the 48 patients in a study at a Mayo Clinic in Scottsdale, Arizona, said their migraine occurrences dropped by 50 percent or more. Of the ones who had a positive response, 61 percent said they had headaches less frequently and almost 30 percent said the headaches were less severe. At the Baylor College of Medicine Headache Clinic, 58 patients participated in a controlled trial. Some received Botox and others had placebos. After three months, 55 percent of the patients who received Botox reported at least moderate improvement in their headaches. Two of the 29 who got the placebo water injections reported the same results." (3).
What's the worst that can happen, you might ask, from having a toxic substance injected into your face? According to results from a study conducted at Wake Forest, Botox side effects are minimal. Doctors found a small risk that the skin around the injection site would droop temporarily. (3). This is known as blepharoptosis and occurs in about 5% of patients. It usually appears 7 to 14 days after the injection and can last 4 to 6 weeks. A speedier way to treat it is the application of prescription eye drops (iopidine). In many cases, these drops will help resolve the droop within a few days. To reduce the risk of blepharoptosis, it is recommended that patients obtain Botox from a physician who is experienced in its use. It is also important for a patient to remain vertical for 4-6 hours after the injection. This allows the Botox to be taken up in the treated area and reduces the chance of displacement to other muscles. The injected site should not be touched for two to three hours following injection. (4).
Botox sounds like a miracle drug for those desiring a wrinkle-free face. It has minimal side effects and is relatively inexpensive compared to other forms of cosmetic alteration such as surgery. Different patients will require different amounts of Botox treatment, which can vary the cost. According to recent information, Botox treatment can cost anywhere from $300 to $700 per treatment. (6). However, what should we anticipate as the future of Botox?
Botulinum Toxin Type A, Botox, is related to botulism. Botulism is a form of food poisoning that occurs when someone eats something containing a neurotoxin produced by the bacterium Clostridium botulinum. Botulinum toxin A is one of the neurotoxins produced by Clostridium botulinum. (2).
Thus, muscle paralysis is the most serious symptom of botulism, which in some cases has proven to be fatal. The botulinum toxins attach themselves to nerve endings. Once this happens, acetylcholine, the neurotransmitter responsible for triggering muscle contractions, cannot be released. Essentially, the botulinum toxins block the signals that would normally tell your muscles to contract. If, for example, it attacks the muscles in your chest, this could have a profound impact on your breathing. When people die from botulism, this is often the cause; the respiratory muscles are paralyzed so it is impossible to breathe. (2).
Said in this way, Botox may not sound so harmless. It is a serious toxin, and although the side effects from facial injections do not sound lethal, its users should realize the lethal properties of the substance.
Furthermore, what will happen to all of the Botox enthusiasts when they decide to discontinue Botox injections? The current recommended 'dosage' is to return once every three months for new injections between the eyebrows. These muscles are paralyzed and weakened. Over time, will these muscles be able to function properly without Botox injections or will the toxin have weakened their natural capabilities to the point of destruction? When all of the women from the study are eighty years old, will their eyes be visible under permanent blepharoptosis?
We are living in an era of self-manipulation and self-perfection. Is it not ironic that many of the same individuals who do their grocery shopping exclusively at the organic market are off having poison injected into their face to look healthy? The newest rage of self-improvement enthusiasts is Botox Parties. Botox parties are one of the newer, more controversial ways to administer Botox. Typically, Botox party guests will get a quick lecture on the risks before receiving Botox treatments in a private area. Alcohol is sometimes served, although it should never be served prior to the treatments (and many doctors will say that alcohol should never be involved in any medical procedure, before or after). (6).
"Some people enjoy Botox parties because of the support they receive from other guests. In addition, a Botox party can be a more economical way to have treatment, since the prices for the actual toxin are usually lower in large groups. At any rate, these occasions have been growing ever more popular, and many highly qualified physicians look upon them with disfavor." (6).
There are many physicians who refuse to do Botox parties. They believe that no medical procedure should be administered in a social setting. They also argue that it is impossible to meet the specialized needs of each individual Botox patient in a party setting where the doctor is administering up to ten Botox treatments in an hour. (6). It almost sounds as if Botox is comparable to an addictive drug; a qualified dealer, a group of high paying clients, and a social setting complete with food and alcohol to make it more acceptable and more fun.
So far, Botox has been approved to help with crossed eyes, uncontrollable blinking, cervical dystonia, and now moderate to severe frown lines between the eyebrows. It is being studied to help with excessive sweating, spasticity after a stroke, back spasms, and headaches. (2) Is Botox a problem, or is it the newest and best cure for a myriad of medical and cosmetic concerns? Apparently, the risks of Botox injections, and the unknown future effects of Botox, are not enough to discourage Botox enthusiasts.
Ours is a generation that can no longer find the distinction between potentially inflicting self-harm and striving to look one's best. In Hollywood, the treatments are so popular that some directors complain that their leading actors can no longer convincingly perform a full range of facial expressions. (5)
Doctors knowledgeable in Botox are in high demand. This increases the possibility that not all doctors know all they should. This includes knowing when Botox won't be useful at all. "Muscles cause some wrinkles, but many result simply from the loss of elasticity that goes naturally with aging (or, less naturally, with smoking and sun exposure), causing the skin to sag and crumple". (5)
There are treatments for this sort of wrinkle, but Botox isn't one of them, says Dr. David L. Fledman, director of plastic surgery at Maimonides Medical Center in Brooklyn, New York. "I had a patient recently who came in asking for Botox," he says. "It would have done no good at all. In fact, she might have ended up looking worse." (5)
Botox isn't a cure-all, and it is accompanied by some strange side effects. In September 2002, the company that distributes Botox, Allergan Inc., was asked to revise its advertising. Thus, all advertisements for Botox will disappear until Botox is advertised seriously and realistically as a medical procedure and not as a simple method to destroy "those tough lines between your eyebrows." (8)
...."If you don't mind getting shot up with poison and you don't mind paralyzing parts of your face-well, you've got plenty of company."(5)
1) FDA , an article posted by the FDA upon approving Botox
2) How Stuff Works , A great site with explanations on how everything works!
3) CNN , an article regarding the helpful effects of Botox on headaches.
4) www.ebody.com , Details about the side effects of different medical procedures
5) Vreflect.com, Talks about Botox as a cultural phenomenon
6) Botox Injections Information , A professional site with information and links to doctors
7) Botox Injections Information, Go here to see pictures!
8) Sunwellness Magazine, An article from September 2002 announcing the FDA's notice to Allergan Inc. to halt advertising
Dyslexia Name: Lawral Wor Date: 2002-11-11 12:41:39 Link to this Comment: 3665 |
Dyslexia is a learning disorder that affects a large portion of the population. Once it is diagnosed, it can be overcome, but undiagnosed it can prove to be a great hardship to people who have it, especially children in grade school who are trying to learn to read or do basic math. Dyslexia has recently been linked to genetics and to brain abnormalities. A lot of research and positive action is being conducted to help people, especially children, work around dyslexia so that they can function normally at school. However, not everyone sees dyslexia as a harmful thing. Many groups are dedicated to the creativity and artistic talent that usually go hand in hand with dyslexia, and many groups formed by dyslexics for dyslexics provide support and an outlet for artistic endeavors. With research and new teaching techniques the effects of dyslexia can be overcome, but for some, that is not the goal.
Researchers have been working on finding the root of dyslexia for years. While it is still unknown why and how dyslexia occurs, a lot of advancements have been made. Dyslexia is now sometimes classified as a genetic brain anomaly. The anomalies in a dyslexic's brain impair how they perceive and therefore learn language skills. (3) It is still unclear where in the brain these anomalies would occur and to what extent. One of the theories, however, is that dyslexia is caused by anomalies in the brain's lipid metabolism. (3) The research is still very preliminary.
Dyslexia is first and foremost a language-based learning disorder. It is characterized by problems with single word decoding and often goes undiagnosed, depending on the age and school level of the person suffering from it. (1) If caught when a child is young, in kindergarten or first grade, the child can learn to overcome the major pitfalls of dyslexia with special learning techniques such as phonological training and become a strong reader and a strong student. Using multi-sensory techniques to teach children with dyslexia seems to be the most effective approach. By using all of their senses to learn and then to practice, children are "overlearning" in an effort to make up for their poor memory and initial confusion. (2) If a child with dyslexia is not diagnosed until after they have formed most of their reading habits, around third grade, the special learning techniques are not as effective. (1)
As dyslexia becomes more mainstream and loses some of the stigma attached to it as a learning disorder, support groups made for dyslexics by dyslexics have become more and more common. These groups share methods for working around dyslexia and emotional support for those who suffer from it. Most of them also have special programs for parents or teachers of dyslexic children. These groups all stress the fact that it is highly possible to be dyslexic and still be successful. Some even have lists of famous people who have been reported to have learning disorders such as dyslexia. These lists usually contain the professions of those listed, further emphasizing the variety of ways in which dyslexic people can achieve success. The lists are usually very eclectic, including such luminaries as Walt Disney, Winston Churchill, MC Escher and Whoopi Goldberg. (4)
Many of the support groups for people who suffer from dyslexia and their families also celebrate the positive benefits of dyslexia. Dyslexia makes language development difficult because it causes people to think in pictures. (5) For this reason, many dyslexics are very talented and creative artists. There are many websites for these groups that display the work of their members. They share painting, drawings, poems, stories, and other artwork from their members whose ages range from elementary school age to college students to adults. For many of the people who post their work with these support groups, dyslexia is the reason that they are artists. It is either their inspiration or the source of their talent. Either way, they comment on their need to have it but the conflicting hardships it causes. Karin Peri, a teenager, wrote a poem entitled "Dear Dyslexia" that perfectly exemplifies this inner conflict. (6) She writes:
Because of you
I see a different angle.
you make me who I am,
But what would life be without you?
A life free of constant frustration,
A chance to see things "correctly"
To say exactly
What I have to say
And write exactly
What I have to write.
But without you
Would I have anything to write?
Anything to say?
Would I have a poem?
Her words are echoed in the work of many of the other artists who post their work on these sites.
Though most dyslexics will admit that dyslexia has been hard and things would have been easier without it, especially school, there are some who embrace it and what it has to offer them. With new teaching techniques that encompass more of the senses rather than trying to force dyslexics to learn traditionally, it will be easier for future children with dyslexia to hold on to the benefits that it can bring and still be successful in school. The support and informational groups proliferating for sufferers, their families, and their educators continue to provide services and help to spread these practices.
3)Brooks, Liz. "Dyslexia: 100 Years on Brain Research and Understanding." Dyslexia Review Magazine. Spring 1997.
Dreams Name: Elizabeth Date: 2002-11-11 13:04:27 Link to this Comment: 3666 |
Dreams
While we sleep, our bodies rest from the events of the day and recharge in order to face the next round of challenges they will face during their waking lives. During sleep, our brains produce a fractured, often nonsensical amalgamation of random events and people, otherwise known as dreams. These dreams often provoke powerful reactions of fear or pleasure, as, for all their improbability, they follow reality in such a way as to trick the dreamer into reacting to this fantasy world as if it actually existed. However, although dreaming undeniably is a large and memorable part of one's nightly sleep cycle, scientists have yet to define for certain the biological function of dreams. Some believe dreams are a remnant of our Neanderthal past, when our ancestors used dreams as a sort of training ground for developing appropriate reactions in the life or death struggles they faced every day. Others think dreams simply stem from random impulses which produce images from one's daily life with no particular significance. Others believe that dreams serve to "clean out" the emotional stress accumulated during the day. Whatever the hypothesis, it is difficult to prove for certain the purpose of dreams.
Researchers have identified four distinct stages to the sleep cycle (1). Of these, the phase known as Rapid Eye Movement (REM) is most closely associated with dreaming. The REM phase, characterized by rapid heart rate, distinct brain waves, and an increased amount of electrical activity in the brain, produces the most vivid and memorable dreams (2). Initially, scientists believed that dreams only occurred during REM sleep. While studies have identified dreams during other phases of the sleep cycle, the most powerful dreams are still associated with REM sleep (1). Previously, scientists and psychologists believed that dreams were simply a byproduct of the functions of REM sleep, but the discovery of the possibility of dreams occurring during non-REM stages of the sleep cycle undermines the validity of this theory. This has led many in the scientific community to develop new and often farfetched theories of the biological function of dreams.
The formal study of dreams first began with psychoanalysts like Sigmund Freud, whose dream theory of 1900 served as an influential and widely accepted conception of why humans dream. Freud believed that dreams reflected the baser impulses of the human subconscious, impulses which could not be acted upon in society. His observations were based on subjects whose disturbing dreams haunted them even while they were awake (2). However, while dreams certainly can show the dreamer undertaking actions which she would never have the opportunity to take in real life, Freud's theory seems to imply that only the seriously troubled dream. Research has shown that all humans dream, whether or not they remember their dreams the next day.
In the 1960s and 70s, researchers at the Harvard Laboratory of Neurophysiology focused on observing the biological causes of REM sleep in order to better understand why humans dream. They discovered that REM sleep is induced by the release of the brain chemical acetylcholine. The release of this chemical stimulates nerve impulses which recreate random bits of one's internal information in a sequence which may not conform to logic. J. Allan Hobson and Robert McCarley, the primary researchers at the Harvard laboratory, named this new theory the activation-synthesis hypothesis. From this hypothesis, Hobson developed an idea of dreaming not as an arena in which to explore hidden urges, but as an opportunity for mental "housekeeping". He also believed that dreams could serve to solidify emotional ties to memories (2).
Since scientists like Hobson established a biological basis for why humans dream, other researchers have developed their own theories regarding the purpose of dreams, while undermining others' hypotheses. Hobson's concept of dreams as an opportunity for mental reorganization has been criticized as research has shown that very little of the day's events recurs in that night's dreams (4). Rather, dreams tend to deal with larger issues of conflict and emotions, which has led others to develop a concept of dreams as stimulated by a threat simulation mechanism, a remnant of the days when humans faced life or death struggles on a daily basis. This theory also takes into consideration the recurring dreams of war veterans and trauma victims, as in these cases the brain attempts to present the dreamer with the former conflict again and again in order to prepare them to deal more effectively with such a catastrophe in case they are ever in such a position again (3). Many agree with this theory in part, as they recognize the problem solving aspect of dreams, but may not believe in the existence of a threat simulation mechanism (5). Still others believe dreams have no biological function at all, only a cultural significance assigned by human attempts to make sense of dreams (4).
Humans may never discover the actual biological function of dreams. While many theories of dreams as an opportunity to reorganize one's thoughts and to solve problems sound feasible, it is difficult to prove anything conclusively, due to the relative youth of neurobiology and the shadowy nature of dreams themselves. As our general understanding of the brain develops, scientists may be better able to understand why we dream.
Works Cited
3. "The Biological
Function of Dreaming"
DEHYDROEPIANDROSTERONE: By Any Other Name would be Name: Chelsea Ph Date: 2002-11-11 15:00:53 Link to this Comment: 3667 |
What exactly is dehydroepiandrosterone? Dehydroepiandrosterone (DHEA) is one of the hormones secreted by the adrenal glands, located on top of the kidneys in human beings. DHEA has been touted as everything from "chemical trash" to "the fountain of youth drug". Thousands of studies on DHEA have been conducted, but few have been long-term, and even fewer have been done on humans. Despite this, however, many people continue to use DHEA as an over-the-counter medication for heart disease, to combat the aging process, cancer, obesity and many other ailments. What is the biological role of DHEA? Is it a viable possibility that DHEA really is a miracle drug?
DHEA is the most abundant steroid hormone found in the human body, and is used in the synthesis of other hormones, such as testosterone and estrogen. Levels of DHEA in the human body peak around ages 20-25 and steadily decline thereafter. One of DHEA's most important functions is counteracting the presence of high levels of cortisol, a chemical that "accelerates the breakdown of proteins to provide the fuel to maintain body functions" (1) while the body is under significant stress. Cortisol is designed to allow the body to react quickly when threatened, but can be damaging when produced for a long period of time. DHEA works as a buffer between the body and cortisol, and is triggered by the same stress that stimulates production of cortisol. As age increases and DHEA levels decrease, the body has fewer defenses against the effects of cortisol, hence the idea of supplementing the body's reserves. However, there is wide debate within the medical community about the consequences of taking DHEA supplements because of the lack of long-term testing on humans.
While information from human studies is not forthcoming for the moment, there are many interesting theories about the effects of DHEA based on studies done with laboratory animals, mainly mice and rats. DHEA was found to inhibit the growth of cancer cells and to help genetically obese mice lose weight, as well as aiding strength, agility and memory in older mice. While the conclusions are very exciting for the rodent community, the question remains as to whether or not these results can be duplicated in humans. "For 50 years we've studied estrogen replacement therapy in women, and look at how much anxiety the latest studies on estrogen are causing. We have no equivalent ... studies for these other substances. We just don't know." (6)
Based on the information gathered from these studies on rodents, DHEA is thought by many to be an essential chemical for the body's tissues. It is theorized by one man (no indication that he is a doctor or scientist could be found!) that by bringing the body's levels of DHEA up to those of a 25-year-old, the course of Alzheimer's can be slowed, and the immune system can be stimulated to fight cancer, degenerative diseases and AIDS (5). While this information is tempting to believe, and very convincing in theory, there simply is no proof to back it up.
While levels of DHEA do appear, on the surface, to have negative correlations with aging, disease, and immunity, the blasé attitude with which it has been marketed is highly inappropriate. Rats and mice are not human; they do not have levels of DHEA even approaching ours, and just because you feel good now does not mean that you will in a year, or five, or ten. Interestingly enough, some of the many side effects thought to come from long-term use are breast and prostate cancer, because DHEA is used in the synthesis of testosterone and estrogen; therefore, too much can actually cause tumors. The presence of tumors in mice was significantly reduced because they have very little DHEA naturally, but the physiology of humans (because of the already high levels of the hormone) may trigger the reverse effect. Our country's culture is obsessed with being youthful, healthy and thin, and marketers tout DHEA as the cure-all, despite the lack of conclusive evidence to support their claims.
The time may come when appropriate, long-term trials indicate that there are benefits that outweigh the risks of taking DHEA. It is important to bear in mind, however, that while reduction in DHEA levels occurs with age, it does not necessarily follow that supplements will prevent disease or inhibit the aging process. Our culture, perhaps from genetic predisposition, craves youth and the health that goes with it; the idea of something that will keep you young is too tempting, and far too eagerly accepted. Many other factors play into the aging process, including diet, exercise, genetics, and environment, and the effects of these factors cannot just be erased by taking a pill.
References:
1) University of California, Berkeley Homepage , a resource from the medical courses offered at UC Berkeley
2) The University of Montana Research and Scholarship Page , a resource from the University of Montana
3) Cognitive Enhancement Research Home Page , CERI Homepage
4) DiagnosisTech International, Inc. , DiagnosisTech Information Page
5) The DHEA Homepage , Interesting site linking DHEA to human maladies
6) AARP Home Page , An article from the AARP
7) Quackwatch Home Page , An article from HealthNews
8) Anti-Aging Revolution , A chapter from "DHEA and Pregnenolone: The Anti-Aging Superhormones"
The Hip Questions Name: Katie Camp Date: 2002-11-11 15:20:55 Link to this Comment: 3671 |
Controversy surrounds the abnormal development of the acetabulum and femoral joint in infants, the hip. Congenital Hip Dysplasia, or Developmental Dislocation of the Hip (CHD/DDH), encompasses a variety of types and degrees of severity, diagnosed by the many tests that are used for its diagnosis. Treatments vary as well, and by far the most confusing question about CHD/DDH is the question of its root cause. Congenital Hip Dysplasia is thought to "run in families" (1), and many statistics support the assumption that it is a matter of genetics. However, other research has also observed that an infant can develop problems with the hip in the womb and after birth, unassociated with any genetic factor, identifying the disorder as Developmental Dislocation of the Hip. Whether one identifies with the evidence for its heredity or with the idea that any child can develop a deformed femur and acetabulum through some fault of development, it is important to be aware that CHD/DDH affects one to five children per 1,000 births (3), how it is diagnosed, the different degrees of severity to which it occurs, and the multiple choices in treatment.
CHD/DDH is the abnormal formation of the hip joint, causing easy subluxation or dislocation of the hip. The hip joint, a "ball and socket" joint, is composed of the acetabulum (the socket) and the head of the femur (the ball). There are three main classifications of CHD/DDH. Dysplasia is the term describing just the "abnormal development" or malformation of the femur and/or the acetabulum. Subluxation classifies a partially dislocated hip, and a further classification is a completely dislocated hip. A four-tiered scale developed by Dr. Crowe in 1979 is used in diagnosing newborns to identify the severity of malformation and degree of dislocation. Crowe I is the least severe; the femur and acetabulum are almost normally developed and there is less than fifty percent dislocation. Crowe II hips result from abnormal development of the acetabulum and a fifty to seventy-five percent dislocation. In the Crowe III stage, the acetabulum lacks a roof and so the femur creates a "false acetabulum" using the pelvis, resulting in complete dislocation. "High hip dislocation" can be used to describe the Crowe IV classification because the acetabulum is completely underdeveloped and the femur sits high on the pelvis in an attempt to form some sort of joint articulation. (1)
To diagnose CHD/DDH and indicate the degree of severity, a variety of tests are routinely performed on newborns. The Barlow test is positive when the "hip is flexed...thigh adducted [and] pushing posteriorly in line of the shaft of [the] femur causing [the] femoral head to dislocate posteriorly from [the] acetabulum" (7). This, however, is not entirely conclusive of CHD/DDH, and it is confirmed by the performance of Ortolani's test. The Ortolani test involves bringing the "femoral head from its dislocated posterior position to opposite the acetabulum," reducing the dislocated hip, otherwise described as bringing it back into proper positioning (6). If positive, this produces an audible "clunk" as the hip is reduced. The Barlow test shows that a hip has the potential to dislocate, whereas the Ortolani test confirms its dislocation. Because both of these physical tests require experience and specific skills to identify the feelings and sounds of positive results, controversy surrounds the use of the Barlow and Ortolani tests. Examination by x-ray and ultrasound has become an additional tool in diagnosis. X-ray, however, is less common because it shows the hip in only one fixed position, whereas ultrasound allows the hip to be seen in many positions and during movement. Ultrasound was first developed as a CHD/DDH diagnostic tool in 1978 to confirm positive physical tests and identify the degree of abnormality of the hip. A scale developed by Graf is based on the depth and shape of the acetabulum as viewed in ultrasound and is similar to the Crowe scale described earlier. A type one hip is normal and no treatment is necessary, while a type two hip has a shallow acetabular cup and is just "developmentally immature" in infants less than three months old but should be treated in those older. In type three the hip is partially dislocated and in type four it is completely dislocated, both requiring treatment. (5) Other general symptoms that prompt further examination are a discrepancy in leg lengths, asymmetrical skin folds around the pelvic area, and a limp.
Just as there are many different degrees of dislocation and dysplasia (malformation) associated with this disorder, there are many different treatments. The purpose of treatment is to force correct development of the acetabulum so that the femoral head can sit properly in the joint and further subluxation or permanent dislocation does not ensue. The most common and perhaps simplest way of achieving this is the use of the Pavlik harness, von Rosen splint, or a stiff shell cast. Each is used on infants in the first six months of life to spread the infant's legs apart and force the femoral head into the acetabulum, applying pressure to "enlarge and deepen the socket" (2) and develop the hip normally. Closed manipulation repositions the joint by moving the leg around to get the femoral head into the proper location. In children older than six months and in other severe cases, treatment can involve surgery, which manually repositions the joint. Before the age of four, femoral osteotomies are performed to reconstruct the hip. Pelvic osteotomies are performed after the age of four to limit instability and reduce the dislocation of the hip (1). A malformed joint and dislocated hip are obviously not fatal, but treatment is important because the resulting pain is often unbearable. If undetected, CHD/DDH can cause severe, painful arthritis as the forces of weight bearing wear down the cartilage of the femoral head that usually allows for comfortable and easy motion of the joint. CHD/DDH left undetected requires later treatments involving the use of anti-inflammatories, walking devices (such as a cane), physical therapy, and most often a resort to total hip replacement to correct hip alignment (1).
Despite the variety of CHD/DDH types and treatments, CHD/DDH is fairly uncommon, occurring in only about 1.5 per thousand births, although with more specialized examination techniques this number may be as high as 5 per thousand. It is hard to connect each case with one another and determine the specific cause of CHD/DDH. There are significant similarities among cases of hip malformation that have suggested genetic causes of CHD/DDH. In many cases, CHD/DDH has "run in families" (1) and in particular ethnic groups. The prevalence in North American Indian communities has often been as high as 35 cases per one thousand births (3), and there is a high frequency in the Lapp community, native to Norway (4). Also, since the majority of cases involve the left hip, female infants, and first-born children, genetics could connect these cases. It has been observed that mothers who carry a high level of a particular collagen, a bone- and cartilage-building material (1), often have CHD/DDH newborns. Also, mothers with large hormonal changes and a strong estrogen presence during pregnancy experience "increased ligament laxity...thought to cross over the placenta and cause the baby to have lax ligaments" (2), affecting the development of the hip. Causes of CHD/DDH are thus tied together by common hereditary features.
CHD/DDH is also argued to be purely developmental, meaning it could result in any person based solely on particular physical situations. For example, CHD/DDH is common in breech and caesarean births, probably because of the positioning of and pressure on the acetabulum and femur. Another argument is that the increased incidence in the native communities of North America and Norway is not due to genetics but to the practice of swaddling and the use of cradleboards, which result in "extreme adduction" (2), bringing the hips together and displacing the femoral head from proper positioning in the acetabulum. Developmentally, the femoral head reaches its maturity in the womb while the acetabulum completes its development in the first few months of life (3). In cases of unusual positioning of the hips, the acetabulum could fail to develop its proper superior position, thus allowing dislocation. Finally, because some cases of diagnosed CHD/DDH self-correct as the acetabulum completes its development in the first few months of life, it is possible that this return to normalcy is developmental in much the same way that the dislocation itself is a developmental disorder.
While CHD/DDH is simply defined as the subluxation or complete dislocation of the hip joint, the variety in which it presents itself, the confusion in diagnosis and the controversy surrounding the identity of its cause make CHD/DDH truly complicated. Whatever the root cause, it is fortunate that answers to the questions raised are being sought. Increasing awareness allows more and more infants to be diagnosed and treated, usually with 90% success (4). The complication in understanding CHD/DDH is beneficial, as it allows for more observation of cases and furthers the discovery and understanding of the phenomena behind it and within it.
World Wide Web Sources
1)Total Hip Replacement in Congenital Hip Dysplasia, useful general CHD information resource, also delving into the subject of the treatment of hip replacement.
2)Congenital Hip Dysplasia, general information source, including good introduction of common treatments.
3)Developmental Dislocation of the Hip, comprehensive notes on occurrence statistics, risk factors, treatments, and links to sites that further explain tests and treatments.
4)What is Hip Dysplasia?, general information and brief history of the common Pavlik harness treatment.
5)Screening for Developmental Dysplasia of the Hip, comprehensive overview of CHD/DDH complete with good visual resources.
6)Ortolani's Test: for Congenital Hip Dislocation, simple and understandable description of the Ortolani test.
7)Barlow's Test, simple and understandable description of the Barlow test.
The Biology of Dreams Name: Heidi Adle Date: 2002-11-11 18:03:31 Link to this Comment: 3675 |
"Just as dreams are unreal in
comparison with the things seen in waking life, even
so the things seen in waking life in this world are unreal in
comparison with the thought-world, which alone in truly real."- Hermes
Since the beginning of their existence, heterotrophic organisms have been defined by the need for sleep. Humans accept it (more or less willingly) when they are infants and embrace every opportunity for it as college students and adults. It does not take a lot of psychological or biological background to tell that it is critical to human life. Our bodies simply stop functioning after a long period of time without it and the more we get the better we feel. But what if sleep is not only necessary for the body but the mind as well?
This is the origin of the dream. If one studies the fundamentals of biology she is sure to learn that nothing exists if it is unnecessary for survival because it would have regressed over the course of billions of years. What then is the importance of sleep to the human mind? One might think that sleep is the same as being unconscious but people take sleeping pills to knock themselves out and wonder why they still feel horrible or even worse the next morning. In fact, sleep is full of mental activity. During sleep muscles tense; blood pressure, pulse, and temperature rise; and various senses are alert (4). Random thoughts occur throughout the night, sometimes even taking on some scheme. This phenomenon is called a dream.
What is a dream? It would be pretentious of anyone to assume that modern psychology or biology have grasped all the complexities of dreams. Yet, especially in the past two centuries, many theories stand and observations have been made. There are at least three indicators that someone is dreaming.
The first indicator is called rapid eye movement (REM) sleep. As the name indicates, the eyes of the sleeper move back and forth at rapid speed during her sleep. If one wakes a sleeper who is exhibiting this rapid eye movement, she is sure to tell of the vivid dream(s) she just experienced (4).
The second indicator of dreaming has to do with the EEG (electroencephalogram). If one takes a closer look at a sleeper's brain wave pattern in REM sleep, there are striking similarities to the pattern seen while awake (7). It consists of desynchronized minimal waves in both cases (3).
The third indicator that someone is dreaming is paralysis. In fact, paralysis is thought to protect dreamers from acting on their dreams. This paralysis is due to certain neurons in the frontal lobes of the brain. The activity of the brain during this stage of sleep begins in a structure called the pons, which is located in the brain-stem. The pons sends messages to shut off the neurons in the spinal cord, which results in an almost full body paralysis (2).
The first REM session occurs about 90 minutes after falling asleep, and subsequent sessions come at roughly 90-minute intervals after that. Depending on how long one sleeps, she can have between four and six REM sessions each night (2). The first session is very short, no longer than five minutes. Each succeeding REM session gets longer, and the average person's longest dream can be up to thirty minutes long. (1).
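Assuming the 90-minute figures above, it is easy to see where the four-to-six-sessions-per-night range comes from. The short Python sketch below is only an illustration of that arithmetic (the onset time, interval, and example sleep durations are simplified assumptions, not clinical data):

# Rough estimate of REM sessions per night, using the 90-minute cycle described above.
def rem_sessions(hours_of_sleep, first_onset_min=90, interval_min=90):
    """Count REM sessions, assuming the first begins about 90 minutes after
    falling asleep and later ones follow at roughly 90-minute intervals."""
    total_min = hours_of_sleep * 60
    if total_min < first_onset_min:
        return 0
    return 1 + int((total_min - first_onset_min) // interval_min)

for hours in (6, 7.5, 9):
    print(f"{hours} hours of sleep -> about {rem_sessions(hours)} REM sessions")
# Prints roughly 4, 5, and 6 sessions, matching the range cited above.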
Psychology made some early advancements in the subject of dreams dating back to the Austrian neurologist who developed psychoanalysis: Sigmund Freud. His theories have oftentimes been taught as the truth, which is always a problem. His successors today believe that dreams serve as mental relief and problem solving. In the past decades, however, biologists have made considerable advances in the field of dreams. They state that the most important function of dream sleep is the growth of the brain. This conclusion stems from the observation that infants dream four times as much as adults. Neurobiologists have discovered that "neurons (brain cells) sprout new axons and dendrites (nerve fibers) during dream sleep. This brain growth gives us a stronger network of brain circuits which allow us to have greater intellect...Although many brain chemicals are involved in sleep and dreaming, two very important ones are the neurotransmitter serotonin and a brain hormone called melatonin. Both are produced by the pineal gland of the brain" (1). Melatonin is meant to calm the brain and induce sleep. Serotonin, on the other hand, triggers the brain to dream.
Since I wrote about the effect of alcohol on the fetus in my last paper, I thought it might be interesting to consider the effect it has on sleep and dreams. The neurotransmitter serotonin, as I hope I've made quite clear, is crucial to the dreaming process. Alcohol causes the level of serotonin in the brain to drop considerably, which results in what appears to be dreamless sleep. This is sleep without REM activity. On the other hand, when alcoholics try to withdraw, many experience delirium tremens (DTs) (2). These nights are characterized by shaking, sweating and hallucinations. Many biologists believe that the mind takes the opportunity of the absence of alcohol and overproduces serotonin, which results in the hallucinations.
It is important to understand that not sleeping can be harmful on at least two levels and can lead to hallucinations while one is awake. Generally one's body will compensate for lack of dream sleep one night by dreaming more the following night until the normal quota is reached. Unless you are an alcoholic who does not sleep, in which case you will quite literally "lose your mind" (2).
As with any field of science, there is a fair amount of controversy surrounding dreams, some of which has been presented already. Furthermore, as with any field of scientific research, it is safe to assume that the controversy will never end. There are many theories, but there is one in particular I would like to concentrate on. David Maurice, Ph.D., is a professor of ocular physiology in the Department of Ophthalmology at Columbia-Presbyterian Medical Center. He is one of the many who question the widespread belief that REM sleep exists mainly to process memories of the previous day. Maurice hypothesizes that "while asleep humans experience REM to supply much-needed oxygen to the cornea of the eye...[He] suggests that the aqueous humor (the clear watery liquid in the anterior chamber just behind the cornea) needs to be 'stirred' to bring oxygen to the cornea." In addition he states that "[w]ithout REM our corneas would starve and suffocate while we are asleep with our eyes closed" (5).
The reason for Maurice's engagement in this field of study began some years back when he started observing animals. He says: "I wondered why animals born with sealed eyelids needed REM or why fetuses in the womb experience a great amount of REM" (5).
David Maurice then developed his hypothesis after learning about a young man who had an accident and whose eyes had been immobilized as a result. His corneas had become laced with blood vessels to supply them with oxygen. We know that when the eyes are shut, oxygen can reach the cornea from the iris solely by way of the stagnant aqueous humor. Maurice did the calculations and found that the oxygen supplied under these conditions would be insufficient. This ultimately formed his hypothesis that REM must somehow bring oxygen to the cornea.
As I indicated in the beginning, the functions of dreams are still unclear and heavily under debate. Dreaming may play a role in the restoration of the brain's ability to cope with tasks such as focused attention, memory, and learning. Dreaming may "just" be a window to hidden feelings. Almost everything is possible and we may never know. We do know that "You have, within yourself, an ability to make yourself experiences no one else has ever had. And hence to see things no one has ever seen and learn things no one has ever learned" (6). Maybe it is as important to individualize dreams as it is to analyze the general population's dreams. We might just be able to learn about ourselves and, in the process, learn about others as well, which is, after all, the beauty of science. Whether the thought is soothing or uncomfortable, as you continue to sleep and dream you must know that the controversy over the biology of dreams is one that won't ever go to sleep.
How Does Homeopathy Work? Name: Chelsea W. Date: 2002-11-11 18:59:55 Link to this Comment: 3677 |
The litany of side-effects warned against for even the most mundane of mainstream medications often seems enough to drive one to explore alternatives. Homeopathy is one such alternative. First systemized in the late 1700's by Samuel Hahnemann, M.D.(1), homeopathy is a form of medicine based on stimulating the body's own immune responses, while minimizing the risk of exacting any harm in the process (2).
Although homeopathy is now often regarded as something of a "fringe" form of medicine in the United States, this was not always historically so (3). In fact, in 1900, homeopathic physicians made up 15% of all physicians (3). However, homeopathy has since been subjected to attempts by the American Medical Association and other practitioners of conventional medicine to marginalize its practice - due largely to concerns over the criticism of mainstream pharmaceuticals inherent in homeopathy and the economic threat which homeopathy was seen to pose to conventional medicine (4). Its popularity is nonetheless growing domestically (3). And, abroad, homeopathic medicines are quite popular and widely accepted: 39% of French physicians have prescribed them (and 40% of the French public has used them), 42% of British physicians refer patients to homeopaths, and 45% of Dutch physicians consider them to be effective (3).
Specifics on the Workings of Homeopathy
The Law of Similars
Also known as "like cures like," the Law of Similars is a central tenet of homeopathic medicine (5). This law refers essentially to the premise that a substance which in overdose will cause certain symptoms can, in small and appropriate doses, stimulate the body's immune system to help cure the disease marked by these symptoms (2). It is also worth noting that this particular aspect of homeopathic theory (though not some other aspects of it) is made use of in conventional medicine as well (2). Vaccinations, allergy medications containing small doses of allergens, and radiation as cancer treatment (given that radiation in large doses can cause cancer) are all examples of such instances (2). It remains somewhat unclear why this type of "like cures like" is effective (although there is much evidence that suggests that it is so) (6). One study found specifically that a homeopathic remedy known as Silicea stimulated parts of the immune system known as microphages (which fulfill the role of swallowing up foreign substances) (6).
Symptoms as Manifestations of the Body's Attempts to Heal Itself
Another important idea in homeopathy is the recognition that, biologically speaking, symptoms of a disease are not the disease itself but rather manifestations of the body's attempt to heal itself (2). Thus, suppressing symptoms may not be the most effective means of treating an illness (the recent realization that suppressing fever may not always be the best course of action is one example of this premise) (2). Homeopathy, instead, attempts to work with the body's natural immune system rather than to suppress it (6).
Individualization
Homeopathic medicine also places a high value on the importance of individualization of treatment (2). Although some ailments may have similar general symptoms, the specifics of these conditions often differ and may result from different causes (2). So, it is important to recognize this and treat illnesses in as individualized a fashion as possible. In fact, homeopaths may often inquire into personality traits or seemingly less related common complaints of patients in order to get an overall sense of the workings of the patient's body, all of which is, after all, inter-connected and inter-dependent (2).
Homeopathic Medicine and Its Relationship to Conventional Medicine
A number of clinical studies have provided evidence to support claims of the effectiveness of homeopathy (several such studies are discussed in an excerpt from Consumer's Guide to Homeopathy, see note (7)). Homeopathic remedies also offer the substantial benefit over conventional medicine of being extremely safe (8). However, although most homeopaths object to certain facets of conventional medicine (such as its tendency to work simply to eradicate disease symptoms rather than focusing on the underlying disease), most will acknowledge that there are instances in which other methods should be used (8). For example, some ailments may be best treated through changes in lifestyle choices, while others may require surgery (something which homeopathic remedies may help to prevent in some cases, but not all) (8).
Homeopathy Today
With its rich history, homeopathy remains immensely relevant and useful today, even providing medicines effective in treating post-traumatic stress disorder in this time of terrorist threats (9). And, over the past ten years, there has been a 25-50% annual increase in the domestic sale of homeopathic medicine (3).
The relationship between homeopathic medicine and the conventional medical community also raises interesting questions about science. If science involves endeavoring to be "less wrong," so to speak, might there be an added responsibility, with specific respect to the field of medicine, to minimize the risk of adverse effects when one is wrong - to, as homeopathy does, attempt first to do no harm?
1)Homeopathy Timeline, from the Whole Health Now website
2)A Modern Understanding of Homeopathic Medicine, from the Homeopathic Educational Services website
3)Ten Most Frequently Asked Questions on Homeopathic Medicine, an article by Dana Ullman, M.P.H., from the Homeopathic Educational Services website
4)A Condensed History of Homeopathy, from the Homeopathic Educational Services website
5)What is Homeopathy?, from the National Center for Homeopathy website
6)Homeopathic Medicine and the Immune System, from the Homeopathic Educational Services website
7)Scientific Evidence for Homeopathic Medicine, an excerpt from Consumer's Guide to Homeopathy, on the GaryNull.com website
8)The Limitations and Risks of Homeopathic Medicine, from the Homeopathic Educational Services website
9)Homeopathy Responding to Crisis, from the website of the National Center for Homeopathy
Iraq's Biological Weapons Name: Kate Amlin Date: 2002-11-11 19:42:25 Link to this Comment: 3679 |
As the government's desire to attack Iraq becomes more of a frightening reality each day, many questions remain unanswered. Is Iraq really a "threat"? Does Iraq really have weapons of mass destruction (WMDs)? More specifically, does Iraq have biological weapons (BWs)? Should the United States be worried about an attack by Iraqi biological weapons? The answers in the status quo are rather murky, but there is concrete evidence that Iraq used to have a WMD arsenal. After Iraq invaded Kuwait and the Gulf War ensued, the UN Security Council passed Resolution 687, ensuring Iraq's full cooperation with UN weapons inspectors to guarantee that all of Iraq's WMDs would be destroyed (1). This resolution never included military enforcement (2); instead it was contingent on economic sanctions. From 1991 to 1998, UN inspectors scoured the country for WMDs to destroy. Although UNSCOM (the UN weapons inspection team) maintains that it destroyed almost all of Iraq's WMDs, even the inspectors have admitted that Iraq was covertly hiding a large supply of BWs (1). The UN found that Iraq had horrendously large amounts of ricin, a biological weapon derived from castor beans that is deadly and has no antidote (3). Iraq was also found to be in possession of a multitude of ballistic missiles fitted with carrying devices to disperse chemical and biological weapons (CBWs) (4). U.S. officials thwarted the Iraqis in their attempt to smuggle 34 U.S. military helicopters transformed to include weapons systems that would deliver CBWs (3). Even the Iraqis themselves admitted that their country was fostering an active biological weapons program after Saddam Hussein's son-in-law, Hussein Kamil, defected to Jordan in 1995 (3). Kamil had been in charge of Iraq's WMD program and acknowledged that Iraq had been hiding many of its biological agents from UNSCOM, including a whopping 2,265 gallons of anthrax (3). The UN weapons inspectors were kicked out of Iraq in 1998 (5). At that time, the Western world believed that UNSCOM had successfully destroyed the vast majority of Iraq's supply of weapons of mass destruction. But many things could have happened in the course of the following four years.
Many political scientists assert that Saddam does not have WMDs, and in particular biological weapons, and that George W. Bush is simply looking for a reason to invade the country. Stephen Zunes, chair of the Peace and Justice program at the University of San Francisco, eloquently illustrated this point in an article for the think tank Foreign Policy in Focus: "Despite speculation-particularly by those who seek an excuse to invade Iraq-of possible ongoing Iraqi efforts to procure weapons of mass destruction, no one has been able to put forward clear evidence that the Iraqis are actually doing so, though they have certainly done so in the past. The dilemma the international community has faced since inspectors withdrew from Iraq in late 1998 is that no one knows what, if anything, the Iraqis are currently doing" (1). The strength of the Iraqi military has been severely diminished since the early 1990s due to casualties during the Gulf War and the effects of years of economic sanctions (14). In the status quo, the military is probably too weak to produce any WMDs. Even if Iraq has retained stockpiles of BWs, they would most likely be useless. If the Iraqis tried to disperse biological weapons with the SCUD missile technology that they had during the Gulf War, 90 percent of the biological agents would be destroyed when the bomb detonated (4), and with such a feeble military, new technology would be difficult to manufacture. Iraq would have an extremely difficult time dispersing anything from their residual BW arsenal, as Stephen Zunes explains: "[T]here are serious questions as to whether the alleged biological agents could be dispersed successfully in a manner that could harm troops or a civilian population, given the rather complicated technology required. For example, a vial of biological weapons on the tip of a missile would almost certainly either be destroyed on impact or dispersed harmlessly. To become lethal, highly concentrated amounts of anthrax spores must be inhaled and then left untreated by antibiotics until the infection is too far advanced. Similarly, the prevailing winds would have to be calculated, no rain could fall, the spray nozzles could not clog, the population would need to be unvaccinated, and everyone would need to stay around the area targeted for attack" (1). To be effective, biological weapons must be scattered under perfect conditions, conditions that would be extremely hard for the Iraqis to replicate (4). Western nations also fear that Iraq will give BWs to terrorists, although this scenario is highly unlikely even if Iraq does have a stockpile of biological weapons. Iraq has no incentive to give WMDs to terrorists, since the international community would severely punish it for such an action, and it has probably not done so for over ten years (6). Although some think otherwise, Iraq has not targeted the United States when it has sponsored acts of terrorism (7). The allegations that Iraq is harboring members of Al Qaeda are false since all such members have been found in Kurdish areas, spheres that are beyond Iraqi control (7). One of the most convincing arguments as to why Iraq does not have BW capabilities is that Iraq has recently agreed to give new UN weapons inspectors unobstructed access to all weapons facilities and some presidential palaces in order to prove that Iraq does not have weapons of mass destruction (8).
Since no western powers have been allowed in Iraq to collect evidence in the last four years, there is simply no credible proof (2) that Iraq either has biological weapons or intends to use them for nefarious purposes.
Conversely, empirical evidence leads some to believe that Iraq has maintained a supply of weapons of mass destruction (4), especially since "[t]he inspectors withdrew entirely from Iraq in 1998, and Hussein has refused to let them back in, giving his regime four years to find a better hiding place for his weapons" (5). Since UNSCOM never accounted for 100 percent of Iraq's biological weapons and Iraq covertly added to its stockpile while the inspectors were in the country, it is intuitive to assume that Iraq has added to its arsenal over the last few years (5). Since 1998, Iraq has purchased dual-use substances under the guise of purported civilian purposes, which could be used to produce biological weapons (9). BWs are easy to hide and do not take much space to make (1). Additionally, "one of the most frightening things about BWs production is the mobility of operations" (1). Therefore, Saddam could have easily hidden and increased a biological weapons arsenal over the last four years. Also, Saddam could use profits obtained from smuggling oil during the last few years to increase his production of WMDs, in order to compensate for his weakened military program (10). The possibility that Iraq has retained BWs is particularly terrifying due to the horrendous amount of destruction that biological weapons cause. Laura Mylroie, research associate of the Foreign Policy Research Institute, Philadelphia PA, gives a particularly grim assessment: "Since Kamil's defection, Iraq has acknowledged producing 2,265 gallons of anthrax. Anthrax is extraordinarily lethal. Inhalation of just one-ninth of a millionth of a gram is fatal in most instances. Iraq's stockpile could kill 'billions' of people if properly disseminated and dispersed.[5] Anthrax, unlike some other biological agents, has an extremely long shelf life. Although Baghdad claims to have destroyed its anthrax stockpile, it can produce no documents to support that assertion, while UNSCOM interviews of Iraqi personnel allegedly involved in the purported destruction produced contradictory accounts. Thus, no reasonable person credits the claim" (3). A rough check of the arithmetic behind that "billions" figure is sketched after this paragraph. President Bush asserts that Iraq has developed weapons systems capable of carrying and dispersing CBWs (7). Although his claim is unsubstantiated, multiple foreign policy think tanks have found evidence that supports Bush's fear. Iraq definitely has a fleet of SCUD ballistic missiles that could be fitted with chemical and biological agents. These SCUDs can carry a 500-kg load of chemical or biological agents and disperse it over a range of 650 km (4). Even more frightening is the possibility that Iraq has turned some or all of its 78 Czech L-29 trainer airplanes into unmanned weapons carriers. These planes, which are controlled remotely, could be used to deliver extremely large quantities of biological agents over extremely long distances (4). The UN found that Iraq had indeed re-wired the L-29s for this purpose, Great Britain discovered a large number of L-29s that had been turned into carrier systems for BWs during Operation Desert Fox, and the CIA reported that Iraq tested the L-29s for their effectiveness during the year 2000 (4). President Bush also worries that Iraq is actively selling weapons of mass destruction to terrorists (9). Iraq has sponsored both the Palestine Liberation Front and Mujahedin-E Khalq, two terrorist groups that are anti-Israel (9).
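To make the scale of that "billions" claim concrete, here is a back-of-the-envelope check, a sketch only: it assumes the declared 2,265 gallons were a liquid slurry of density roughly 1 kg per liter (a detail not given in the sources) and that every nominal lethal dose could actually be delivered, which the essay's own argument disputes.

```python
# Back-of-the-envelope check of the "billions" claim (illustrative only).
# Assumptions not taken from the sources: the stockpile is a liquid slurry
# of density ~1 kg/L, and every nominal dose is delivered perfectly.
GALLONS_DECLARED = 2265
LITERS_PER_GALLON = 3.785
GRAMS_PER_LITER = 1000             # assumed slurry density of ~1 kg/L
LETHAL_DOSE_GRAMS = 1 / 9_000_000  # "one-ninth of a millionth of a gram"

total_grams = GALLONS_DECLARED * LITERS_PER_GALLON * GRAMS_PER_LITER
nominal_doses = total_grams / LETHAL_DOSE_GRAMS

print(f"Declared stockpile: {total_grams:.1e} g")    # ~8.6e6 g
print(f"Nominal lethal doses: {nominal_doses:.1e}")  # ~7.7e13
# The raw arithmetic easily exceeds "billions", which is exactly why the
# quotation hinges on "properly disseminated and dispersed": the delivery
# losses Zunes describes wipe out most of that number in practice.
```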
Bush also worries that Iraq is in cahoots with Al Qaeda, and that the two will combine to increase the potency of an attack against the United States (7). All in all, Iraq probably did retain some of its biological weapon capability. According to President George W. Bush: "UN inspectors believe Iraq has produced two to four times the amount of biological agents it declared and has failed to account for more than three metric tons of material that could be used to produce biological weapons. Right now, Iraq is expanding and improving facilities that were used for the production of biological weapons" (11).
Although neither conclusion can be fully substantiated, the empirical evidence and the feasibility of a BW program indicate that Iraq probably does have a biological weapons program. However, invading Iraq is definitely not the most desirable way to destroy Saddam's WMD capabilities. No country wants to blindly trust Saddam's claim that Iraq does not have any weapons of mass destruction. But weapons inspections and the continuation of economic sanctions would be a feasible way to control Iraq's WMD program (7). Specifically, Saddam would have great difficulty hiding weapons if he does indeed give weapons inspectors unhindered access to his country (12), as he has said over the last month that he will.
Without absolute, concrete evidence documenting Iraq's possession of WMDs (1), attacking Iraq cannot be justified. Even if Iraq does have weapons, they are almost impossible to disperse (4) and, most importantly, Saddam will not use them. The Gulf War proves that Saddam is rational and will not use weapons of mass destruction against the West (13). Saddam did not use WMDs during the Gulf War because he was deterred by the threat of U.S. nuclear weapons (6). There is no reason to assume that Saddam would act differently a decade later. Saddam knows that if he used WMDs it would be the end of his regime, and ultimately his life, because the Western world (particularly the United States) would annihilate him (1). Therefore, although Iraq probably does have BWs, and weapons of mass destruction in general, war with Iraq cannot be justified at this time since Iraq would not and probably could not use any weapons of mass destruction.
1) "The Case Against a War with Iraq", Stephen Zunes. Foreign Policy in Focus, AN U.S. Foreign Policy Think Tank. October 2002
2) "Bush's United Nations Speech Unconvincing" Stephen Zunes. Foreign Policy in Focus, AN U.S. Foreign Policy Think Tank. September 13, 2002
3) "Iraq's Weapons of Mass Destruction and the 1997 Gulf Crisis", Laura Mylroie. Meria, The Middle East Review of International Affairs. V.1, #4, December 1997
4) "Defending Against Iraqi Missiles", Staff Writer. A Strategic Comment from the International Institute for Strategic Studies. V.8, #8, October 2002
5) "Iraq's Had Time to Really Hide its Weapons Sites", John Parachini. RAND, an U.S. Think Tank, originally appeared in Newsday, September 19, 2002
6) "President Bush's Case For Attack On Iraq is Weak", Ivan Eland. The CATO Institute a Libertarian Think Tank. October 7, 2002
7) "President Bush Fails to Make His Case" Stephen Zunes. Foreign Policy in Focus, AN U.S. Foreign Policy Think Tank. October 8, 2002
8) "Iraq: 'No Blocks to Inspections", Staff Writer. CNN, 0ctober 12, 2002
9) "Axis of Evil: Threat Or Chimera?", Charled Pena. The CATO Institute, a Libertarian Think Tank, Summer 2002
10) "Iraq: The Case for Invasion", Interview of Kenneth Pollack. The Washington Post. October 22, 2002
11) "President Bush's Address to the United Nations", George W. Bush. CNN, September 12, 2002
12) "Get Ready for a Nasty War in Iraq", Daniel Byman. RAND, originally published in The International Herald Tribune, March 11, 2002.
13) "Why Attack Iraq?", Ivan Eland. The CATO Institute, a Libertarian Think Tank, September 10, 2002
14) "Top Ten Reasons Why Not to 'Do' Iraq", Ivan Eland. The CATO Institute, a Libertarian Think Tank, August 19, 2002
Body Odor-An Unpleasant Encounter Name: Melissa A. Date: 2002-11-11 21:10:09 Link to this Comment: 3681 |
"What is that smell?" you ask.
"It is absolutely disgusting," you reply to yourself as you continue walking along. Then you realize that this smell just keeps on following you, from classroom to lunchroom to dorm room, even in the courtyard. Then it hits you that garbage has not been following you everywhere but you are the cause of that disgusting smell; you have body odor. What is this phenomenon of body odor? According to John Riddle, Body odor is the term used for any unpleasant smell associated with the body(1).
Most of us are concerned about how we look and, especially, how we smell, so body odor, which can be a potentially fatal blow to a person's social life, is not a welcome addition to one's ingredients for success. It is this human fear of being excluded that led to the invention of the term "body odor." In the 1910s and 1920s, advertisers highlighted people's discontent with the things around them and with themselves in order to encourage them to buy their products. A group of advertising men used the term B.O. to mean body odor in a women's deodorant advertisement for their product, Odo-Ro-No. It played upon women's sentiments that beauty was important to achieving their main goal in life: a husband.
Listerine, which today has mundane advertisements with bottles of Listerine and a voice talking about the results of laboratory research that show Listerine's ability to combat gingivitis, cavities and so on, also used this approach. Listerine had an advertisement that showed "pathetic Edna," who was approaching her "tragic" thirtieth birthday and was always "a bridesmaid and never a bride," apparently because she suffered from Halitosis, or bad breath (of course it could not have been personality problems that were hindering Edna's romantic progress!). As a result of this ad, Listerine's sales went from $100,000 a year in 1921 to more than $4 million in 1927. Body odor is a major concern for most people, and even those who do not suffer from it are concerned about preventing body odor because of its negative social consequences.(2)
People are very sensitive about how they smell. Humans do not appreciate the power of a bad smell as much as skunks appreciate theirs. The striped skunk, known in the scientific community as Mephitis mephitis, accurately shoots a narrow stream of yellow fluid, butyl mercaptan, up to 10 feet at a threat. If the fluid hits the eyes of the threat it may cause temporary blindness. Even if the skunk misses, which is rare, the musk will cause nausea, gagging and general discomfort. (Perhaps next time you are at a party and you receive some unwelcome attention, you should raise those armpits or let out a breath of air at your unsuspecting predator.) Most people, however, try to do quite the opposite and purchase expensive perfumes to mask odors and create a sensual smell that will attract the opposite sex. They spend a lot of money buying perfumes like Object of Desire by Bvlgari, and they do not realize that the fluid that skunks emit is commercially used as a base for perfumes because of its clinging nature.(3) This makes me wonder whether it is only by having a bad smell that we can get a good smell.
So what exactly causes us to have that bad smell? Most of the causes are related to lifestyle choices. If you use drugs, toxins or herbs such as alcohol and cigarettes, your body will smell. Also, if you eat certain foods such as garlic and raw onions you will have unpleasant breath. You can also develop body odor by simply sweating excessively or practicing poor hygiene. A couple of other causes of body odor include tooth or oral conditions such as periodontal disease and gingivitis. There can also be inborn errors of metabolism, for example aminoaciduria. Most of these problems are a result of a decision that the sufferer has made about how he chooses to lead his life. If he chooses to be a chain smoker he will smell like a cigarette. If he chooses not to shower or wash his clothes then he will have a pungent smell. If he chooses not to practice proper dental hygiene then he will have Halitosis. However, if he sweats excessively because he has a fear of social situations then his cause of body odor cannot be as easily rectified. In addition, people who suffer from aminoaciduria also cannot easily change their lifestyle to rectify their body odor because aminoaciduria results from an enzyme deficiency. We should not judge or marginalize people because of their smell because the cause of it may go beyond poor hygiene.(1)
Almost everyone has received a compliment about the perfume that he or she is wearing. This shows that people react to scents and that a pleasant scent encourages a favorable perception of that person. New research has shown that some individuals are highly sensitive to smelling a component of body odor which is called androstenone. Furthermore, if the person can easily smell androstenone then he will decide whether or not he likes the person based on the smell. What is androstenone? It is a human pheromone, a chemical attractant that is found in body secretions like perspiration. Men release large quantities of androstenone while women emit small amounts. So men are more likely to be judged by their smell than women. According to the study, fifty percent of people cannot smell androstenone at all, and half of the rest can only catch a whiff and enjoy the scent. Those who can smell androstenone clearly, on the other hand, do not like the smell and compare it to urine or perspiration. The study went on to show that there was a correlation between the ability to smell androstenone and the androstenone-smeller's judgment of the person. In other words, if someone can smell androstenone on someone else and finds the smell unpleasant then he will dislike the person.(4)
Clearly, there is a lot at risk if one has body odor: since man is a gregarious animal, body odor can make him unable to maintain or even start relationships. We have realized this, so there are a number of ways to eliminate body odor. One of the most rudimentary ways to get rid of body odor is to wash with both soap and water, especially in the groin area and armpits, which are more likely to smell. The best soap to use is a deodorant soap, which impedes the return of bacteria. Showering, as well as washing your clothing regularly, will help to prevent body odor. In addition, we should wear natural fabrics like cotton, which absorb perspiration better than synthetic materials like polyester. Athletic apparel makers like Nike and Adidas are adopting this idea in their clothing design by creating materials that cause sweat to evaporate faster.
You should also use commercial deodorants, which mask underarm odor, or antiperspirants, which reduce the amount of perspiration. If these fail, then you should turn to France, the land of fine perfumes and "Le Crystal Nature," a chunk of mineral salts that helps to keep bacteria under control without irritating the skin. A more serious approach to fighting body odor is Drionic, an electronic device that plugs up overactive sweat ducts and keeps them plugged for up to six weeks.(5) There are many ways to avoid having body odor. The easiest way to find out what is available to you is to take a stroll in your local Eckerd, CVS or Riteaid.
Body odor is a major concern for human beings. It is one concern that affects men and women to almost the same extent. We are concerned about our smell because people judge how we take care of ourselves by our smells and use this information to decide whether we are worthy of friendship. We need to pay attention to how we smell not only because of social interactions but also because odors from our body may alert us to a medical problem like a urinary tract infection or periodontal disease. It must be noted, however, that much of the hype about body odor comes from marketing consultants who need to sell their companies' products and play on our insecurity. Try to avoid being caught up in this web of commercialism while at the same time taking good care of your body.
1) Body Odor. It gives basic information about Body Odor.
2) It provides information on the less renowned tidbits of history.
3) It provides information about the striped skunk.
4) It provides information that otherwise would not get a lot of attention.
5) It provides online information about problems that affect teenagers.
Think before you flush or brush Name: Sarah Tan Date: 2002-11-11 21:40:11 Link to this Comment: 3683 |
One of my friends from high school has made a habit of putting toilet seat lids down before she flushes. She started doing this about four years ago when she heard that when toilets are flushed, water droplets are expelled from the toilet bowl into the air, and when they land, other areas of the bathroom get "contaminated" by toilet water. That always amused me, but when I went over to her house, I humored her and followed this personal rule of hers. However, I didn't know (and chances are, she didn't know) just how justified she was in worrying about what is known as the "aerosol effect" in toilets. My discovery that there is actually a technical term for this phenomenon was the first indication that there might be something scientifically legitimate to it. It seems to have first been brought to light by University of Arizona environmental microbiologist Charles Gerba when he published a scientific article in 1975 describing bacterial and viral aerosols due to toilet flushing (2). He conducted tests by placing pieces of gauze in different locations around the bathroom and measuring the bacterial and viral levels on them after a toilet flush, and his results are more than just a little disturbing.
First is the confirmation of the existence of the aerosol effect, even though it is largely unrecognized. "Droplets are going all over the place; it's like the Fourth of July," said Gerba. "One way to see this is to put a dye in the toilet, flush it, and then hold a piece of paper over it" (8). Indeed, Gerba's studies have shown that the water droplets in an invisible cloud travel six to eight feet out and up, so the areas of the bathroom not directly adjacent to the toilet are still contaminated. Walls are obviously affected, and in public or communal bathrooms, the partitions between stalls are definitely coated in the spray mist from the toilet (1). Also, toilet paper will be cleanest when it is enclosed in a plastic or metal casing; after all, it's subject to the same droplets splattering on it, and its proximity to the toilet bowl makes the potential for contamination obvious. The ceiling is also contaminated and is in fact a potential problem site because it is often overlooked in the cleaning process. Bacteria cling to ceilings and thrive in the humid environment there; if the situation is left untreated for months or years (as is often the case), odors remain in restrooms that seem to be otherwise thoroughly cleaned (1). The bacterial mist has also been shown to stay in the air for at least two hours after each flush, thus maximizing its chance to float around and spread (2). "The greatest aerosol dispersal occurs not during the initial moments of the flush, but rather once most of the water has already left the bowl," according to Philip Tierno, MD, director of clinical microbiology and diagnostic immunology at New York University Medical Center and Mt. Sinai Medical Center. He therefore advises leaving immediately after flushing so as not to have the microscopic, airborne mist land on you (4). Worse still is the possibility of getting these airborne particles in the lungs by inhaling them, from which one could easily contract a cough or cold (6).
Obviously, the idea of toilet water being unknowingly distributed around the bathroom is less than appealing, but a study of this sort calls for looking in detail at precisely what microscopic organisms we're dealing with here, even if we don't really want to know. Put rather graphically, it can be summed up as the F3 force: Fecal Fountain Factor, compounded by the favorable temperatures for bacterial propagation in room-temperature toilet water (3). From a more scientific viewpoint, streptococcus, staphylococcus, E. coli and shigella bacteria, hepatitis A virus and the common cold virus are all common inhabitants of public bathrooms, but just because they're all over the place doesn't mean we necessarily get sick. After all, humans carry disease-causing organisms on our bodies at all times, but with healthy immune systems, the quantities in which these organisms exist are not enough to affect us, particularly with a good hand-washing after every restroom visit (4). This raises the question, however, of the number of people who actually wash their hands after going to the toilet, and more importantly, the number who wash their hands effectively. Simply rinsing one's hands under running water for a few seconds without soap, as some people do, is not effective at all. The way to ensure maximum standards of hygiene is to lather your palms, the back of your hands, in between fingers, and under fingernails for 20-30 seconds with soap and hot water; the friction will help remove the bathroom bacteria (6).
Toilet seats have actually been determined to be the least infected place in the bathroom because the environment is too dry to support a large bacterial population (7). In accordance with that theory, the underside of the seat has a higher than average microbial population. The place with the highest concentration of microbial colonies in restrooms is, surprisingly, the sink, due in part to accumulations of water where these organisms breed freely after landing from their aerial journey. While toilets are obviously not sterile environments, they tend not to be as bad as people think because they receive more attention and are cleaned more often. "If an alien came from space and studied the bacterial counts, he probably would conclude he should wash his hands in your toilet and crap in your sink," Gerba said (2). The alien would almost certainly not put your toothbrush in his mouth because, with its traditional, uncovered spot in the bathroom, it is one of the hotspots for fecal bacteria and germs spewed into the air by the aerosol effect (5). Understandably, the toothbrush with toilet water droplets on it is one of the most retold horror stories to emerge from Gerba's report.
There are also greater implications from the study of the aerosol effect than the simple grossness factor. Most obviously, bathrooms should be cleaned even more meticulously than before, with emphasis not just on and around the toilet, but equal emphasis on all areas of the bathroom because all areas are equally affected by the spray. Using the right cleaners is important because all-purpose cleaning solutions are not necessarily antibacterial, whereas most cleaners made specifically for restrooms are referred to as disinfectants or germicidal cleaners (1). Given that the sink area teems with bacteria, one must now be more careful about washing hands properly after walking into the bathroom for any non-toilet-related purposes like washing your face and brushing your teeth. Using a hair dryer can potentially be problematic in regard to bacteria counts because the effect would be largely the same as hot-air hand dryers, which actually increase the bacteria on hands by 162 percent, as opposed to paper towels, which decrease them by 29 percent (7). If you're still not convinced that bacteria exist in any significant quantities on your hands, consider that the kitchen sink actually harbors the most fecal matter in the average home, carried there by unwashed hands after using the bathroom (5). A tablespoon of bleach in a cup of warm water on the offending sink will fix the situation... for the day.
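For the curious, the dilution in that last tip is easy to work out. The sketch below is only illustrative: it assumes US kitchen measures (about 15 mL per tablespoon and 240 mL per cup) and household bleach at roughly 5% sodium hypochlorite, figures that are not from the cited sources.

```python
# Rough arithmetic for "a tablespoon of bleach in a cup of warm water".
# Assumed values (not from the sources): US kitchen measures and household
# bleach containing ~5% sodium hypochlorite.
TABLESPOON_ML = 15
CUP_ML = 240
BLEACH_NAOCL_FRACTION = 0.05

bleach_fraction = TABLESPOON_ML / (TABLESPOON_ML + CUP_ML)
final_naocl = bleach_fraction * BLEACH_NAOCL_FRACTION

print(f"Bleach makes up about {bleach_fraction:.1%} of the mix")  # ~5.9%
print(f"Approximate NaOCl concentration: {final_naocl:.2%}")      # ~0.29%
```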
To limit the scope of the aerosol effect, the simplest method is to close the lid on the toilet every time before flushing (5). This would also provide the peace of mind that while you are washing your hands for 30 seconds, microscopic, bacteria-laden water droplets will not be descending upon your person. Unfortunately, most public toilets, including the ones in Bryn Mawr's dorms, don't even have lids for that option. Besides, given the large number of people who have used the toilet before you, it probably wouldn't make much difference. After washing your hands, use a paper towel to turn off the faucet and to open the door to leave, in order to avoid being recontaminated (4). And today, get a new toothbrush and always, always keep it in the medicine cabinet or some other enclosed place after use (2).
(1) Janitorial Resource Center - Dr Klean.
(2) A Straight Dope Classic - Cecil's been asked.
(3) Car Talk's mailbag - People are talking back.
(4) WebMD - What can you catch from restrooms?
(5) Harvard Gazette book review - Overkill, by Kimberly Thompson
(6) When in doubt, Ask Men - What can you catch from (men's) restrooms?
(7) Sean Blair: Writer. Researcher. Editor. - Killer offices.
(8) The Atlantic Monthly - Something in the water.
Chocolate: Aphrodisiac or Euphemism? Name: Michele Do Date: 2002-11-11 22:11:07 Link to this Comment: 3685 |
"In most parts of the world chocolate is associated with romance, and not without with good reason. It was viewed as an aphrodisiac by the Aztec's who thought it invigorated men and made women less inhibited. So when it was first introduced to Europe, it was only natural that chocolate quickly became the ideal gift for a woman to receive from an admirer or a loved one, and of course, vice versa" (3).
What does chocolate have in common with lobster, crab legs, pine nuts, walnuts, alcohol, and Viagra? It has a reputation as being an aphrodisiac. Throughout history, there has been a pursuit of sexual success and fertility by various means, including foods and pharmaceuticals. The American Heritage College Dictionary defines aphrodisiac as "arousing or intensifying sexual desire...Something such as a drug or food, having such an effect" (5). According to the Food and Drug Administration, "an aphrodisiac is a food, drink, drug, scent, or device that, promoters claim, can arouse or increase sexual desire, or libido" (2). Myths and folklore have existed since the beginning of time asserting that specific goods, or aphrodisiacs, increase sexual capacity and stimulate desire. Aphrodisiacs are named after Aphrodite, the Greek goddess of sexual love and beauty; because she was said to have been born from the sea, many types of seafood have acquired this reputation. Similarly, chocolate's reputation as an aphrodisiac originated in both Mayan and Aztec cultures over 1500 years ago. Is chocolate really an aphrodisiac? How does it work? Does it produce different effects for men and women?
Chocolate is made from the cocoa bean, found in pods growing from the trunk and lower branches of the Cacao Tree; the earliest record of chocolate comes from the South American rainforests around the Amazon and Essequibo rivers. The Mayan civilization worshipped the Cacao Tree, for they believed it was divine in origin; thus its Latin name, Theobroma cacao, means "food of the gods," and "cacao is a Mayan word meaning 'God Food.' Cacao was later corrupted into the more familiar 'Cocoa' by Europeans" (3). Since emperors were considered divine, the Aztec emperor Montezuma drank fifty golden goblets of chocolate a day in order to enhance his sexual ability. Consequently, when the Spanish Conquistadors discovered chocolate and introduced it to Europe and the rest of the world, it continued to be associated with love (3).
Chocolate is a very complex food, and scientists have investigated it in order to unlock its secrets. When consumed, it has been observed to have effects on human behavior (3). Chocolate contains two particular substances called Phenylethylamine and Serotonin, both of which serve as mood lifters. "Both occur naturally in the human brain and are released by the brain into the nervous system when we are happy and also when we are experiencing feelings of love, passion and/or (dare I say it?) lust. This causes a rapid mood change, a rise in blood pressure, increasing the heart rate and inducing those feelings of well being, bordering on euphoria usually associated with being in love" (3).
When chocolate is consumed, it releases Phenylethylamine and Serotonin into the human system, producing the same arousing effects. Since eating chocolate gives an instant energy boost, increasing stamina, it is no wonder that its effects have given it a reputation as an aphrodisiac. Both Phenylethylamine and Serotonin are substances that can be mildly addictive, hence explaining the chocoholic. But women are more susceptible to the effects of Phenylethylamine and Serotonin than men (3). This illustrates why women tend to be chocoholics more often than men.
Three other chemicals and theories are used to explain why chocolate makes people feel "good." "Researchers at the Neuroscience Institute in San Diego, California believe that 'chocolate contains pharmacologically active substances that have the same effect on the brain as marijuana, and that these chemicals may be responsible for certain drug-induced psychoses associated with chocolate craving'" (4). Although marijuana's active ingredient that allows a person to feel "high" is THC (tetrahydrocannabinol), a different chemical neurotransmitter produced naturally in the brain, called anandamide, has been isolated in chocolate. "Because the amounts of anandamide found in chocolate is so minuscule, eating chocolate will not get a person high, but rather that there are compounds in chocolate that may be associated with the good feeling that chocolate consumption provides" (4).
In the body, anandamide is rapidly broken down into two inactive fragments by the enzyme hydrolase found in our bodies. In chocolate, however, there are other chemicals that may inhibit this natural breakdown of anandamide. Therefore, natural anandamide may linger longer, making people feel good longer when they eat chocolate (4).
Although chocolate contains chemicals associated with feelings of happiness, love, passion, lust, endurance, stamina, and mood lifting, scientists continue to debate whether it should be classified as an aphrodisiac. "'The mind is the most potent aphrodisiac there is,' says John Renner, founder of the Consumer Health Information Research Institute (CHIRI). 'It's very difficult to evaluate something someone is taking because if you tell them it's an aphrodisiac, the hope of a certain response might actually lead to an additional sexual reaction'" (2). Despite scientific difficulty in proving chocolate an aphrodisiac, it does contain substances that increase energy, stamina, and feelings of well being. The reality is that chocolate makes you feel good and induces feelings of being in love. Everyone appreciates receiving a gift of chocolate from a loved one because it makes you feel loved. Perhaps the historic euphemism associated with chocolate is what really provokes people to feel it is an aphrodisiac.
1)Johan's Guide to Aphrodisiacs
2)Looking for a libido lift? The facts about aphrodisiacs, Food and Drug Administration
3)Is chocolate an aphrodisiac?, By Janet Vine
4)Chocolate, aphrodisiac or prevention against heart attacks
Other Sources:
5)The American Heritage College Dictionary. 3rd Edition. USA: Houghton Mifflin Company. 1993.
DNA Fingerprinting in a Court of Law Name: Kyla Ellis Date: 2002-11-12 00:00:34 Link to this Comment: 3688 |
As we dive head first into the new millennium, we are eager to embrace new "modern" technologies and ideas. One such idea is that of identification through the analysis of DNA, or genetic "fingerprinting." But, is this an idea we should rush into accepting? Should a shard of bone or fingernail be enough evidence to convict a person, to send them to the electric chair? And how does it work, anyway? Forensics and DNA have always been the areas of biology that interested me the most, so I decided that for this paper, I would explore the controversy as well as learn more about the process.
Deoxyribonucleic acid, or DNA, is made up of two strands of genetic material spiraled around each other. Each strand contains a sequence of bases (also called nucleotides). In DNA, there are four possible bases: chemicals called adenine, guanine, cytosine and thymine. The two strands of DNA are connected through chemical bonds at each base. Each base bonds with its complementary base, as follows: adenine will only bond with thymine, and guanine will only bond with cytosine. Tightly coiled strands of DNA form thin structures called chromosomes, which can be found in the cell nuclei of plants and animals. Chromosomes are normally found in pairs; human beings typically have 23 pairs of chromosomes in every cell. Pieces of chromosomes (or genes) dictate particular traits in human beings (1). There are millions of possible patterns, which gives rise to different physical appearances in humans. Every person's DNA also has repeating patterns, which allows scientists to determine whether two samples of DNA are from the same person.
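Because the pairing rule is so mechanical, it is easy to express in a few lines of code. The short sketch below is purely illustrative (the example sequence is made up and this is not part of any forensic procedure); it simply builds the complementary strand using the adenine-thymine and guanine-cytosine pairings just described.

```python
# Build the complementary strand of a DNA sequence using the pairing rules:
# adenine (A) pairs with thymine (T), guanine (G) pairs with cytosine (C).
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the base-by-base complement of a DNA strand."""
    return "".join(PAIRING[base] for base in strand.upper())

sequence = "ATGCCGTA"          # made-up example sequence
print(complement(sequence))    # prints TACGGCAT
```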
To analyze the genetic patterns in one's DNA, scientists must go through extensive, meticulous steps. The first of these steps is called a Southern Blot. This is a brief outline:
The DNA must first be isolated, either by chemically "washing" it or by applying pressure to "squeeze" the DNA from the cell. Next, restriction enzymes cut the DNA into several pieces. The DNA pieces must then be sorted by size using electrophoresis, whereby the DNA is poured into wells of a gel and an electrical charge is applied to the gel. The positive charge is opposite the wells, and since DNA is slightly negatively charged, the pieces of DNA are attracted toward the positive electric charge. The smaller pieces move more quickly than the larger pieces, and thus travel farther down the gel (a toy simulation of this size-sorting step is sketched just after this outline). The DNA is then heated so that it denatures (the paired bases break apart), leaving single strands. The gel is then baked onto a sheet of nitrocellulose paper to permanently attach the DNA to the sheet. This completes the Southern Blot, which is now ready to be analyzed.
To do this, an X-ray is taken of the Southern Blot after a radioactive probe has been allowed to bond with the denatured DNA on the paper. Only the areas where the radioactive probe binds will show up on the film. This allows researchers to identify, in a particular person's DNA, the occurrence and frequency of the particular genetic pattern contained in the probe.
(For more details visit (8). )
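As promised above, here is a toy simulation of the size-sorting idea behind electrophoresis. It is only a cartoon: the fragment lengths are invented and the inverse-distance formula is a stand-in for real gel behavior, not a protocol from the sources.

```python
# Toy model of gel electrophoresis: shorter DNA fragments migrate farther
# toward the positive electrode in a fixed run time. Fragment lengths and
# the distance formula are invented purely to illustrate sorting by size.
fragments_bp = [1200, 350, 800, 150, 2000]   # hypothetical restriction fragments

def migration_distance(length_bp: int, gel_constant: float = 4000.0) -> float:
    """Longer fragments move less; this inverse relation is only a cartoon."""
    return gel_constant / length_bp

for length in sorted(fragments_bp):
    print(f"{length:>5} bp fragment travels ~{migration_distance(length):5.1f} mm")
# The 150 bp fragment ends up farthest from the wells, which is what lets
# the banding pattern on the finished blot be read off by fragment size.
```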
Every strand of DNA has pieces that contain repeated sequences of base pairs, called Variable Number Tandem Repeats (VNTRs), which can contain anywhere from twenty to one hundred base pairs. Our bodies all contain some VNTRs. To determine if a person has a particular VNTR, a Southern Blot is carried out. The pattern that results from this process is known as a DNA fingerprint. VNTRs come from the genetic information donated by our parents; we can have VNTRs inherited from either our mother or father, or a combination of the two, but never a VNTR that neither of our parents has. Because these combinations are inherited, each person's DNA fingerprint is unique.
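One way to see why a multi-locus VNTR match is so persuasive is the so-called product rule: multiply together how common each observed repeat count is in the population. The sketch below is hypothetical throughout; the locus names, repeat counts, and population frequencies are invented for illustration, and real forensic statistics are considerably more careful (accounting for relatives, population substructure, and lab error).

```python
# Illustrative "product rule" for comparing two VNTR profiles.
# Locus names, repeat counts, and population frequencies are invented;
# real casework uses validated allele-frequency databases.
from math import prod

suspect = {"locus_1": 14, "locus_2": 9, "locus_3": 21, "locus_4": 11}
crime_scene = {"locus_1": 14, "locus_2": 9, "locus_3": 21, "locus_4": 11}

# Hypothetical frequency of each observed repeat count in the population.
frequency = {"locus_1": 0.08, "locus_2": 0.12, "locus_3": 0.05, "locus_4": 0.10}

if suspect == crime_scene:
    # Chance that a random, unrelated person would match at every locus.
    random_match = prod(frequency[locus] for locus in suspect)
    print(f"Profiles match; random match probability ~ {random_match:.1e}")
else:
    print("Profiles differ at one or more loci: exclusion.")
```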
I notice that this is a very involved process, and having performed it myself, I can assure you it is difficult and time consuming. The possibilities for human error are definitely there. If we were to rule out human error, however, the accuracy of these tests far surpasses that of any other identification method we have so far. The closest thing we have is fingerprint analysis, and fingerprints can be smudged or otherwise distorted. Fingerprint experts never give evidence unless they are 100% sure, meaning they had the whole fingerprint and found an exact match. One expert claims that if fingerprinting were introduced today, there would be a terrible time convincing people of its validity (2). However, since it has been in use for so long, it is widely accepted, and therefore seen as more "valid" than DNA identification. People are comfortable with it.
DNA is also easily contaminated. Since the results are derived from microscopic elements, the slightest disturbance can be a factor. It is even possible that some of the expert's own genetic material could be mixed in with the sample, and no one would know. The relative "newness" of DNA fingerprinting is another factor; people don't understand it like they would fingerprints. In a court of law, lawyers can hold up the pictures of the two matching fingerprints, and the evidence is right in front of the jurors' faces. With DNA, the evidence is harder for people with no experience in forensic science to grasp, and they essentially have to take the scientist's word for what they are seeing (3). The last problem is that of DNA sample size and age. The smaller the sample, the more room there is for error in testing. The age of the specimen also matters; if the sample is old and small, an error-free test is less likely.
Putting the doubts aside, DNA can be a valuable tool in criminal justice. So far, at least 10 people on death row have been pardoned due to DNA evidence examined after their initial trials (4). There was a case in 1999 of a man by the name of Clyde Charles who was convicted of aggravated rape and sentenced to life imprisonment. He served nineteen years before he was finally proclaimed innocent due to DNA tests (5). This is a chilling reality that we have to face: have innocent people been convicted of horrendous crimes and put into jail, or even executed, while the guilty go free?
In my research, I also got the impression that part of the controversy surrounding DNA fingerprinting is the fact that courts of law do not want to admit they are wrong. As I mentioned above, convictions of innocent people and acquittals of guilty people do not reflect well on our legal system. No one wants to admit making mistakes and therefore being possibly inept at doing their job, especially if their job determines who is sent to death row. It is a bit of an embarrassment to admit our legal system could have such a huge glitch. One case that I got this impression from was that of Joseph Roger O'Dell, arrested for and convicted of murder, rape, and sodomy of a young woman. From death row, he made repeated pleas for a DNA test, but he was refused each time. After his death, the last of the DNA evidence in his case was burned without any further testing (5).
As of May 2001, more than 85 people in the United States had been set free through post-conviction DNA testing, and, as I said above, 10 of them had been on death row. The FBI has been analyzing DNA in rape and rape-homicide cases since 1989. When arrests were made on the basis of other evidence in such cases, biological specimens were sent to the FBI for DNA analysis. In 26 percent of the cases, the primary suspect was excluded by DNA evidence (6). The question is, how many of these suspects would have been found not guilty without DNA evidence?
This country is committed to the idea of justice. If we are sending people who are not guilty to jail, that undermines our entire conception of our legal system. I believe that forensic science is a huge step in the right direction toward justice.
1)How Is DNA Fingerprinting Done
2)Fingerprint Identification: Craft Or Science?
3)Your DNA ID Card
4)How DNA Evidence Works
5)The Case For Innocence
6)How DNA Technology Is Reshaping Judicial Process and Outcome
7)DNA files
8)Southern Blot
Nicotine: How Does It Work? Name: Sarah Fray Date: 2002-11-12 01:16:51 Link to this Comment: 3691 |
Nicotine is a colorless liquid that smells like tobacco and turns brown when it is burned (2). It is the chemical in tobacco products that interacts with the brain and causes addiction. The use of tobacco products such as cigarettes, chew, or cigars allows the nicotine to move quickly throughout the body and the brain. Nicotine can be absorbed through the mucosal linings and skin of the nose and mouth, or through inhalation. When inhaled, nicotine is absorbed by the lungs and moves into the bloodstream, from which it reaches the brain in less than eight seconds (4).
The effects of nicotine on the human body are diverse. In high concentrations, through the ingestion of some pesticides or the consumption of tobacco products by children, nicotine can cause convulsions, vomiting and death within minutes due to paralysis. However, in smaller doses nicotine has much milder effects. Nicotine has desirable properties such as heightened awareness and increased short-term memory. Other effects of nicotine include quickened breathing and heart rate, constriction of the arteries, and stimulation of pleasure centers in the brain.
Nicotine and the Brain
The brain consists of billions of nerve cells that communicate through chemicals called neurotransmitters. Each neurotransmitter has a particular three-dimensional shape that allows it to fit into receptors located on the surface of nerve cells (4). Nicotine has a chemical structure that closely resembles that of the neurotransmitter acetylcholine. The similarity of the two structures allows nicotine to activate the cholinergic receptors naturally stimulated by acetylcholine. These receptors are located not only in the brain, but also in muscles, the adrenal glands, the heart and other parts of the peripheral nervous system (1). They are involved in numerous bodily functions such as muscle movement, breathing, heart rate, learning and memory.
Nicotine, although very similar to acetylcholine, does not act exactly like the neurotransmitter and consequently causes the systems it affects to function abnormally. Nicotine triggers a spontaneous release within the brain of other neurotransmitters that affect mood, appetite and memory. Additionally, many systems, such as the respiratory and cardiovascular systems, are sped up (4). Nicotine also prompts a release of glucose, leaving smokers marginally hyperglycemic.
Another significant interaction between nicotine and the brain is the release of the neurotransmitter dopamine in the nucleus accumbens (1). Dopamine is a neurotransmitter produced in the pleasure center of the brain. Normally this area of the brain serves to reinforce healthy habits, such as producing dopamine when the body is hungry and then receives food. The production of dopamine causes feelings of reward and pleasure (4).
Recent studies have shown that nicotine selectively damages the brain. Amphetamines, cocaine, ecstasy and most other addictive drugs damage a particular half of the fasciculus retroflexus, a bundle of nerve fibers located above the thalamus. It has been discovered that nicotine affects the other half of these fibers, which are involved in emotional control, sexual arousal, REM sleep and seizures (3).
The Addiction
Nicotine is known to be an addictive drug. Less than seven percent of all smokers who attempt to quit are successful (2). While some of the addiction may be attributed to the social and psychological patterns created by using products containing nicotine, there is also substantial evidence that the addiction is chemical.
Nicotine strengthens the connections responsible for the production of dopamine in the ventral tegmental area (VTA) of the brain's pleasure, or reward, center (5). This strengthening results in a release of dopamine, the process the brain normally uses to reinforce positive behavior. Nicotine artificially stimulates this process, thus encouraging repeated nicotine intake (5).
Nicotine is quickly metabolized and is altogether absent from the body within a few hours, so its acute effects are short-lived. This creates the need for multiple doses of nicotine throughout the day in order to prolong the effects and fend off withdrawal (4). Repeated doses of nicotine also create a tolerance: in order to obtain the desirable effects, the body must consistently take in more of the chemical.
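To make concrete why such frequent redosing is needed, here is a minimal Python sketch of simple first-order elimination. The two-hour half-life is an illustrative assumption of mine; the sources above say only that nicotine is essentially gone from the body within a few hours.

    # Toy model of nicotine leaving the body, assuming simple first-order
    # (exponential) elimination. The half-life is an illustrative assumption,
    # not a figure taken from the sources cited in this paper.
    ASSUMED_HALF_LIFE_HOURS = 2.0

    def remaining_fraction(hours_since_dose):
        """Fraction of a single nicotine dose still present in the body."""
        return 0.5 ** (hours_since_dose / ASSUMED_HALF_LIFE_HOURS)

    for t in (1, 2, 4, 6, 8):
        print(f"{t} h after a dose: {remaining_fraction(t):.0%} remains")

Under this assumption, only a few percent of a dose remains after six to eight hours, which fits the point that smokers must dose repeatedly through the day to hold off withdrawal.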
Ending a nicotine habit induces both a withdrawal syndrome that lasts about a month and intense cravings that may last over six months. The withdrawal syndrome includes symptoms such as irritability, attention deficits, sleep disturbances and increased appetite.
The scientific community is seeking more specific data in order to trace the exact portion of the brain responsible for the force of nicotine addiction. Many studies point to a particular portion of the receptors with which nicotine interacts as a key component in the process of nicotine addiction. The cholinergic receptors that nicotine stimulates are made up of multiple subunits. In one study, the beta subunit was isolated and removed from a number of mice. In subsequent experiments, the mice missing the beta subunit did not self-administer nicotine, while the mice with the beta subunit intact did (1).
Why Does Any of This Matter?
Nicotine addiction is estimated to account for 70 times more deaths in the United States than all other drug dependencies combined (5). Approximately one of every six deaths in the United States is attributed directly or indirectly to smoking (4). The activities associated with nicotine use can cause respiratory problems, lung cancer, emphysema, heart problems, and cancers of the oral cavity, pharynx, larynx and esophagus.
Surveys show that around 90 percent of smokers would like to quit. Unfortunately, because of the addictive qualities of nicotine, very few (less than ten percent) are successful (3). Nicotine replacement therapies allow for a lower intake of nicotine without the harsh effects of tobacco-based forms of nicotine use. There are also non-nicotine therapies that use pills such as bupropion, an antidepressant, to help quiet the withdrawal effects. Lastly, there are behavioral treatments, such as clinics and formal session-based counseling, that have been developed and are often used in combination with one of the chemical supplements (2).
The continual research on the specific interactions between the brain and nicotine has the potential to create a more effective strategy for those who are seeking to stop using nicotine. It may also be possible to discover how nicotine causes the positive effects such as heightened awareness and strengthened short term memory. This could lead to a method to obtain such effects without receiving the undesirable aspects of nicotine use.
Sources
1) Connecticut Clearinghouse, Connecticut State resource center for information on, and help with, alcohol and drugs
2) http://ericcass.uncg.edu, Educational Information Resources Center at University of North Carolina
3) www.cnn.com
4) www.nida.nih.gov, National Institute On Drug Abuse informational site
Method for Madness: The Body's Impact on a Person' Name: Tegan Goer Date: 2002-11-12 14:19:51 Link to this Comment: 3694 |
I am, among other things, an actor. As such, people occasionally ask me how I get myself to "feel" frustrated or desperate or happy or surprised or any of the other emotions I have been asked to attempt to portray over the years that I've been doing theatre. And when asked, I have had to carefully explain that what an actor does on a stage is not exactly feeling, but rather expressing feeling. That is, being able to act is not having the ability to feel emotions; it is not some kind of empathy for the feelings of others divinely distributed to some people with artistic temperaments (as much as some people, those who consider themselves actors and those who do not, might think). Acting is the craft of expressing emotions, creating with your physical self an image for an audience.
And here's the interesting part, the part that relates to myself as a student of biology in addition to myself as an actor: a lot of actors will report (the one writing this paper included among them) that the more closely an actor is able to duplicate the physical embodiments of an emotion, the more that actor can "feel" whatever emotion he or she is trying to reproduce. As best I can see it, emotions simply are the physical manifestations we register and associate with them. Fear is a quickened heart rate, a trembling voice, and short, quick breaths, along with some other physical reactions that everyone knows but cannot express easily in words; this is how our brains identify and catalog the sensation "fear". I wondered, though, if this contention was just a load of pseudo-scientific nonsense I had somehow managed to concoct from studying acting theory.
In antiquity, physicians and philosophers were convinced that physical humors controlled emotion, and that emotional imbalances were directly a result of improperly balanced internal fluids. As years passed, this connection of the emotional to the physical began to fade from medical theory. It did not seriously reappear until physicians like William Cullen and Robert Whytt began once again to research the "physiological connection between emotions and disease." (1) For quite a while, the body of research into the matter seemed satisfied with the notion that there was some connection, that emotional and physical states affected one another in some intangible and unspecific way. Human bodies and minds are mysterious and individually unique: the best most researchers could come up with were some interesting case studies, but little in the way of general, applicable theories.
In a separate field of intellectual endeavor, Stanislavsky, a Russian actor-turned-director at the end of the nineteenth century, was advising his actors to study not literature or poetry or philosophy but rather biomechanics. Emotions could be recalled by recreating the actions linked to them. "It's possible to repeat this feeling through familiar action, and, on another hand, emotion getting linked with different actions, force actor into familiar psycho-physiological states." (2) (Grammatical error his, not mine.) Stanislavsky's Method is pretty standard acting technique: you can find it in any acting textbook written since the time of his death. Only recently has there been any scientific investigation into evidence to confirm his theories (I have no idea if the scientists doing the inquiries were influenced by Stanislavsky).
Recent research into the connections between facial expressions and the emotions associated with them, for example, shows that while changing moods affect a person's facial expression, changing the expression also changes a person's mood. It is now thought that "involuntary facial movements provide sufficient peripheral information to drive emotional experience," (3) a theory known as the facial feedback theory. In a study where two groups were asked to rate the funniness of various cartoons while either holding a pencil with their teeth without touching their lips (creating a smile-like expression) or holding the pencil with their lips only, without touching their teeth (frowning, as it were), the "smiling" group rated the cartoons as substantially funnier than either the "frowning" group or the control group, which did neither. (4) Autonomic changes similar to those seen with certain emotions were experienced by participants who were instructed to make certain faces; that is, changes in the circulatory and nervous systems were observed when facial expressions were altered. One suggested explanation has to do with how the brain receives oxygen: "Blood enters the brain by way of the carotid artery. Just before the carotid enters the brain, it passes through the cavernous sinus. The cavernous sinus contains a number of veins that come from the face and nasal areas and are cooled in the course of normal breathing. Thus, there is a heat exchanged from the warm carotid blood to the cooler veins in the cavernous sinus." (4) While frowning, for example, the contraction of some facial muscles alters the flow of air and blood to the brain, resulting in the brain warming up. Smiling, on the other hand, widens the face and nasal passages, resulting in a more cooling effect on the brain. So Stanislavsky's suggestion to his actors that in order to better express their characters' emotions they must first replicate their physical state is based in some real, if in his case intuited, science.
So why, if an actor can alter his emotions by altering his physical state, can't a person rid herself of depression, say, by forcing herself to smile all the time? Well, not all of the physical aspects of our emotional states can be duplicated easily or voluntarily. The same way a musician masters an instrument and can then perform pieces he has not written, an actor uses his body and voice to do the same. Some actors have better control over their "instrument" than others, just as musicians have varying degrees of skill. And just as the sound of an instrument can be drastically altered by outside influences (an electric guitar with an amplifier has a completely different sound than an acoustic guitar without one), our bodies, and therefore our emotions, can be altered through the use of external stimuli, such as taking an anti-depressant or losing a fistfight.
In order to be convincing to an audience, an actor need only reproduce the visible and audible manifestations of any emotional state he is trying to convey: that is, look happy, sound angry, and so on. In duplicating just the outward embodiments, small bits of the emotions can creep into an actor's mind, but ultimately, an actor is not out to feel a certain way. He or she is out to make an audience feel a certain way. A final thought on the science behind an actor's believability, and facial expressions: certain ones of the forty-some-odd muscles in the face are much more difficult to voluntarily control than others. The ones that move when a person is actually smiling, and not faking a smile, for example, create subtle differences in the contours of the face, differences that the average person may notice subconsciously but not be quite able to pinpoint; the way some people can tell when they are being lied to but cannot say just why. Experts trained in reading faces can note the differences. And yet, some people are better at controlling these less-voluntary muscles than others. By some estimates, about ten percent of the population has the ability to control some or most of these muscles: natural actors or liars whose facial expressions are extra-believable because the average person can't fake them. Woody Allen, for example, is able to control one of the less voluntary muscles in the face used to express sadness, according to one researcher, one that moves his eyebrows up and down for emphasis as he speaks. (5) This actor/student of biology is aware that she is able to voluntarily move a few of the muscles in her face usually used to express legitimate anger, involving a slight raising of the eyebrows, a tightening of the jaw, and a pulling of the ears closer to the head. Nearly anyone can learn to move their less voluntary muscles: while it is easier for some than for others, all that is required is diligence, and careful, creative observation. I guess that makes stagecraft and science fairly similar after all.
2) Method Acting For Directors, A sort of lousy translation, but a good overview.
3) About.com, Bi-Polar Disorder: Smiling is Good For You
5) Emotions and Smiling, An interview with Paul Ekman about his research on facial expression and other fun stuff. You might try this link if the other one doesn't take you straight to the article.
Aromatherapy: Why it makes 'scents' Name: Stephanie Date: 2002-11-12 21:22:43 Link to this Comment: 3708 |
In countries around the globe, scented oils have been used as medicines for thousands of years, varying in their therapeutic values and uses. The ancient Egyptians often used scented oils for their therapeutic effects, as different types of medicines for ailments or diseases (1). In more recent times, scented candles have bombarded the market claiming remedial benefits for mood and cognition. And institutions for medical practice in clinical aromatherapy can be found around the world. But while interest in aromatherapy has heightened over the years, so has the skepticism surrounding the practice. Product claims to alter health or provide cures have only contributed to the cynicism. Though any individual product's capability to enhance a person's state or mood is debatable, the fundamental theory linking mood to distinct scents is in fact a viable speculation. Underneath the commercialized hype lies scientific data supporting a correlation between scents and mood. A number of recent studies relating to the topic imply the presence of a link and further investigate the olfactory sense and its specific stimulation in the brain. At its most basic level, aromatherapy can effectively be used to alter moods or states.
The process of scent stimulation begins when the molecular chemicals that make up a scent are inhaled through a person's nose. After traveling through the nasal passage they reach cilia, hair-like fibers connected to the olfactory epithelium, a highly concentrated area of neurons that can send messages to the brain. When a molecule binds to the cilia, the neurons are prompted and send a signal along their axons to the brain, which processes the perception of smell in what is known as the olfactory bulb, located in a region behind the nose (2).
"Humans can distinguish more than 10,000 different smells (odorants), which are detected by specialized olfactory receptor neurons lining the nose.... It is thought that there are hundreds of different olfactory receptors, each encoded by a different gene and each recognizing different odorants" (3).
The process that occurs after the scent "reaches" the brain has yet to be fully understood, but it seems that the olfactory system is not the only region of the brain that receives the message transmitted when a scent is smelled. In one study, the anxiety of patients undergoing an MRI was observed. When patients were immersed in a vanilla-like scent, 63% of the patients showed a reduction in anxiety (4). In another study, it was found that spiced apple and powder-fresh scents "improved performance on a high-stress task" (4). In an Austrian study, the effect of a citrus or orange scent in the waiting room of a dental office was studied. While patients were waiting for dental treatment, they were immersed in an orange smell. It was found that the odor had a relaxant effect, mostly on women. A lower level of anxiety, a more positive mood and a greater sense of calmness were found to be direct effects of the orange odor, in comparison to the control group (5).
Many businesses have even subscribed to the idea of altering mood or state through specific scents and used it to increase production. One Japanese company began using what was dubbed "environmental fragrancing," in which air-conditioning ducts released various therapeutic scents every six minutes to improve alertness or relieve stress. It was found that the introduction of a lemon scent reduced keyboard errors by 50% (4).
While these studies show the effects of specific types of scents and their link to mood, a link has also been determined between the degree of pleasantness of a scent to an individual and its capability of altering mood. In one such study, habitual smokers were given a variety of different scents to rate on a scale of pleasantness. After being nicotine-deprived for a significant amount of time, the smokers were given the scents and the effect on their craving was observed. It was concluded that the cravings diminished when a non-neutral odor was smelled. In particular, those odors which the smoker had rated as "unpleasant" decreased cravings the most (6).
In another related study, observations were made of the heart rate of patients who inhaled unpleasant scents. It was determined that heart rate increased when the patients inhaled unpleasant scents or were asked to rate them (7). This study serves as support for the theory that the olfactory system first rates a perceived scent on a scale of pleasantness. In conjunction with the other studies, it could be assumed that ratings of "pleasantness" vary from person to person and may even be culturally derived. The effects of vanilla, for instance, seem to vary among cultures:
"When Americans smell a strong odor, it seems to remind them of their animality or mortality. On the other hand, vanilla is known to be comforting to Americans, but has no particular effect on Japanese. This may be because it is an unfamiliar smell and therefore has no link to the granny's kitchen of their childhood" (4).
Consequently, a connection to memory or past experience may also influence the degree of personal pleasantness of certain smells.
Through the various studies concerning the correlation between scents and the responses they trigger in other regions of the brain, it can be seen that there is in fact a link. The finding of "universal" (though perhaps culturally limited) mood-triggering scents gives further evidence of the olfactory system's connection with other parts of the human brain. The idea of using scents as a means of altering mood or state of mind is valid. Evidence suggests that aromatherapy is a well-founded science and one in need of even deeper investigation. Perhaps scents and their specific associations could be used on a greater scale in the future, to increase productivity, improve mood or just enhance well-being. It seems that this age-old science, often waved off as "phony," has a very scientific base. When a greater understanding of the brain and its related regions is attained, perhaps the science will, once again, become more widely accepted.
2)
4)The Role of Smell in Language Learning
5)Ambient Odor of Orange in Dental Office Reduces Anxiety and Improves Mood in Female Patients
6)Effects of Olfactory Stimuli on Urge Reduction in Smokers
7)Influence of affective and cognitive judgments on autonomic parameters during inhalation of pleasant and unpleasant odors in humans
Rigor Mortis: An Examination of Muscle Function Name: William Ca Date: 2002-11-12 22:57:56 Link to this Comment: 3712 |
Soon after the time of death, a body becomes rigid. This stiffening is the result of a biochemical process called Rigor Mortis, Latin for "stiffness of death". (1) This condition is common to all deceased humans, but is only a temporary state. The slang term "stiff", used to refer to a dead person, originates from rigor mortis. Within hours of the time of death, every muscle in the body contracts and remains contracted for a period of time. Before I am able to explain the biological cause of this condition, I must first describe the structure of muscles and the process of muscle contraction.
Muscles have many different levels of organization, beginning with the individual muscle fibers. Muscle fibers are formed from many cells but have structures similar to those of an individual cell. The organelles found in a normal cell are also found in a muscle fiber, but are given different names: the plasma membrane is the sarcolemma, the endoplasmic reticulum is the sarcoplasmic reticulum, mitochondria are sarcosomes, and the cytoplasm is called sarcoplasm. Muscle fibers are composed of contractile units called sarcomeres, which are connected end to end along the length of the muscle fiber. The components of an individual sarcomere are a thick and a thin filament, myosin and actin, respectively. The ends of a sarcomere are called Z lines. Rows of actin extend from these Z lines but do not meet in the middle. Spaced in between the rows of actin, and not connected to either Z line, are the myosin filaments. Strands of actin take on a double helix shape and are wound with a long protein strand called tropomyosin, spotted with small protein complexes called troponin. Underneath the tropomyosin are the myosin binding sites, where myosin is able to bind to the actin. Along a strand of myosin are "heads" that protrude toward the actin. Actin and myosin are the central actors in muscle contraction. (2)
Muscle contraction begins in the brain with a nerve impulse sent down the spinal cord to a motor neuron. The action potential started in the brain is passed on to the muscle fibers through an axon where it is carried into a neuromuscular junction. (2) The neuromuscular junction, also referred to as the myoneural junction, releases acetylcholine when the action potential reaches the junction. When the acetylcholine comes into contact with receptors on the surface of the muscle fiber, a number of transmembrane channels open to allow sodium ions to enter. (3) This influx of sodium ions creates an action potential within the fiber which triggers a release of calcium ions from the sarcoplasmic reticulum.
Calcium ions filter throughout the sarcomeres and bind with the troponin complexes, causing a shift in the tropomyosin structure and exposing the myosin binding sites on the actin. A "power stroke" follows, wherein the myosin heads release the ADP and Pi that hold them in a cocked-back position and pivot, moving the actin filament along with them. Finally, ATP binds to the myosin heads, detaching them from the actin. Upon release from the actin, the ATP is broken down into ADP and Pi, providing the energy to return the myosin heads to their cocked position and renewing the cycle. (4)
The relaxation of a muscle depends upon the termination of the action potential that began at the neuromuscular junction. An enzyme within the muscle fiber destroys the acetylcholine, stopping the action potential that the acetylcholine produces. Calcium ions are therefore no longer released from the sarcoplasmic reticulum; in fact, the already free calcium ions are brought back into the sarcoplasmic reticulum. Finally, the myosin and actin are unable to bind, and the muscle cannot contract, because calcium ions are needed to expose the myosin binding sites. (2)
The supply of ATP is central to the continuing process of muscle contraction. ATP originates from three sources: the phosphagen system, the glycogen-lactic acid system, and aerobic respiration. In the phosphagen system, muscle cells store a compound called creatine phosphate in order to replenish the ATP supply quickly. The enzyme creatine kinase breaks the phosphate from this compound and the phosphate is added to ADP. This source of ATP can only sustain muscle contraction for 8 to 10 seconds. The glycogen-lactic acid system uses the muscles' supply of glycogen. Through anaerobic metabolism, the glycogen is broken down, creating ATP and the byproduct lactic acid. This method does not require oxygen and is able to supply more ATP than the phosphagen system, but at a slower rate. Finally, aerobic respiration allows glucose to be broken down into carbon dioxide and water in the presence of oxygen. The glucose comes from the muscles, the liver, food, and fatty acids. This method creates the most ATP and can do so for extended periods, but it is the slowest. (5)
With all of this information on how muscles work, it is now possible to explain the process behind rigor mortis. Death terminates aerobic respiration because the circulatory system has ceased working. (6) The muscles must therefore rely on the phosphagen and anaerobic systems to acquire ATP. As stated above, these sources provide only a small amount of ATP. This lack of ATP prevents the myosin heads from detaching from the actin. Meanwhile, calcium ions leak into the muscle fiber from the extracellular fluid and from the sarcoplasmic reticulum, which is unable to recall the ions. (6) The ions perform their task as if the body were alive, shifting the tropomyosin and troponin off the myosin binding sites. The muscle contracts when the myosin shifts, but the lack of ATP prevents the heads from detaching, and the muscle remains contracted. This process occurs in all muscles as the body becomes rigid.
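To make the role of ATP concrete, here is a minimal Python sketch of the cross-bridge cycle described above, reduced to a simple state machine. The state names and the assumption that ATP matters only at the detachment step are simplifications of mine, not terminology from the cited sources.

    # Simplified actin-myosin cross-bridge cycle. Calcium exposes the binding
    # sites; ATP is required only to detach the myosin head. With no ATP, as
    # after death, the head stays bound and the muscle stays contracted.
    def cross_bridge_cycle(atp_available, calcium_present, cycles=3):
        state = "cocked"
        for _ in range(cycles):
            if state == "cocked":
                if not calcium_present:
                    return "relaxed (binding sites covered by tropomyosin)"
                state = "bound"              # myosin head attaches to actin
            elif state == "bound":
                state = "stroked"            # power stroke pulls the actin filament
            elif state == "stroked":
                if not atp_available:
                    return "stuck in contraction (rigor mortis)"
                state = "cocked"             # ATP detaches and re-cocks the head
        return "cycling normally"

    print(cross_bridge_cycle(atp_available=True, calcium_present=True))   # living muscle
    print(cross_bridge_cycle(atp_available=False, calcium_present=True))  # after death

Running the sketch with ATP available reports normal cycling; running it without ATP halts at the bound state, which is exactly the trapped condition the paragraph above describes.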
Rigor mortis usually sets in within four hours, first in the face and the generally smaller muscles. The body reaches maximum stiffness within twelve to forty-eight hours. However, this timing may vary with environmental conditions; cooler conditions inhibit rigor mortis. (1) Rigor mortis is only a temporary condition. During the process, the body accumulates lactic acid through anaerobic respiration. Lactic acid lowers the pH of the muscles, and the contraction deteriorates. (7) The body loses its rigidity as the muscles decay. In conclusion, rigor mortis is the stiffening of the body due to a lack of ATP after death. Only temporary, the condition is environmentally sensitive and is believed to occur in all humans.
1)"How does Rigor Mortis work?"
2)"Contraction and Rigor Mortis"
4)"HowStuffWorks 'How Muscles Work'"
What's Owl Got To Do With It? Name: Catherine Date: 2002-11-13 02:59:04 Link to this Comment: 3713 |
Owls are famously wise birds; they know how many licks it takes to get to the center of a Tootsie Roll pop, they are Winnie the Pooh's best advisors, Harry Potter's messenger friends, and even Bryn Mawr's semi-mascots. Being so advanced, they are also products of animal adaptation and evolution. Because I know next to nothing about owls, I decided to undertake some research on snowy owl reproduction. The question I set out to answer was: "How do snowy owls reproduce, and how does that reflect an adaptive process of evolution?", because I know that there must be a clever way for these intelligent old birds to produce offspring. I just do not know what this method is.
Snowy owls' reproduction has many phases, including courting, mating, egg-laying, nest-building, taking care of the young, and the young leaving the parents' nest. Each of these illustrates how evolved and adapted to their environment owls are.
Male snowy owls apparently have a mating call and a ritual courtship procedure; courtship begins in midwinter and lasts through March and April, away from the breeding grounds. To attract the females, males will take off in exaggerated flight patterns while hooting loud, booming, repeated calls, or they will stand in open-wing postures, with the wings closest to the females angled slightly towards them, attempting to receive some attention. Their mating cries can be heard up to six miles away in the tundra region where the snowy owls reside. When the males see the females, they will swiftly snatch a gift lemming with their claws. They will land and place it somewhere visible to the females. Males may then push the gift towards the females, or spread their wings and waddle around the victim, concealing it. To satisfy still-uninterested females, the males may take off for more lemmings. They often feed their catches to the females. On the ground, males will "bow, fluff their feathers, and strut around with wings spread and dragging on the ground." (6)
Perhaps these rituals are not as "sophisticated" as those of humans, but this does seem like survival of the fittest to me. Owls have a proper courtship, removed from the responsibilities of raising young. Males must display their assets and give presents in order to win the females over. Females clearly have criteria for their partners, and evolution has taught them that they need partners who surpass others of their kind.
When the females are finally content with the males' courtship attempts, the couple fly off, soaring up and down through the sky. The males sometimes swoop to catch more lemmings and pass their game to the females. They begin mating and breeding in April to May. Because they are birds, snowy owls lay eggs.
The egg-laying process is itself a sign of evolution. Female snowy owls typically lay five to eight white eggs in a clutch, and sometimes up to fourteen, depending on the lemming population. Around every four years there is a predictable thinning of lemming numbers, and the owl pair may not breed at all. If a first egg clutch is unsuccessful in hatching, the couple probably will not renest for the rest of the year. The quantity of offspring thus tracks the fickle prey population, which is an agile adaptive system indeed. In this way, snowy owls achieve almost one hundred percent nesting success.
Eggs are laid about every other day, so that the older and stronger chicks will have an advantage if the food supply runs short later (they will consume most of the food their parents bring, and may possibly even slay younger siblings and eat them). The females incubate for about thirty-one to thirty-four days, keeping the eggs warm, while the males guard the nest and do all of the hunting and bringing in of food. Survival of the fittest is thus incorporated into competition within the species, while the parents protect the still-unhatched eggs from other predators. Offspring may compete with each other, even to the death, but intruders are not allowed to interrupt the growing process.
To build a nest for their eggs, snowy owls make a hollow on the exposed, snow-free, dry tundra ground with shallow talon scrapes, approximately three to four inches deep and one foot across, on top of an elevated rise or mound. Gravel bars and abandoned eagle nests are occasionally used. The nests are almost exclusively made on the ground, lined with moss, lichens, scraps of vegetation and the owls' own feathers. Sites are near good hunting areas and command a view of the surroundings. Some areas are used only once for breeding, but others are occupied for several years at a time. Territories around the nests range anywhere from one to six kilometers and do overlap with other pairs' regions.
Clearly, this is a well "thought-out" plan for the parents to keep their young safe and sound. They even consider where they will have easy access to food, but will be difficult to find. Evolution has made this a habit, or else the owls would not survive.
Snowy owl eggs begin hatching one by one, over an interval of about a month. The chicks have adapted to life inside the shell by developing a temporary "egg tooth" that they use to crack through it. The chicks are blue-grey while they stay in their hole in the ground, and will be covered in snowy-white down by around three weeks after hatching. At about the same time (sixteen to twenty-five days), their primary wing feathers grow in, and they may begin to wander away from the nest. But this is before they can fly, and so both parents feed and tend to the young until then.
Both the male and female owls will feed, protect, and bring up their chicks until the young are ready to fly away and hunt on their own. Nestling owls take about two lemmings per day, and a family of snowy owls may eat up to one thousand five hundred lemmings before the owlets are able to disperse. Because they are so defensive, snowy owls may aggressively attack intruders up to one kilometer away from their nest sites. Males will sometimes fight in midair, and females may defend their territories or potential mates against other owls of their sex. Males may also defend their young using a "crippled bird" act to lure predators away from the nest. They have developed such scenarios in order to survive.
Owlets fledge in about forty-three to fifty-seven days, by which time they have also become able to search and hunt for food themselves. The young clearly require an entire summer's worth of special care from the owl parents. Adaptation has made snowy owls smart reproducers with wise habits and precautions.
1) Enchanted Learning
2) Lady Wild Life's Endangered Wildlife
3) Minnesota Department of Natural Resources
4) Oregon Zoo Animals
5) Ross Park Zoo
6) The Owl Pages , Information About Owls
7) Tribune-Review
8) University of Michigan
Why Can My Professor Not Match His Clothing: Is Co Name: Margot Rhy Date: 2002-11-16 22:49:22 Link to this Comment: 3759 |
Every day when I go to my first class, my professor's wardrobe offends me. He somehow thinks that he can get away with wearing brown pants, navy socks, and black shoes in the same outfit. Eventually, I had to ask myself whether my judgments are just really too harsh or whether my opinions have scientific support. If my professor is not disgusted by his clothing choices while I am, is color just a subjective experience? This question led to my investigation of the true meaning and essence of color. Where does color come from, chemically and physically, and how do our eyes perceive it? Do colors really have relationships, ones which my professor seemingly does not understand, and can science explain them?
Color is a difficult quality to explain because it is the one characteristic that can mark the difference between two objects exactly the same in all other physical traits, such as size, shape, and texture (2). Putting words to the difference between an item that is red and one that is yellow is harder than describing the difference between an item that is tall and one that is small. It is almost impossible to avoid purely subjective and emotional words when describing color. Also, we only know how to assign a specific word to a color because we are trained to. How else would we know to give red the word "red" if a kindergarten teacher never held up those color flashcards? Therefore, in order to understand what color really is, it is necessary to understand how it is produced.
Colors can be defined by two different processes. The first is the physical dispersion of white light, and the other is the interaction between electrons within a molecule and light. White light is a mixture of many colors, and when it hits, for example, a prism, it splits into its different components in a flat spectrum. Each component of the light has its own wavelength, thus yielding what we perceive as color (1).
A chemistry-based approach to defining color deals with the energy of an electron inside a molecule. An electron can be excited from a lower orbital to a higher orbital by absorbing a specific wavelength of light, and "the loss of this wavelength from the balanced white light source results in color (1)." Seeing pigment color is a process dependent on the actual molecules of the object, how those molecules interact with light, and how our eyes perceive that interaction. This can be explained by the thought experiment of turning out the lights on a red chair: did the red chair become any less red? The answer is "yes". Even though turning off the lights does not change the molecular structure of the chair, red can only exist because of the light; the electrons in the molecules of the chair can only interact with light that is present. The chair is not red because the molecules create red. The molecules in the chair's pigment absorb all other wavelengths and reflect certain wavelengths back (1). Furthermore, this process only matters when our eyes process the light. This is the step in which our visual system captures those wavelengths, processes them through our retinas, and interprets them in our brains, allowing us to give one word, "red," to that whole experience. However, both processes send the same wavelengths, so the eyes and brain know that red means red, no matter whether we see it in a rainbow or in the pigment of the chair (1). The eyes and brain can do this because of the nature of our biological visual system.
The biological process that enables us to see color is a subjective experience. It begins with "the stimulation of the light receptors in the eyes," leading to the conversion of light stimuli or images into signals, and then the "transmission of electrical signals containing vision information to the brain through the optic nerves (5)." We are capable of seeing color because of the photoreceptors in the retina of the eye that are sensitive to light. There are two kinds of photoreceptors in the retina: rods and cones. Rods are receptive to amounts of light while cones are sensitive to different colors (2). There are three types of cones, each sensitive to a different range of wavelengths of the visible spectrum: long-wavelength or "red" (R) cones, most sensitive to greenish-yellow wavelengths; middle-wavelength or "green" (G) cones, most sensitive to green wavelengths; and short-wavelength or "blue" (B) cones, most sensitive to blue-violet wavelengths (6). From this information arises the "trichromatic color theory," which says the primary colors are red, blue, and green (6). Basic neural programming transforms the outputs of these three cone types into four channels of chromatic color signals and a colorless channel that determines brightness. Therefore, our perception of color comes from the amount and type of light being absorbed by each cone type (2). Our color vision also follows some basic rules: stimulation of the R and G cones gives the perception of red and green; when these two cone types are stimulated about equally, we perceive yellow; and stimulation of the B cones creates the perception of blue (6). What makes this process subjective is the fact that the molecules that make up my eyes and brain are not the same molecules that make up everyone else's. Therefore, no two people can perceive color the same way. However, even though color is a subjective experience, the idea of complementary colors can exist. Newton made the first arguments for this.
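Before turning to Newton, here is a minimal Python sketch of the cone responses just described. The peak wavelengths are rough, commonly quoted approximations, and modeling each cone's sensitivity as a Gaussian with an 80 nm spread is purely an illustrative assumption, not how real cone curves are measured.

    import math

    # Rough peak sensitivities (nm) for the three cone types; the peaks are
    # approximate, and the Gaussian shape is an illustrative assumption.
    CONE_PEAKS = {"R (long)": 565, "G (middle)": 535, "B (short)": 430}
    SPREAD_NM = 80

    def cone_responses(wavelength_nm):
        """Relative response of each cone type to a monochromatic light."""
        return {cone: round(math.exp(-((wavelength_nm - peak) / SPREAD_NM) ** 2), 2)
                for cone, peak in CONE_PEAKS.items()}

    print(cone_responses(650))  # long-wavelength light: the R cones respond most
    print(cone_responses(450))  # short-wavelength light: the B cones respond most
    print(cone_responses(580))  # yellowish light: strong R and G, almost no B

What carries the color information is the pattern of responses across the three cone types, not the value of any single one; that pattern is what the opponent channels discussed below are built from.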
Newton formed a color wheel and, although this may not sound like much, it was revolutionary. His experiments and writings argue strongly against Aristotelian theory because he wanted to make clear that hue could be conceived and described separately from light and dark (4). To systematize this idea, he had to use the spectrum as a reproducible color reference to identify and name the hues in nature. However, Newton had to overcome the idea of colors existing in just a flat spectrum, considered only to be near or far from one another. By thinking in terms of a circle, Newton automatically discovered that colors can be linked together and have relationships. He did not, however, just bend the spectrum into a circle to form relationships between colors. He specified the rules for a color's placement on the wheel with geometry and physics. He determined that "saturated hues are on the circumference of the circle, white or gray is at the center, complementary colors are opposite each other on the circle, and the color of a mixture is located at the 'center of gravity' of all of the hues in the mixture weighted by their brightness (3)." Through these ideas, Newton gave words that eradicated the subjectivity in the process of linking colors together. He gave a color's placement on the wheel physical justification. However, Newton relied on the principles of light to make predictions about how both types of color mix. This was misleading because he did not take into account that pigment color does not work the same way as light coloration (3).
I am arguing, though, that there really is a connection between pigment colors in the form of a color wheel too. Yes, yes: pigment color cannot be clearly defined, because seeing it really is a subjective process dependent on ever-changing light and on our different biological systems. However, these concepts do not erase the scientific reasons that uphold the existence of complementary pigment colors.
Ewald Hering argued for the existence of non-arbitrary, scientific relationships between pigment colors when he proposed his own color wheel, based entirely on the subjective experience of color (6). Although he understood the trichromatic color theory, Hering was not satisfied with it. That theory cannot explain why yellow is psychologically just as primary as red or blue or green; nor can it explain why we can visualize mixing red with yellow to get orange but not red with green to get red-green (6). He devised a color wheel of his own to answer his questions. By saying that red, blue, yellow, and green are the four fundamental hues, contrasted to one another in pairs, blue to yellow and red to green, Hering made the connection between the subjectivity of our perception and the existence of complementary colors for pigment. He justified their relationship as opposites through the fact that they can be mixed to form any color that appears on the spectrum (6). It turns out that these complements, and not the raw R, G and B cone responses, are the better framework for describing the discrimination between two very similar colors and for predicting the hues in a color mixture. This is because the "translation from receptor responses to opponent codings happens in the retina: the brain never 'sees' the trichromatic outputs (6)." So the four colors of red, blue, yellow, and green, and not the three "primary" colors of the color receptors, led to a color model that respects how color is more than just our differing perceptions. This allows for judgments that can be made consistently over time by more than just one person, and for color theory. Hering had devised a color system that acknowledges our biologically subjective experience and looks beyond it to describe what else is going on in the relationships between colors.
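As a companion to the cone sketch above, here is a minimal Python sketch of a Hering-style opponent code. The weights are arbitrary placeholders chosen for illustration; they are not the actual coefficients used by the retina.

    # Convert illustrative R/G/B cone responses into Hering-style opponent
    # signals. The weights below are placeholder choices for illustration,
    # not measured retinal coefficients.
    def opponent_channels(r, g, b):
        return {
            "red-green": r - g,               # positive: reddish, negative: greenish
            "blue-yellow": b - (r + g) / 2,   # positive: bluish, negative: yellowish
            "brightness": r + g + b,          # colorless luminance channel
        }

    # Roughly equal R and G responses with little B (the pattern produced by
    # yellowish light) cancel the red-green channel and drive blue-yellow
    # strongly negative: the opponent signature of "yellow".
    print(opponent_channels(0.9, 0.9, 0.05))

This also shows why a "reddish green" is impossible to visualize: red and green sit at opposite ends of a single channel, so one signal simply subtracts from the other.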
Hering's color wheel just began to tap into the connections between pigment colors that do exist for mathematical and scientific reasons. Furthermore, his is not the only color wheel, but to explain all of them and to trace their evolution requires far more space than I have here. Hering's color wheel alone, however, gives me enough support to justify my opinions of my professor's wardrobe. Argue as he might that color is purely subjective because it is really an experience dependent upon light and molecules, how colors connect is not subjective. Colors have relationships that are upheld by science. The new questions that arise ask how much more the brain and psychology play a part in determining color relationships, why we tend to associate feelings with color, and how subjective more advanced color theory really is.
1) Dr. K.D. Luckas. Chemistry 104 Laboratory Manual: Supplement for the Major's Section. Bryn Mawr College; 2001.
2)RIT Munsell Laboratory, FAQ section on the RIT Munsell Color Science Laboratory website
3) page that discusses "Mixing with a Color Wheel", the color section on this website is a good source for information about all aspects of color
4)"Color Psychology", page that discusses "Color Psychology"
5) Molecular Expressions website , page that discusses light and color
6) Opponent Processing of Color, page that discusses "Light and the Eye"
Sleep: It Does a Lot More than You Think Name: Meredith S Date: 2002-11-24 13:08:48 Link to this Comment: 3859 |
Your doctor and your mother always recommended getting at least eight hours of sleep a night. Everyone knows that without the proper amount of sleep, the mind will be groggy the next day and, as a result, many more mistakes will be made, meaning that you should get a full night's rest before taking a test, or a little nap before a long drive in a car. But scientists are beginning to realize that sleep is not just a mental recharger; it is also important for the body. When a person sleeps, the body and mind are working just as hard as when the person is awake, correcting chemical imbalances, assuring proper blood sugar levels for the next day, and maintaining memory(1). Before electricity, people would generally go to sleep when the sun set and rise when it rose, assuring that they got enough sleep to maintain a healthy mind and body. But in a highly industrialized nation where the light bulb has expanded the working day to 24 hours rather than 12, it is becoming apparent that more and more people are sleep deprived. And with that deprivation, more and more scientists are realizing, comes not only a mental deficiency, but also a physical one.
It is not quite clear what physically happens in the body when one sleeps. Although scientists can read brainwaves on an EEG, they are not sure what exactly the brain is doing, although they acknowledge that dreaming is a large part of it. When the body is sleeping, the brain goes through four different stages in what are called the REM (Rapid Eye Movement) sleep cycles. At different stages, the brain is active in different ways, as seen on EEG brainwave readers. In the first stage of sleep, the body begins to relax, the heart rate slows, and people often feel as though they are falling or otherwise weightless. As the body slips into the second and third stages of the cycle, it is very apparent that the brain is not acting in the same way (i.e. emitting the same brain waves) as when the body is awake, but the activity is nevertheless still there. "This is where your body performs daily maintenance and healing, and where deep restful sleep occurs"(2).
If the body does not go through enough REM cycles, it cannot fully heal itself, making the body act sluggish the next day. Some signs of sleep deprivation include reduced energy, greater difficulty concentrating, diminished mood, and greater risk of accidents, including fall-asleep crashes. Work performance and relationships can suffer too, and pain may be intensified by the physical and mental consequences of lack of sleep(3). Thus, staying up all night to study for that test or finish that presentation actually is more detrimental than originally thought. Even everyday tasks, such as driving a car or even answering the phone, are affected by lack of sleep, making those people who work under such conditions a danger to themselves and others. The memory is also affected. During sleep, the brain may recharge its energy stores and shift the day's information that has been stored in temporary memory to regions of the brain associated with long-term memory(3).
Scientists are realizing more and more the physical effects of lack of sleep. Sleep deprivation weakens the immune system, preventing the body from being able to ward off infections and viruses. But it also affects the chemical balances within the body. Normally healthy men start to show effects of aging after only a few nights of less than adequate sleep. In a study done at the University of Chicago, Dr. Eve Van Cauter found that, "after four hours of sleep for six consecutive nights, healthy young men had blood test results that nearly matched those of diabetics. Their ability to process blood sugar was reduced by 30 percent, they had a huge drop in their insulin response, and they had elevated levels of a stress hormone called cortisol, which can lead to hypertension and memory impairment"(4). Such physical effects were unheard of before this study, and as a result, scientists are now looking into connections between lack of sleep and obesity.
One such consideration is how the body regulates sleep itself. The body is governed by what is called the circadian rhythm, a natural internal clock that resets itself every 24 hours(5). This clock releases different chemicals in the body, depending on whether it thinks the body needs to sleep or be awake. It is most easily set by direct or, as scientists are now discovering, indirect light. It is commonly known that it is harder to fall asleep with a light on than without one, and scientists are now realizing that this is because of the circadian rhythm. What this means is that every time you turn on a light, you are resetting the rhythm just a little, making the individual cells within the body release chemicals or produce the necessary proteins at the wrong time. Resetting the rhythm also means that the body is working overtime, making it more out of balance and less efficient. Thus, not only are the necessary chemicals imbalanced, but the body will age faster as it is forced to work for longer and longer hours without being able to restore itself.
This discovery, in connection with the dietary habits of many industrialized nations, could possibly help to explain another factor in obesity. The invention of the light bulb made the once unproductive and dark night as valuable and bright as day. Now people can work 24 hours a day, making industry and the lives that run it more crowded and hectic. More and more people are trying harder and harder to fit more into their days, and as a result, sleep is often slighted. The ultimate effect of this new lifestyle is more stress and a greater use of artificial light, which is now proving to reset the circadian rhythm as much as exposure to the sun. This means that in highly industrialized countries, in which artificial lights can make the night as bright as the day, people tend to be more sleep deprived(6). Scientists have shown that shining lights on rats causes them to wake earlier than if a light had not been shone. The same is true of humans(7). When the body awakens too early, it cannot fully restore itself, leaving its chemical imbalances uncorrected. Thus, while people think that they are waking up because the body has had enough sleep, it is really because the body's rhythm is off. And as a result, these people think that they are getting enough sleep, when in actuality, they are hurting the body more by throwing off its own natural clock and the natural processes that occur during sleep.
Sleep is a major part of our lives; this is evident from the fact that most scientists agree the average person needs between seven and ten hours of sleep a day, which means almost one third of an entire lifetime is spent sleeping. Once thought to be necessary for the brain's functioning alone, it is becoming more and more apparent that the body needs sleep just as much, if not more, than the brain. Besides reviving energy, sleep maintains the chemical balances that create better moods and ensures that the body is working at its best to ward off disease and even obesity. Living in a country that now forces the night to be just as industrially productive as the day also affects how much each person sleeps, regardless of when they try to go to bed. The body sets its own natural clock by comparing itself to light, be it the sun or, now, artificial light from light bulbs. As a result, the body can get confused as to when it is supposed to perform the actions necessary during sleep. Before the invention of electricity, the body and brain could easily set their own rhythm, maintaining themselves and warding off the now apparent physical effects of too little sleep. Now that individuals have more control over their body's natural processes via artificial means, it is more important than ever to realize that sleep does not just affect the mind but also the body.
1)http://www.sciencedaily.com/releases/1999/03/990316063522.htm
2)http://home.attbi.com/~rnagle557/dream_sleepscience.htm
3)http://www.fda.gov/fdac/features/1998/sleepsoc.html
4)http://abcnews.go.com/sections/2020/2020/2020_010330_sleep.html
5)http://home.attbi.com/~rnagle557/dream_sleepscience.htm
6)www.ivillagehealth.com
7)http://www.sciencedaily.com/releases/1999/03/990316063522.htm
Smallpox: Vaccination Decisions Name: Brie Farle Date: 2002-12-12 17:19:31 Link to this Comment: 4062 |
Everywhere you turn, there is foreboding speculation about smallpox. Smallpox may be the biggest threat to Americans concerned about bioterrorism. The last case of smallpox in the United States was in 1949, and routine vaccinations ended in 1972. Therefore, most Americans born after 1972 are completely unprotected, and most are completely uninformed about smallpox. (1)
On Friday, December 13th, President Bush will announce plans to vaccinate Americans for smallpox. Most Americans are quick to accept new vaccinations, such as the ever popular flu shot, in order to prevent getting sick; so why is it different this time?
Why has smallpox re-emerged as a target for vaccinations, even though the deadly disease was eradicated a quarter-century ago? If it poses even a potential threat, why aren't we immediately going back to the days of total vaccination? (1).
What is Smallpox?
It is believed that smallpox originated over 3,000 years ago in India or Egypt. For centuries, repeated epidemics swept across continents, decimating populations. As late as the 18th century, smallpox killed every 10th child born in Sweden and France. During the same century, every 7th child born in Russia died from smallpox. (2) Historically, smallpox is known for killing 30 percent of its victims and leaving survivors with permanent scars over large areas of their body, especially the face. (1)
Smallpox is an acute contagious disease caused by variola virus, a member of the orthopoxvirus family. Variola virus is relatively stable in the natural environment. It is transmitted from person to person by infected aerosols and air droplets spread in face-to-face contact with an infected person. The disease can also be transmitted by contaminated clothes and bedding, though the risk of infection from this source is much lower. In a closed environment, the airborne virus can spread within buildings via the ventilation system and infect persons in other rooms or on other floors in distant and apparently unconnected spaces. (2)
In the absence of immunity induced by vaccination, human beings appear to be universally susceptible to infection with the smallpox virus. (2) Infection can be prevented if a person is vaccinated within four days of exposure to smallpox, before symptoms even appear; but after that, there is no treatment.
Re-Emergence of Smallpox
Thanks to a worldwide immunization program, the last naturally acquired case of smallpox was recorded in Somalia in 1977. However, while smallpox was being eliminated, U.S. and Soviet laboratories were developing the virus as a biological weapon. Experts worry that scientists shared weaponized strains of the virus with nations such as Iraq and North Korea. (1)
During the Iraq crisis in 1990-91, U.S. military personnel were inoculated against a variety of biological threats; but not against smallpox. "There wasn't a concern in the first Gulf War," said Dr. Sue Bailey, former assistant secretary of defense for health affairs. Now, she said, "there is intelligence that tells us this is a higher risk." (1)
Bioterrorism experts paint frightening scenarios like these: Terrorists release weaponized smallpox into the air in crowded places, or a dozen people on a suicide mission infect themselves with smallpox and, when they are at their most contagious, walk around airports, infecting others. (1)
Vaccination Information
Edward Jenner's demonstration, in 1798, that inoculation with cowpox could protect against smallpox brought the first hope that the disease could be controlled. He believed that successful vaccination produced lifelong immunity to smallpox. (2) In the early 1950s, 150 years after the introduction of vaccination, an estimated 50 million cases of smallpox occurred in the world each year. This figure fell to around 10 to 15 million by 1967 because of vaccination.
When the World Health Organization (WHO) launched an intensified plan to eradicate smallpox, in 1967, smallpox threatened 60% of the world's population, killed every fourth victim, scarred or blinded most survivors, and eluded any form of treatment. (2)
Smallpox was finally pushed back to the horn of Africa and then to a single last natural case, which occurred in Somalia in 1977. In 1978, a fatal laboratory-acquired case occurred in the United Kingdom. The global eradication of smallpox was certified in December 1979, based on intense verification activities in countries around the world, and was subsequently endorsed by the World Health Assembly in 1980. (2)
Vaccination Concerns:
Jenner's work was monumental in helping the eradication of smallpox, but his predictions about the vaccine's potency were incorrect; vaccination usually prevents smallpox infection for just over ten years.
In December 1999, a WHO Advisory Committee on Variola Virus Research concluded that, although vaccination is the only proven public health measure available to prevent and control a smallpox outbreak, current vaccine supplies are extremely limited. (2)
A WHO survey conducted in 1998 indicated that approximately 90 million declared doses of the smallpox vaccine were available worldwide. Storage conditions and potency of these stocks are not known. (2)
Furthermore, existing vaccines have proven efficacy but also have a high incidence of adverse side-effects. Scientists say the smallpox vaccine, based on decades-old technology, presents a risk of side effects that include death. Based on studies from the 1960s, experts estimate that 15 out of every 1 million people vaccinated for the first time will face life-threatening complications, and one or two will die. Reactions are less common for those being revaccinated. Using these data, vaccinating the nation could lead to nearly 3,000 life-threatening complications and at least 170 deaths. (2)
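To make the scale of these per-million estimates concrete, the Python sketch below simply scales them to a population figure. It is only a rough illustration: the 288 million population figure and the simplification that everyone counts as a first-time vaccinee are assumptions of mine, not numbers from the essay's sources, which is presumably why the cited totals above (which account for revaccinees) come out lower.
    # Rough scaling of the per-million rates quoted above. The population figure
    # and the "everyone is a first-time vaccinee" simplification are assumptions
    # for illustration, not numbers taken from the essay's sources.
    def expected_events(population, rate_per_million):
        return population * rate_per_million / 1_000_000

    us_population = 288_000_000  # approximate U.S. population in 2002 (assumption)

    complications = expected_events(us_population, 15)   # life-threatening complications
    deaths = expected_events(us_population, 1.5)         # midpoint of "one or two" deaths

    print(f"~{complications:,.0f} life-threatening complications")
    print(f"~{deaths:,.0f} deaths")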
Four main complications are associated with vaccination, three of which involve abnormal skin eruption. Eczema vaccinatum occurred in vaccinated persons or unvaccinated contacts who were suffering from or had a history of eczema. In these cases, an eruption occurred at sites on the body that were at the time affected by eczema or had previously been so. These eruptions became intensely inflamed and sometimes spread to healthy skin. Symptoms were severe. The prognosis was especially grave in infants having large areas of affected skin. (2)
Progressive vaccinia occurred only in persons who suffered from an immune deficiency. In these cases the local lesion at the vaccination site failed to heal, secondary lesions sometimes appeared elsewhere on the body, and all lesions spread progressively until the patient died, usually 2 to 5 months later. As vaccination ceased in most countries prior to the emergence of HIV/AIDS, the consequences of the currently much larger pool of persons suffering from immunodeficiency were not reflected in recorded cases of progressive vaccinia. (2)
Generalized vaccinia occurred in otherwise healthy individuals and was characterized by the development, from 6 to 9 days after vaccination, of a generalized rash, sometimes covering the whole body. The prognosis was good. (2) Postvaccinial encephalitis, the most serious complication, occurred in two main forms. The first, seen most often in infants under 2 years of age, had a violent onset, characterized by convulsions. Recovery was often incomplete, leaving the patient with cerebral impairment and paralysis. The second form, seen most often in children older than 2 years, had an abrupt onset, with fever, vomiting, headache, and malaise, followed by such symptoms as loss of consciousness, amnesia, confusion, restlessness, convulsions and coma. The fatality rate was about 35%, with death usually occurring within a week. (2)
Historically, the live virus in the smallpox vaccine killed one or two people out of every 1 million who were vaccinated, and many others suffered debilitating side effects. The risk is heightened for those who have suffered from eczema or other skin diseases, as well as those whose immune systems have been compromised, such as HIV patients, transplant recipients and cancer patients. There are more people with those conditions today than there were 30 years ago, so mortality rates could be higher. This means that if every U.S. resident were vaccinated, 300 or more might die as a result. (2) Predictions are nearly impossible to make because of the high number of people with weakened immune systems and the number of undiagnosed cases of HIV and AIDS.
Conclusion
President Bush's plan has three phases. First, a half million members of the armed forces and another half million healthcare workers will get vaccinated. Next, 10 million emergency response workers (emergency-room staff, police, firefighters, and ambulance crews) will be vaccinated. The final phase, offering the vaccine to the public, will occur as soon as the FDA can license the vaccine, probably in 2004. In case of a smallpox bioterror attack, the vaccine would be made widely available without licensing. (3)
"Smallpox eradication was a global campaign, and populations were protected by vaccination in every country. However, during the campaign, different forms of smallpox occurred, and different vaccines and vaccination techniques were used. The duration of protection can be influenced by the potency of the vaccine and the inoculation procedure used. These factors make it difficult to give firm, precise estimates that are relevant today, where populations no longer have widespread immunity, either from vaccination or from having survived the disease (patients who survived smallpox were immune for life)." (2)
With President Bush making decisions to vaccinate Americans, it is evident that we should be concerned about an outbreak of smallpox. If smallpox is used as a bioweapon, emphasis must be placed on preventing epidemic spread. In doing so, it should be kept in mind that smallpox patients are not infectious during the early stage of the disease but become so from the first appearance of fever and remain so, though to a lesser degree, until all scabs have separated. Also, immunity develops rapidly after vaccination against smallpox. (1)
Isolation is essential to break the chain of transmission. In the case of a widespread outbreak, people should be advised to avoid crowded places and follow public health advice on precautions for personal protection. (2)
When the smallpox vaccination is licensed and offered voluntarily to the public, the urge to get vaccinated against the virus may be too hasty. According to the WHO, the risk of adverse events is sufficiently high that vaccination is not warranted if there is no or little real risk of exposure. (2) The decision whether or not to be vaccinated will not be easy for each individual. It would be beneficial to know how imminent the threat of smallpox bioterrorism is. However, we are not always granted that knowledge, due to both governmental restrictions and the potential for a completely unexpected attack.
That we have not immediately turned back to the notion of total vaccination demonstrates that the world has changed since the first Gulf War. This change is due not only to the rise of terrorism, but also to the fall of the Soviet Union, the spread of AIDS, and a fuller appreciation of medical risks. (1)
1)Smallpox: What you Need to Know, An informative site explaining how Bush's decisions will affect Americans; includes a great visual guide to the virus.
2)Communicable Disease Surveillance and Response, WHO Fact Sheet on Smallpox, More information than you know what to do with regarding Smallpox. Lists of facts and detailed information.
3)President Offers Smallpox Vaccine to All, An informative article about Bush's plan.
The Sun: A Silent Killer That We All Indulge In Name: Anastasia Date: 2002-12-13 13:04:49 Link to this Comment: 4085 |
Sunburn is the inflammation of the skin caused by actinic rays from the sun or artificial sources. Moderate exposure to ultraviolet radiation is followed by a red blush, but severe exposure may result in blisters, pain, and constitutional symptoms. As ultraviolet rays penetrate the skin, they break down collagen and elastin, the two main structural components of the skin, a process that results in wrinkles caused by sun damage (7). In addition, the sun damages the DNA of exposed skin cells. In response, cells release enzymes that excise the damaged parts of the DNA and encourage the production of replacement DNA. At the same time, the production of melanin increases, which darkens the skin. Melanin, the pigment that gives skin its color, acts as a barrier to further damage by absorbing ultraviolet light. A suntan results from the skin's attempt to protect itself (8). Light-skinned people and infants are extremely susceptible to ultraviolet rays because they lack sufficient skin pigmentation to protect the skin from continuous damage.
The ultraviolet radiation in sunlight is divided into three bands: UVA (320-400 nanometers), which can cause skin damage and may cause melanomatous skin cancer; UVB (280-320 nanometers), stronger radiation that increases in the summer and is a common cause of sunburn and of the most common skin cancers; and UVC (below 280 nanometers), the strongest and potentially most harmful form. Much UVB and most UVC radiation is absorbed by the ozone layer of the atmosphere before it can reach the earth's surface (8). The depletion of the ozone layer is increasing the amount of ultraviolet radiation that can pass through it. The radiation that does in fact pass through the ozone layer is mostly absorbed by window glass or impurities in the air.
Even though it is dangerous, sunlight has good qualities. A small amount of sunlight is necessary for good health. Vitamin D is produced by the action of ultraviolet radiation on ergosterol, a substance present in the human skin and in some lower organisms, like yeast (3). The treatment or prevention of some skin disorders often includes exposure of the body to natural or artificial ultraviolet light. The radiation also kills germs and is widely used to sterilize rooms, exposed body tissues, blood plasma, and vaccines.
Ultraviolet radiation can be detected by the fluorescence it induces in certain substances. It may also be detected by its photographic and ionizing effects. The long-wavelength, "soft" ultraviolet radiation, which lies just outside the visible spectrum, is often referred to as black light (3). Low-intensity sources of this radiation are often used in mineral prospecting and in conjunction with brightly colored fluorescent pigments to produce unusual lighting effects.
The knowledge of ultraviolet radiation and the effects it has on skin has greatly increased in recent years. Repeated sunburn is now considered a major risk factor for melanoma. Melanoma is the most virulent type of skin cancer and the type most likely to be fatal, and its incidence is increasing around the world. There also appears to be a hereditary factor in some cases. Although light-skinned people are the most susceptible, melanomas are also seen in dark-skinned people. Melanomas arise in melanocytes, the melanin-containing cells of the epidermal layer of the skin. In light-skinned people, melanomas appear most frequently on the trunk in men and on the arms or legs in women. In African Americans, melanomas appear most frequently on the hands and feet (1). It is recommended that people examine themselves regularly for any evidence of the characteristic changes in a mole that could raise a suspicion of melanoma. These include asymmetry of a mole, a mottled appearance, irregular or notched borders, and oozing or bleeding or a change in texture (2). Surgery performed before the melanoma has spread is the only effective treatment for melanoma.
Basal and squamous cell carcinomas are the most common types of cancer. Both arise from epithelial tissue. Light-skinned, blue-eyed people who do not tan well but who have had significant exposure to sun rays are at the highest risk. Both types usually occur on the face or other exposed areas. Basal cell carcinoma typically is seen as a raised, sometimes ulcerous nodule. The nodule may have a pearly appearance. It grows slowly and rarely spreads, but it can be locally destructive and disfiguring. Squamous cell carcinoma is typically seen as a painless lump that grows into a wart-like lesion. It may also arise in patches of red, scaly sun-damaged skin called actinic keratoses (1). If it spreads, it can lead to death.
Basal and squamous cell carcinomas are easily cured with appropriate treatment. The lesion is usually removed by scalpel excision, freezing, or micrographic surgery. Micrographic surgery is the most involved of these: thin slices of the lesion are removed and examined for cancerous cells under a microscope until the samples are clear. If the cancer arises in an area where surgery would be difficult or disfiguring, radiation therapy may be an option.
The National Weather Service's daily UV index predicts how long it would take a light-skinned American to get a sunburn if exposed, unprotected, to the noonday sun, given the geographical location and the local weather. It ranges from 1 (about 60 minutes before the skin will burn) to a high of 10 (about 10 minutes before the skin will burn) (7). Before going out into the sun, whether it is to walk the dog or lie out on the beach, it is important to know what degree of sun intensity you are up against. Also, no matter what you might be doing, if you are going to be in the sun it is essential to use some form of protection.
The easiest and most successful strategy for protection from the harmful effects of sunlight is avoidance. Studies of UV intensity have concluded that 30% of the total daily UV flux hits the earth between 11 AM and 1 PM (4). A good strategy would be to plan activities to avoid this peak exposure time. "A useful rule of thumb is that if your shadow is shorter than you, the risk of sunburn is substantial," (4).
A second extremely important skin damage prevention method is applying sunscreen to exposed body parts before sun exposure occurs. Sunscreens block or absorb UV light. Zinc oxide, the white opaque cream that most lifeguards wear on their noses, is an excellent form of sunscreen that blocks UV light entirely. The first commercial sunscreen was developed in 1928, and contained benzyl salicylate and benzyl cinnamate (10). The most common absorption chemical in sunscreen during the 1950's and 60's was PABA (para-aminobenzoic acid) (9). It has since fallen out of favor because it absorbs the relevant wavelengths of UV light less efficiently than more recent active ingredients (10). Today salicylates and cinnamates are found in most UVB protectants.
All sunscreens are labeled with an SPF. The SPF acts like a multiplying factor. If your skin would normally be fine after spending ten minutes in the sun, then with an SPF 10 sunscreen applied to any exposed body parts, your skin should be fine for about one hundred minutes in the sun. In order for sunscreen to work, it must be applied evenly, enough must be used, and it must stay on the skin. It should be applied about half an hour before going out into the sun, in order for it to bind to the skin (10).
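As a minimal Python sketch of this multiplying factor: the function below scales an assumed unprotected burn time by the SPF. The function name and example values are illustrative only, and real burn times vary with skin type and the UV index discussed above.
    # A minimal sketch of the SPF "multiplying factor" described above.
    # The unprotected burn time varies with skin type and UV index; the
    # ten-minute figure here just mirrors the example in the text.
    def protected_minutes(unprotected_minutes, spf):
        """Approximate minutes to burn with sunscreen of a given SPF applied properly."""
        return unprotected_minutes * spf

    print(protected_minutes(10, 10))   # 100 minutes, as in the example above
    print(protected_minutes(10, 30))   # 300 minutes with SPF 30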
Another strategy that should not be taken for granted is wearing protective clothing. Clothing is generally a good UV blocker, although lighter fabrics, worn in the summer in order to stay cooler, may not have a great protective value in comparison to heavier fabrics such as denim (5). Jevtic showed that a cotton T-shirt has an SPF value of around 15, which decreases when the fabric becomes wet. "Interestingly, a cotton T-shirt may actually increase in SPF value when it is washed a few times due to shrinkage in the hole size of the fabric mesh," (1). In order to test your clothes to see just how protective they really are, hold a clothing item up to a strong light source such as a light bulb. If you can see images through it, most likely the SPF value of the item is 15. If light gets through, but you can't really see through it, it probably has an SPF value between 15 and 50. If it completely blocks the light, it probably has an SPF value of over 50 (1). Hats are also a great option for protective clothing. They cover not only the head, but the neck as well, which gets almost continual sun exposure, even in the winter months. Hats have even been proven to reduce the risk of multiple skin cancers.
More than half of all new cancers are skin cancers. More than one million new cases of skin cancer will be diagnosed in the United States this year. About 80% of the new skin cancer cases will be basal cell carcinoma, while only 16% are squamous cell carcinoma and 4% are melanoma. An estimated 9,600 people will die of skin cancer this year, 7,400 from melanoma, and 2,200 from other skin cancers. One person dies of melanoma every hour. In 2002, 7,400 deaths will be attributed to melanoma. Melanoma is the fifth most common cancer in men and the sixth most common cancer in women (2). With statistics like these, which were taken from the American Cancer Society's 2002 Facts and Figures, I hope you think twice before going out into the sun unprotected.
In conclusion, it is true that the greater the skin pigmentation the better as far as sun protection goes. It does not follow that intentional tanning specifically to achieve an increase in protective pigmentation is the best sun protection strategy. Recent evidence suggests that tanning only occurs after DNA has been damaged. DNA damage is the trigger for the tanning response, meaning that a person does not begin to tan until after they have already caused damage to themselves. In addition, tanning with high intensity UVA, which is used in tanning parlors, is more harmful to the skin than tanning with natural sunlight (6). From this, one can conclude that there really is no safe level of sun exposure.
1)Sun Damage and Prevention, helpful hints on protection
2)Skin Cancer Fact Sheet, important skin cancer facts everyone should know
3)Hidden Sun Damage, things you might not know
4)Protect Yourself From the Sun, how to do it right
5)Think the Sun is Less Dangerous in Winter than in Summer? Think Again!, did you know
6)Tanning Salon Exposure Can Lead to Skin Cancer, the real truth about tanning salons
7)Malignant Melanoma Fact Sheet, what you need to know about melanoma
8)An Introduction to Skin Cancer, a basic overview on skin cancer
9)How Sunscreen Works, behind the science of sunscreen
10)Sunscreens & Sunburns, one helps, one hurts
Fire and Ice ... and Darkness Name: Laura Bang Date: 2002-12-13 16:37:30 Link to this Comment: 4089 |
"Astronomers have dark imaginations." (6) Throughout the past century, as new technology and new theories gave science a new view of space, astronomers became aware that their conceptions of the universe did not agree with new observations. When astronomers tried to determine the mass of the universe, they found conflicting answers. To solve this problem, scientists imagined a kind of matter that we cannot see, and they decided to call it "dark matter." (1) In addition, while trying to measure the rate at which the expansion of the universe was slowing down, scientists found that instead the universe's rate of expansion was speeding up. This led scientists to imagine a kind of "dark energy." (6) Dark matter and dark energy are both important in determining the mass and density of our universe-and these, in turn, are important in determining the fate of our universe. (5)
One of the most intriguing mysteries for astronomers today is that approximately 90% of our universe is invisible. Astronomers decided to call this invisible matter "dark matter." (1) It all began when astronomers were trying to determine the mass of galaxies. There are two possible methods for this calculation: a) by using the brightness of a galaxy to calculate the mass, or b) by looking at how fast the stars in a galaxy are moving, since the faster a galaxy is spinning, the more mass it contains. (1) When astronomers in the 1930s actually calculated these numbers using both of the above methods, however, their answers were different, even though both methods should have yielded the same answers. (1) This would not have been so much of a problem if the difference between the answers had been small, but the fact is, the answers were hugely different, leading astronomers to conclude that there must be a lot of "dark matter" in our universe that we simply cannot see, detectable only through its gravitational effects. (1)
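A minimal Python sketch of the second method: under a circular-orbit approximation, the mass enclosed within radius r is roughly v squared times r divided by G, so faster-moving stars imply more mass. This formula is standard dynamics rather than something taken from the essay's sources, and the velocity and radius below are illustrative values, not measurements.
    # Rough dynamical mass estimate from rotation speed: M ~ v^2 * r / G
    # (circular-orbit approximation). Example values are illustrative only.
    G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30              # solar mass in kg

    def dynamical_mass(v_m_per_s, r_m):
        """Mass (kg) needed to keep stars at radius r orbiting at speed v."""
        return v_m_per_s ** 2 * r_m / G

    v = 220e3                     # ~220 km/s, a typical spiral-galaxy rotation speed
    r = 15 * 3.086e19             # 15 kiloparsecs expressed in metres

    print(f"enclosed mass ~ {dynamical_mass(v, r) / M_SUN:.1e} solar masses")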
The idea of dark matter may seem incredible at first. How can nearly 90% of our universe be invisible to us? To help clear this up, close your eyes and picture a city at night. In this city you are looking at a skyscraper. Several of the windows are lit, but most of them are not since it is after normal office hours. You can only see the windows that are lit, yet you are sure of the existence of the other windows that make up the rest of the building. This is our universe: the lit windows are the stars and other matter that we can see, while the dark windows are the dark matter of our universe. (1)
What exactly is all this dark matter, anyway? There are three possible contributors to the dark matter problem: MACHOs, WIMPs, and neutrinos (strange names, but fascinating things).
MACHO stands for "Massive Astronomical Compact Halo Object." (2) MACHOs are "halo" objects because they exist on the outer rims -- or "halos" -- of galaxies. These are the "heavyweight" components of dark matter and they consist of "massive dark bodies such as planets, black holes, asteroids or failed stars (brown dwarfs)." (2) These objects do not give off their own light and are not near enough to reflect the light of light-emitting stars, so they appear "invisible" when viewed from large distances. (2) According to recent speculations, however, MACHOs could account for about 20% of the dark matter. (2) The rest, then, is left up to WIMPs and neutrinos.
"A WIMP is a Weakly Interacting Massive Particle." (3) Scientists have not found any actual WIMPs as of yet, however, but they think that there are millions of WIMPs flying around all the time. In spite of the fact that they are labeled "weak," scientists believe that they are actually quite strong-able to pass through solid objects without slowing down or stopping. Because of this ability to pass through solid objects, several research teams around the world are looking for WIMPs in underground laboratories. Why underground? Since most particles flying through the air are not able to pass through solids, scientists have a better chance of finding WIMPs underground where, after passing through the earth's rocky surface, they will in theory be able to see evidence of WIMPs without the interference of other particles. Right now, however, scientists are still looking for these elusive particles, which they speculate could account for about 90% of dark matter. (3)
The third possible contributors to the "missing matter" problem are neutrinos. A neutrino is "a tiny elementary particle, smaller than an atom with no electric charge and no mass. All this particle [does is] carry energy as it zip[s] along at the speed of light." (4) These particles have an interesting origin: a scientist by the name of Wolfgang Pauli made them up in order to make his calculations work out. After further speculation, however, scientists agreed that neutrinos do in fact exist. A 1998 study also discovered that a specific type of neutrino did in fact have a small mass, which allows this particle to be a possible contender for the missing matter of our universe. (4) Astronomers believe that an abundance of neutrinos could account for around 25% of dark matter. (4)
Astronomers are still working on fully understanding the dark matter of our universe, but while working on this problem they discovered another problem: the universe is still expanding. After so many billions of years of expansion since the Big Bang that created it, the universe should be slowing down, but it's not -- what's more, it's speeding up. Astronomers were mystified when they first discovered this, and in order to make sense of it they are now speculating on the existence of a kind of "dark energy." (6)
According to the laws of gravity and the gravitational pull exerted by each object in the universe, after expanding for billions of years the universe should slow down. Going along with this idea, astronomers attempted to calculate the rate at which the universe's expansion was slowing down. (6) Instead, while looking at the light produced by two distant supernovas, they found that the expansion rate is increasing. In order to make sense of this new information, astronomers came up with the idea of dark energy. (6)
The possibility of dark energy came as a surprise to scientists. Michael S. Turner of the University of Chicago summed it up: "For 70 years, we've been trying to measure the rate at which the universe slows down. We finally do it, and we find out it's speeding up." (6) Yet as with most new discoveries, finding out that they are wrong just adds to the scientists' fun. Andreas J. Albrecht of the University of California, Davis, stated, "This is the most exciting endeavor going on ... right now." (6) Scientists have only just begun to study dark energy, but they do know that dark energy plays a key role in how our universe will end and other such mysteries of deep space. (6)
Dark energy could be said to be a kind of "antigravity," but a more accurate way to describe it is to imagine it as "the flip side of ordinary gravity." (6) One property of dark energy would be a property called negative pressure. Something that has negative pressure would resist being stretched, "as a coiled spring does: pull on the spring and it pulls back." (6) Therefore, since normal gravity would pull things together, dark energy would push things outward, causing the increased expansion rate of the universe. (6)
There are two possibilities of what dark energy could be. One is called "vacuum energy" which has to do with complicated theories of physics and empty space (also called a vacuum; hence, the name, "vacuum energy"), and the other form is called "quintessence" and has to do with other dimensions contributing to dark energy. (6)
A brief look at the properties of vacuum energy reveals that it could be related to quantum theory. (6) Quantum theory holds that a vacuum "seethes with energy as pairs of particles and antiparticles pop in and out of existence." (6) In addition, the Russian astrophysicist Yakov B. Zeldovich found in 1967 that "the energy associated with this nothingness [a vacuum] has negative pressure." (6)
Quintessence, on the other hand, has to do with multiple dimensions. We live in four dimensions that we can perceive: the first through third dimensions which have to do with how we perceive depth and our world around us, plus the fourth dimension of space-time. (6) Andreas J. Albrecht and Constantinos Skordis of the University of California, Davis, proposed that "the repulsive force [of dark energy] may come from other, unseen dimensions or even from other universes beyond our own." (6)
Since all of this has yet to be confirmed, there are several current studies hoping to establish a) whether the universe is actually expanding at a faster rate, and b) whether vacuum energy or quintessence is responsible for this acceleration. (6) With their imaginings of dark matter and dark energy, and how they relate to the end of the universe, scientists seem quite morbid. So how exactly do dark matter and dark energy relate to the end of the universe?
There are three possibilities for the future of the universe. The first possibility is the "Big Freeze": the universe will continue expanding forever, which would eventually cause all the planets and stars to freeze because they would be so far from the life-giving heat of the Sun. The second possibility is the "Big Crunch": gravity will eventually pull the universe back together, resulting in all the planets and stars eventually colliding with each other. The third possibility is that the universe will reach equilibrium and come to a halt, neither expanding nor contracting. This all depends on how much of a force dark energy is exerting on the universe, and also on the density of the universe. (5)
In order to determine the density of the universe, astronomers need to determine how much dark matter there is. The symbol scientists use for the density of the universe is the last letter of the Greek alphabet, omega (which means "the end"). The critical density is omega=1; this is the density needed for the universe to come to equilibrium. If omega<1, then the universe will continue expanding toward the Big Freeze. If, on the other hand, omega>1, then gravity will pull the universe back inward ending in the Big Crunch. "So our destiny depends on our density" (5) -- it is interesting to note that "destiny" and "density" are anagrams of each other (isn't language awesome?). The most recent estimate of the universe's density is omega approximately equals 0.3, which means that, as far as we know right now, the universe is heading toward the Big Freeze. (5)
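The decision rule described in this paragraph is simple enough to write out directly. Below is a minimal Python sketch using the essay's threshold of omega = 1 and its quoted estimate of roughly 0.3, setting aside the dark-energy complication mentioned above.
    # A minimal sketch of the density-parameter decision rule described above.
    def fate_of_universe(omega):
        if omega < 1:
            return "Big Freeze: expansion continues forever"
        elif omega > 1:
            return "Big Crunch: gravity pulls everything back together"
        else:
            return "Equilibrium: expansion eventually comes to a halt"

    print(fate_of_universe(0.3))   # the essay's quoted estimate -> Big Freeze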
However, as it turns out, none of that really matters because in approximately four billion years the Sun will expand and obliterate Earth; and at about the same time, the nearby galaxy Andromeda will crash into our Milky Way galaxy. (5) So right now it seems that our "world will end in fire," "but if it had to perish twice" it looks as though "ice ... would suffice." (7)
Some say the world will end in fire,
Some say in ice.
From what I've tasted of desire
I hold with those who favor fire.
But if it had to perish twice,
I think I know enough of hate
To say that for destruction ice
Is also great
And would suffice.
~ Robert Frost (7)
1) Dark Matter
2) MACHOs
3) WIMPs
4) Neutrinos
The Science of Attraction Name: Mahjabeen Date: 2002-12-14 16:35:13 Link to this Comment: 4093 |
Attraction. Such a powerful word. There is something so incredibly attractive about this word or maybe it's just growing up knowing the significance of this word that makes the word itself so attractive. So what is attraction? Butterflies in the stomach, racing of the heart, goose bumps on the skin or shivers down the spine? This paper will look at the "scientific" factors that make a man feel attracted to a woman or vice versa.
What makes human attraction so fascinating are all the elements associated with it. It is not the simple mating ritual performed by most, if not all, living and reproducing creatures on planet earth. The incidents leading to human attraction will often lead to feelings of love and hence companionship, emotions limited mainly to the human realm.
Scientists are trying to break down this enormously complex phenomenon of attraction by coming up with a number of feasible reasons to explain why we are attracted to some people while we don't even spare a glance at others. Theories have included proportionate figures, facial symmetry, pheromones, upbringing and genetics. Some theories, such as men tending to feel attracted to women depending on their level of fertility, are quite chauvinistic but to some extent true. Such a theory can easily apply to men as well. Women, like men, might tend to go for men who are more fertile.
Evolutionary fitness might be a criterion for attractiveness. According to evolutionary psychologists, many of the traditional and universal qualities which we link to sex appeal are grounded not merely in assimilated social and cultural traditions, as we have been told, but are deeply rooted in our basic physiological make-up: our unconscious, innate drive to do our fair share for the survival of the species. If so, is it possible that the features which draw us to dates and mates reflect reproductive and parenting potential? If so, how might we differentiate between those which are inborn and those instilled by our cultures? (1)
"Judging beauty has a strong evolutionary component. You're looking at
another person and figuring out whether you want your children to carry that person's
genes," says Devendra Singh, a professor of psychology at the University of Texas. The scientific properties of attraction (to whatever extent they are involved) can be explained by the simple will to produce viable offspring, also know as healthy kids.
Beyond this underlying principle of attraction, one begins to wonder how, and on what level, one can judge the fitness of another person. Certainly, a person smitten for the first time at a bar doesn't ask for a genetic sequence and specifics about that special someone's immune system before approaching him or her. Yet some of that information is received and interpreted at a sub-conscious level. (2)
Lust leads to attraction. Lust is governed by testosterone and estrogen, says anthropologist Helen Fisher of Rutgers University. Testosterone is not confined only to men. It has also been shown to play a major role in the sex drive of women. Although the reproductive parts are often ascribed credit (or blame) for human sexual attraction, many scientists believe that sexual attraction begins in a pea-sized structure called the hypothalamus deep in the primitive part of the human brain. This tiny bundle of nerves sets off an exciting chain of events when one person perceives another to be sexually attractive. The hypothalamus instantly notifies the pituitary gland, which rushes hormones to the sex glands. The sex glands in turn promptly react by producing estrogen, progesterone, and testosterone. Within seconds, the heart pounds, muscles tense, and he or she feels dizzy, light-headed, and the tingle of sexual arousal. This chemical-driven high induces moods which swing from omnipotence and optimism to anxiety and pining. A malfunctioning hypothalamus can have bizarre effects on one's romantic love life, including irrational and distorted romantic choices, obsessions, idealization, and separation anxiety. The height of romantic passion creates illusions of well-being, feelings of possessiveness, and happily-ever-after fantasies within the psyche of the new lover. (3)
Fisher believes the volatile phase of romantic attraction is caused by changes in signaling within the brain involving a group of neurotransmitters called monoamines, which include dopamine, which is activated by cocaine and nicotine; norepinephrine, or adrenaline, which makes the heart race and the body sweat; and serotonin, which can actually send us temporarily insane. Next come the hormones oxytocin and vasopressin, which forge the bonds of attraction by bringing attachment into the picture. Oxytocin is released by the hypothalamus gland during childbirth and also helps the breast express milk. It helps cement the strong bond between mother and child. It is also released by both sexes during orgasm and it is thought that it promotes bonding when adults are intimate. The theory goes that the more sex a couple has, the deeper their bond becomes. Vasopressin is an important controller of the kidney and its role in long-term relationships was discovered when scientists looked at the prairie vole. In prairie vole society, sex is the prelude to a long-term pair bonding of a male and female. Prairie voles indulge in far more sex than is strictly necessary for the purposes of reproduction. It was thought that the two hormones, vasopressin and oxytocin, released after mating, could forge this bond. In an experiment, male prairie voles were given a drug that suppresses the effect of vasopressin. The bond with their partner deteriorated immediately as they lost their devotion and failed to protect their partner from new suitors. (4)
When it comes to choosing a partner, are we at the mercy of our subconscious? Researchers studying the science of attraction draw on evolutionary theory to explain the way humans pick partners. It is to our advantage to mate with somebody with the best possible genes. These will then be passed on to our children, ensuring that we have healthy kids, who will pass our own genes on for generations to come.
When we look at a potential mate, we are assessing whether we would like our children to have their genes. There are two ways of doing this that are currently being studied, pheromones and appearance. (5)
Human pheromones are a hot topic in research. They are odorless chemicals detected by an organ in the nose. Pheromones are known to trigger physical responses including sexual arousal and defensive behavior in many species of insects, fish and animals. There has long been speculation that humans may also use these chemicals to communicate instinctive urges. Women living together often synchronize their menstrual cycles because they secrete an odorless chemical in underarm sweat.
Pheromones are already well understood in other mammals, especially rodents. These animals possess something called a 'vomeronasal organ' (or VNO) inside their noses. They use it to detect pheromones in the urine of other rats and use this extra sense to understand social relationships, identify the sex of fellow rats and find a mate.
In human embryos these organs exist but they appear to perform no function after birth. Now, scientists at Rockefeller University in New York and Yale University in Connecticut believe they have found a gene which may create pheromone receptors. A receptor is an area on a cell that binds to specific molecules. Called V1RL1, the gene resembles no other type of mammalian gene and bears a strong similarity to those thought to create pheromone receptors in rats and mice. (6)
In 1995, Claus Wedekind of the University of Bern in Switzerland asked a group of women to smell some unwashed T-shirts worn by different men. What he discovered was that women consistently preferred the smell of men whose immune systems were different from their own. This parallels what happens with rodents, who check out how resistant their partners are to disease by sniffing their pheromones. So it seems we are also at the mercy of our lover's pheromones, just like rats.
At the University of Chicago, Dr Martha McClintock has shown in her own sweaty T-shirt study that what women want most is a man who smells similar to their fathers. Scientists suggest that a woman being attracted to her father's genes makes sense. A man with these genes would be similar enough that her offspring would get a tried and tested immune system. On the other hand, he would be different enough to ensure a wide range of genes for immunity. (7)
Alarmingly, scientists have found that the oral contraceptive pill could stop a woman from producing pheromones and undermine her ability to pick up the right chemical signals from men, leading women to choose men with whom they cannot produce children. Scientists believe pheromones may help people choose biologically compatible mates. (8)
Appearance could be another indicator of the quality of a person's genes. Research suggests that there are certain things we all look for, even if we don't know it.
It is thought that asymmetrical features are a sign of underlying genetic problems. Numerous studies in humans have shown that men in particular go for women with symmetrical faces. The preference in women for symmetry is not quite so pronounced. Women are also looking for a man's ability to offer food and protection. This might not be indicated in their genes, but in their rank and status, for example. (9)
Consistent with the evolutionary theory, many of these sex-stereotypical traits reflect what visually appear to be signs of reproductive potential. The small jaw preferred in females and the heavy jaw and chin preferred in males reveal the effects of the female hormone estrogen and the male hormone androgen, respectively. With evolutionary theory in mind, it should not be surprising that men find visual cues to attractiveness more relevant in selecting a mate than women do.
Studies have shown men prefer women with a waist-to-hip ratio of 0.7; this applies whatever the woman's overall weight. This ratio would seem to make sense as an indicator of a woman's reproductive health. As women age, their waists tend to become less pronounced as they put on fat around the stomach. This coincides with their becoming less fertile. Research shows that the "hourglass" figure is preferred in a woman over any other body shape.
Interestingly, scientists have found that female reproductive capacity shows a positive correlation with a sharp contrast between waist and hips. Preferred female facial features (wide-set large eyes, a small nose and jaw) are imitative of youth and untapped reproductive potential. Similarly, the muscular, angular T-shaped male figure, assertive behaviors, and deep voice most universally preferred by women are visually indicative of higher levels of the male hormone testosterone. (10)
It's interesting how many married couples look quite similar. Studies have shown that more than anything we prefer somebody who looks just like we do. Research has uncovered that there is a correlation in couples between their lung volumes, middle finger lengths, ear lobe lengths, overall ear size, neck and wrist circumferences and metabolic rates. The latest studies indicate that what people really, really want is a mate that looks like their parents. Women are after a man who is like their father and men want to be able to see their own mother in the woman of their dreams.
At the University of St Andrews in Scotland, cognitive psychologist David Perrett studies what makes faces attractive. He has developed a computerized morphing system that can endlessly adjust faces to suit his needs. Students in his experiments are left to decide which face they fancy the most. Perrett has taken images of students' own faces and morphed them into the opposite sex. Of all the faces on offer, this seems to be the face that subjects will always prefer. They can't recognize it as their own; they just know they like it. Perrett suggests that we find our own faces attractive because they remind us of the faces we looked at constantly in our early childhood years: Mom and Dad. Even the pheromone studies are now showing a preference for our parents' characteristics, where we prefer smells which remind us of our parents.
Perhaps such a genetic affinity for people with similar facial structures, or with features resembling those of our parents, explains why the majority of humans tend to stick to their own races, cultures and backgrounds. I would also conclude that attraction is not only an integration of chemistry and genetics but also of feelings and emotions, which can sometimes be quite inexplicable. While science attempts to answer all these questions, it must be taken into account that there will still be queries to which there might not be a scientific explanation. It is not surprising at all that such extensive research has been done on attraction, since it is one of the governing factors of everyday life. Attraction, however, should not be confused with love, since love takes up an entirely different though related dimension. While elements such as pheromones, facial symmetry and genetics may be able to explain attraction, much more research must be conducted if the emotion of love is to be explained, and even then researchers might find themselves quite empty-handed. After all, love is not based on physical attraction alone; mental and emotional attraction must also be considered if research is to be conducted in the field of love. While most scientists deal with heterosexual attraction, the field of homosexuality or bisexuality and the circumstances behind it is still open to interpretation and discussion.
Nevertheless, ongoing research on the chemistry and biology behind human attraction and love will continue to make new discoveries and shed some light on why the boy next door suddenly seems more appealing than Hugh Jackman.
(1) Evolutionary theory of Sexual Attraction
(3) Evolutionary theory of Sexual Attraction
(4) Manipulating the Chemistry of Attraction
(6) Secrets of Human Attraction
(8) The Magic of Sexual Attraction
(9) What makes you fancy someone?
(10) Evolutionary theory of Sexual Attraction
Pheromones Name: Elizabeth Date: 2002-12-14 20:57:18 Link to this Comment: 4094 |
Often, animals wish to send messages to one another without making a sound.
One method of transmission for such messages is through pheromones, strong chemical
signals received by nerve cells in the nose and interpreted by the vomeronasal organ,
another structure within the nose (2). Pheromones are detected only by members of the
same species, and the signals are interpreted by the hypothalamus region of the brain. The
presence of pheromones has been confirmed in many types of insects and other animals,
but researchers are still unsure of whether or not mammals, and in particular humans,
transmit or respond to pheromones. Also, researchers are still working to pinpoint the
exact function of pheromones in humans. Although also linked to communications
regarding territory or food location, scientists believe pheromones in most animals are
primarily linked to sexual attraction. Therefore, the discovery of certain types of
pheromones in humans has allowed a huge market to develop based on these mysterious
chemicals. Such products promise to make the wearer more attractive to the opposite sex.
However, as the research on pheromones is relatively young, it remains to be seen
whether one can manipulate their sexual attractiveness with the aid of bottled
pheromones.
The discovery of the first type of pheromone took place in 1956. A German team
of researchers identified and isolated a powerful sexual attractant in female silkworms
which caused curious effects in the males of the species. When sensing the presence of
the pheromone, named bombykol after the species name of the silkworm moth, a male
moth would begin a frenzied mating dance. Researchers determined that this pheromone,
although odorless, must communicate a strong signal of sexual availability from the
female to the male, thus initiating reproduction. Scientists studied the chemical makeup
of bombykol extensively, determining that the substance consists of a primary alcohol,
unlike other moth pheromones, which were chemically similar to fatty acids. Females
have a reserve of the chemicals which produce the bombykol pheromone in their sex
glands and, when hoping to attract a mate, they release part of their reserve (1).
Pheromones are extremely powerful. In fact, researcher Lewis Thomas estimated "it has
been soberly calculated that if a single female moth were to release all the bombykol in
her sac in a single spray, all at once, she could theoretically attract a trillion males in the
instant" (3). Moths are not the only creatures to communicate by pheromones.
Pheromone secretions of the same compound produced by silkworm moths have been
found in samples of elephant urine. These pheromones only appear in a female
elephant's urine just before ovulation, announcing her fertility to the surrounding males.
Of course, human scientists were curious to see if such a powerful sexual
attractant was part of their reproductive ritual. If indeed it was, many hoped to
manipulate the effects of pheromones to improve their love lives. The first evidence of
pheromones in humans came in 1971, courtesy of a ground breaking study by
biopsychologist Martha K. McClintock (3). McClintock ran a study of women living in
college dormitories, through which she discovered that groups of women living together
gradually develop a synchronized menstrual cycle. Some have theorized that this
synchronism was intended to foster genetic diversity, as one man would be unable to
impregnate every woman in a prehistoric tribe if those women were fertile only at the
same time. During a series of follow-up tests to this study, McClintock attempted to
determine whether this curious effect was triggered by pheromones, and if so, whether
pheromones could affect the length of a woman's ovulation and menstrual cycle. In
order to do so, McClintock devised a complicated experiment which required test
subjects to wear a gauze pad in their armpit. From these pads, McClintock harvested
perspiration, masked its odor, and dabbed the solution under other test subjects' noses.
The results showed that this mixture did indeed affect the menstruation cycle of the
subject, but only if administered within a few days prior to ovulation. If the perspiration
came from a woman who had yet to ovulate that month, the solution shortened the
subjects' period by a couple of days. However, if the sample came from an ovulating
woman, the test subjects' period was delayed by a day or so. The control group exhibited
no change. This study seemed to prove the existence of human pheromones, but left
many questions unanswered, especially regarding the chemical makeup of human
pheromones, the function of such chemical messages, and whether or not males emit
sexual signals to their prospective mates (4).
A large cosmetics industry has developed with hopes of cashing in on the
speculation that pheromones act as sexual attractants in humans. Pheromone products
claim to enhance one's popularity with members of the opposite sex by increasing the
amount of pheromones one emits with a simple topical application of concentrated
chemicals whose makeup is similar to the chemical structure of animal pheromones (5).
Encouraged by studies which hypothesize that those who emit an abundance of "sex
pheromones" tend to be more attractive to members of the opposite sex, consumers buy
these products as a new, biological approach to the age old quest to attract mates. Man
has used scent as an aphrodisiac for centuries, but this market becomes a little trickier
when pheromones are involved, due to their inherent lack of scent. Past research has
proven that humans react to strong and distinctive chemical hormones called
androstenones, present in both genders, but primarily associated with males. In turn,
many popular fragrances, such as musk and other perfumes, derive their scents in part
from the scents of androstenones. However, these compounds, unlike pheromones,
derive their power from an identifiable odor (6). Consumers are less likely to buy an
odorless attractant, unless significant scientific research solidifies its value. Nevertheless,
many products have appeared on the market which claim to use pheromones to attract
lovers. Most pheromone products aimed towards attracting men contain the compound
Androstenol, while pheromone products for attracting females contain Andtrostenol.
The effectiveness of such products is debatable, but their cost is uniformly high.
Although no side effects have been recorded from the use of topical pheromones, making
pheromone products seem safe enough for casual human use, it is undoubtedly a huge
waste of money to buy a product which may or may not deliver its intended effect.
Unlike insects and other animals, who exhibit highly predictable behavior, humans are
much less uniform in their reactions to stimuli. There is slight evidence that all humans
react in varying degrees to the presence of pheromones. A low reaction to pheromones
may be due to malfunctions in the vomeronasal organ. Effectiveness also
depends on the concentration of the pheromone in the solution. Those products which
boast a higher concentration of pheromones have a better chance of attracting those
members of the opposite sex who react strongly to pheromones. However, the products
with the highest concentration of pheromones also come with the highest price tag.
Although great advances have been made in the study of pheromones, it is still
too early to market effective pheromone products commercially. Before such products
can provide reliable results, scientists must pinpoint the chemical structure of human
pheromones and identify their exact function in human beings. Researchers have been
able to discover such information regarding pheromones in insects and other animals, so,
given the proper amounts of time and funding, they should be able to do the same for
humans. While humans often exhibit unique behavior, as compared to the fairly
uniform reaction patterns of insects, which makes it difficult to predict the exact reactions
of every human to pheromones, it would not be impossible for scientists to devise a
theory regarding the likely outcome of exposure to certain pheromones. Such a discovery
would help regulate the cosmetic pheromone industry, which in turn would make their
products more useful for humans.
1)About Pheromones
2)Study finds proof that humans react to pheromones
3)Secret Sense in the Human Nose: Pheromones and Mammals
4)Nailing Down Pheromones in Humans
5)Pheromones (Human Pheromones)
6)Scent as Aphrodisiacs
Lou Gehrig's Disease Name: Kathryn Ba Date: 2002-12-15 10:01:29 Link to this Comment: 4098 |
Most people have a "clumsy day" every now and then, when no matter how hard the person tries, he or she cannot avoid tripping or dropping things. What if "clumsy days" happened on a regular basis, and in addition to dropping and tripping over everything, the person experienced severe muscle fatigue, cramping, slurred speech, and/or periods of uncontrollable laughing or crying? This situation merits a visit to the doctor's office, and for a little over 5,600 people in the United States every year, a diagnosis of Amyotrophic Lateral Sclerosis (ALS), commonly known as Lou Gehrig's disease. The baseball player brought national attention to the disease when he was diagnosed with it in 1939 (1). This essay will examine the symptoms associated with ALS, the three types of ALS, and possible causes and treatment options for the disease. A discussion will follow, examining the applicability and actual benefits of current treatment options.
"Amyotrophic" literally means "no muscle nourishment." "Lateral" refers to the area in a person's spinal cord where portions of the nerve cells that nourish the muscles are located, and "sclerosis" defines the scarring or hardening in this region. The muscles are not nourished because as motor neurons degenerate, they cannot send impulses to the muscle fibers that normally result in muscle movement. If muscles do not receive messages to function they begin to waste away, or atrophy, leading to a variety of complications, paralysis, and ultimately death. Because ALS only attacks motor neurons, the sense of sight, tough, hearing, taste and smell are not effected. Many people are not impaired in their minds and thoughts, which remain sharp despite the progressive degeneration of the body (1). All patients diagnosed with ALS eventually die, although the mortality rate differs. Half of all ALS patients die within 18 months of diagnosis, 80% die within five years of diagnosis, and only 10% live more than ten years. Patients with ALS have a higher chance of surviving for five years if they are diagnosed between the ages of 20 and 40. The average age of onset is 55 years (2).
Many complications arise because of an ALS patient's immobility. These include, but are not limited to: joint stiffness and pain, shortening of muscles or connective tissue around the joints that restricts their normal range of motion, pressure sores or ulcers, poor circulation, urinary tract infections, constipation, and aggravation of respiratory problems. Another symptom, depression, is also very common. People suffering from ALS often are homebound or embarrassed about their disease and become socially isolated. In addition, one's response to immobility often includes symptoms of depression, such as feelings of despair, irritability, anger, and constant sadness (3).
Classic ALS accounts for 90% to 95% of ALS cases in the United States. This type of ALS is called sporadic (SALS) because it cannot be traced to ancestors with the disease. Familial ALS (FALS), which refers to the occurrence of the disease more than once in a family lineage, accounts for 5% to 10% of all cases. The third type, Guamanian ALS, was observed in the 1950's when an extremely high incidence of ALS was found in Guam and the Trust Territories of the Pacific (4).
The cause of all forms of ALS remains elusive, although a gene has been identified that accounts for only about 20% of FALS patients, which works out to roughly 2% of all ALS cases. Several theories attempt to explain what causes this disease for the remaining 98% of ALS patients, and glutamate excitotoxicity is one of the most popular. This theory suggests that an excess of glutamate, a naturally occurring chemical in the brain that accounts for approximately 30% of all neurotransmissions, triggers a series of events that ultimately ends in cell death. Excess glutamate is toxic to neurons because it over-stimulates specific neuronal metabolic functions. When this occurs, motor neurons take in too much calcium, which disrupts many cellular functions and leads to cell death. One drug, called riluzole, has been developed to help ALS patients by reducing the amount of glutamate released when nerve cells signal (2).
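To make the percentages concrete, here is a minimal arithmetic sketch in Python; the figures are simply the ones quoted above, and the variable names are mine.

# Rough consistency check using the figures quoted above: if the identified
# gene accounts for ~20% of familial ALS, and familial ALS is 5-10% of all
# cases, what share of all ALS does that gene explain?
fals_share_low, fals_share_high = 0.05, 0.10   # FALS as a fraction of all ALS
gene_share_of_fals = 0.20                      # mutation found in ~20% of FALS

low = gene_share_of_fals * fals_share_low      # 0.01 -> about 1%
high = gene_share_of_fals * fals_share_high    # 0.02 -> about 2%
print(f"the gene explains roughly {low:.0%} to {high:.0%} of all ALS cases")
# which is why the essay can speak of the "remaining 98%" of patients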
A newly identified mutation involving a protein called EAAT2, which normally deactivates and recycles glutamate, may contribute to or cause almost half of SALS cases. Researchers first found that many ALS patients have little or no EAAT2 in certain areas of the brain and spinal cord, causing an excess of glutamate that leads to the death of motor neurons. Further study indicated that the mutation occurred when the nerve cells were transcribing the DNA code for EAAT2 into RNA. Problems arose because the step in which useless bits of the code are cut out and the active parts are pasted together occurred randomly instead of at specific spots. This abnormal version of RNA either "produced a useless version of EAAT2 or suppressed production of normal EAAT2." Over half of the ALS patients in the researchers' study had this mutation, and it occurred only in areas where motor neurons were dying: in the spine and the muscle-control areas of the brain (5).
Damage to an enzyme called superoxide dismutase (SOD1), encoded on chromosome 21, which normally detoxifies free radicals, may result in FALS. Free radicals are highly reactive, destructive molecules that damage elements of a cell's membrane, proteins, or genetic material. Normally functioning SOD1 breaks down free radicals, but when it becomes damaged, it is no longer able to perform this function. It may malfunction as a result of a genetic mutation or because of the chemical environment of the nerve cells (2).
Another theory suggests that the existence of large clumps of proteins, called protein aggregates, on the motor neurons of ALS patients may cause the disease. Protein aggregates have been found both in patients with SALS and FALS, and in animals that have been genetically engineered to have a mutation in the SOD1 gene. It is not clear if the excess protein causes motor neurons to die or if it is the "byproduct from overwhelmed cells attempting to repair incorrectly folded proteins" (2).
In addition to the theories about various internal factors that may lead to ALS, one theory contends that exposure to certain environmental toxins contributes to the onset of ALS. These may include: exposure to agricultural chemicals; environmental lead and manganese; brain, spinal cord, and peripheral trauma; dietary deficiencies or excesses; damage to DNA; and exposure to electric shock. Airline pilots and electrical utility workers have also been found to have a higher incidence of ALS. Conflicting results and the failure to reproduce these types of studies have led to criticism of this theory (2). One might wonder if this theory could lead the general public to develop an "ALS phobia." For example, one such popular "phobia" is that using deodorant will cause cancer. Theories that are not supported by concrete data confirmed by numerous scientific studies are not only a waste of time to consider, but reckless in that they promote unfounded fears. It would be unfortunate if potential airline pilots and electrical utility workers chose another profession in order to avoid the onset of ALS. One must also keep in mind that even though a correlation may exist between certain environmental exposures and ALS, correlation does not mean that those toxins cause the disease.
The primary treatment options for ALS involve treating the complications associated with the disease. The drug riluzole is also used, and it has been shown to prolong the survival of ALS patients (1). More recently, gene therapy has been explored to delay the onset of ALS. In a study using mice genetically engineered to develop FALS, scientists found that a gene called Bcl-2 may delay the onset of the disease. Two strains of mice were bred, one carrying mutations that produced FALS and the other carrying Bcl-2, which is known to protect against cell death. The offspring of these strains with both ALS and Bcl-2 developed the disease significantly later in life, and actually lived longer, than offspring that inherited only ALS. Offspring that had Bcl-2, regardless of whether they had ALS, had healthier motor neurons than offspring without it. This study suggests that gene therapy with Bcl-2 may be one possible treatment option for ALS patients (6).
Although advances in determining the cause of ALS and in finding possible treatment options are promising, one must also use caution. For example, before believing that gene therapy with Bcl-2 will be an effective treatment option, a clinical study in which mice with ALS receive gene therapy must be conducted. If gene therapy is effective in delaying ALS in mice, a clinical study must then be completed with humans. It is possible that Bcl-2 may not delay ALS as effectively in humans as in mice. The reality remains that even if Bcl-2 could delay ALS in human patients, there is still no cure for this disease.
In order for researchers to develop an effective treatment for ALS patients, further data must be collected to determine what causes this disease. Even though it is important to understand that damage to SOD1 may cause FALS, one must keep in mind that damage to this enzyme accounts for only 2% of ALS cases (1). The cause of ALS for the remaining 98% of patients with this disease must also be determined. SOD1 alone is unlikely to provide the link necessary to discover the etiology of ALS for the majority of patients; therefore researchers must continue to search for other explanations in light of this finding. Perhaps the existing theories about the cause of ALS, considered collectively, might provide a solid foundation from which to reach a more valid explanation of the disease. If and when an explanation is found, researchers will be better equipped to find a treatment and possible cure. Until then, patients suffering from ALS and their families must remain optimistic about finding an explanation of the cause, possible treatments, and cures for this disease.
1)The ALS Association's Website, general information about ALS
2)The ALS Survival Guide, a thorough resource about ALS
3)Preventing and Treating Complications of Immobility, an article by Pam A. Cazzolli, R.N., on the ALS Network Website.
4)Amyotrophic Lateral Sclerosis (ALS or "Lou Gehrig's Disease") , an article from focus on depression.com
5)"Gene-Reading Problem Linked to Lou Gehrig's Disease" , an article from docguide.com
6)"Science Gene Therapy in Mice Delays Onset of Lou Gehrig's Disease (ALS)", an article from docguide.com
Lunar Menstruation Name: Catherine Date: 2002-12-15 13:57:26 Link to this Comment: 4101 |
Every woman goes through the process of menstruation, yet few know exactly what is going on until they want to become pregnant. It seems simple enough to say that each woman has her own unique experience of her cycle, but it is not that simple: the topic is a complicated one, full of unexplained coincidences. One such coincidence is that women's menstrual cycles may have strong ties to the moon and its phases, and this in turn may offer some insight into theories of evolution.
What matters for this paper is that, for most women, menstruation occurs roughly every twenty-eight days. This is when "blood and other products of the disintegration of the endometrium are discharged from the uterus." (2) Fourteen to sixteen days before the onset of a period, ovulation has already begun and fertilization of an egg can occur, but by the time menstruation begins, women usually cannot conceive anymore.
Interestingly enough, I had read a few years ago in a book that a group of women were surveyed and asked which type of men they preferred during various times of their female cycles. A majority of women seemed to favor more feminine "pretty boys" during most times in their cycles, but during full moons, they were more inclined to choose "masculine men". This seemed to be a bizarre finding and coincidence, until I recently decided to do some extensive research on the whole topic.
I investigated online for any information about the moon relating to women's menstrual cycles, thinking that the book's findings could possibly have something to do with women's anatomy. What I found were many ideas and theories, dating back to ancient civilizations. The most straightforward relation between women and the moon was that "The two [women's and moon's] cycles last for roughly the same amount of time." (3) But in more complicated conjectures, many made claims that
"In the absence of man-made light, a woman's menstrual cycle will synchronize with the phases of the moon. When this happens, ovulation occurs when the moon is full and menstruation starts with the start of a new moon." (7)
"Think back to when we lived tribally thousands of years ago with no artificial lighting. In these natural surroundings it was highly probably that women ovulated together on the full moon and bled on the dark moon. Thus they usually gave birth at the Full moon, creating more individuals with this particular lunar fertility blueprint." (12)
"But of course these days, we live in the world of artificial light." (12) There seemed to be evidence of analogous ideas about women and the moon across all borders. "Throughout all cultures, the magic of creation resides in the blood women gave forth in apparent harmony with the moon, and which sometimes stayed inside to create a baby." (6)
"It has been shown that calendar consciousness developed first in women because their natural body rhythms corresponded to observations of the moon. Chinese women established a lunar calendar 3000 years ago. Mayan women understood the great Maya calendar was based on menstrual cycles. Romans called the calculation of time menstruation, meaning knowledge of the menses. In Gaelic, menstruation and calendar are the same word." (6)
There are many more, but it is apparent just from this sample that the correlation was too strong for the connection between the two to be merely coincidental. It seems, then, that "Woman is fertile during certain phases of the moon," (1) as long as there is no artificial light around. Even now,
"... the body seems to prefer that it stay in sync with the Moon's lunar synodic cycleeven to the point that it will alter its own menstrual cycle in order to do so." (8)
"... women tend to menstruate in the full of the moon with a diminishing likelihood of menses onset as distance from full moon increases." (4)
But what led people to this conclusion about artificial lighting affecting women's menstrual cycles?
"In the days before electricity, women's bodies were influenced by the amount of moonlight we saw. Just as sunlight and moonlight affect plants and animals, our hormones were triggered by levels of moonlight. And, all women cycled together. Today, with artificial light everywhere, day and night, our cycles no longer correspond to the moon." (6)
That sounds entirely possible, because we humans must have strong ties to our environment. At the same time, it seems nearly impossible to test because of other factors. Women are exposed to artificial lighting all of the time, and even when they are not, they are influenced by other women around them.
"Women who live together experience a synchronization of menstrual cycles as a result of being exposed to chemicals contained in their sweat. A study found that if the sweat of one woman was placed under the nose of another woman on a regular basis, their periods would synchronize within three months, even if the women did not physically meet or come near each other." (7)
Hormones, underweight and overweight problems, and stress can also influence menstruation. The theory may have been applicable in the days before man-made lighting and weight-conscious fashion trends, but now it is nearly untestable.
Out of all of this information, one person supposed that perhaps
"the human female, being more intelligent and perhaps aware of her environment, adapted to a cycle close to that of the moon, while lower animals did not." (10)
"The corresponding estrus cycles of some other mammals are twenty-eight days for opossums, eleven days for guinea pigs, sixteen to seventeen days for sheep, twenty to twenty-two days for sows, twenty-one days for cows and mares, twenty-four to twenty-six days for macaque monkeys, thirty-seven days for chimpanzees, and only five days for rats and mice." (10)
I have a slightly different concept to introduce. Merging all of the information I gathered, perhaps there is a broader evolutionary explanation for why human females cycle the way they do. If women really are more attracted to "manlier men" during the full moon, when they are naturally supposed to conceive best, perhaps this is because of evolution. In accordance with the idea of "survival of the fittest," women would choose the most capable and strongest men, gaining better support and protection from their mates during pregnancy. Their offspring would also be more likely to inherit their fathers' innate traits and to learn the characteristics that would help them survive in a world of fierce competition. With everything tying together so neatly, this seems entirely possible.
The human race has come a long way since the time of ancient civilizations, when unexplained ideas were set aside as magic and the supernatural. Yet it seems that artificial lighting, dietary trends, and other modern developments may have diminished the advantages that "survival of the fittest" once gave the human population. Perhaps our technology and inventions, which tamper with our natural biological rhythms, are not doing us as much of a service as we believe they are. The lunar-menstrual cycle idea is only one of many areas which illustrate this point.
2) Hormones of the Reproductive System: Females
3) La Luna
4) Lunar Influences on the Reproductive Cycle in Women
5) Menarche, Menstruation, Menopause
6) Menstrual Cycles: What Really Happens In Those 28 Days?!
7) Menstruation and Sex
8) Pregnancy, Conception, Birth Timing and the Moon
Bubonic Plague Name: Diana Fern Date: 2002-12-16 13:55:08 Link to this Comment: 4111 |
I recall sitting on the couch at my home in New Mexico, back in high school not so long ago, turning on the local news, and hearing "In tonight's news, a Taos man was diagnosed with the bubonic plague today, and is in critical condition." As the news reporter presented the figures for how many plague cases had occurred that year, I thought to myself: "GOOD GOD! Bubonic plague, in this day and age? In my state?" I began to worry, thinking about each dead mouse I had to extract from my dog's mouth or sweep out of the garage. Bubonic plague conjured images in my head of entire European villages succumbing to a gruesome death during the Dark Ages, as I think it does in most people's minds.
Although deaths associated with bubonic plague are rare today, it still exists, both in the third world and in the southwestern United States. Despite the fact that bubonic plague takes fewer lives than many other, airborne diseases, no single epidemic has affected the human imagination as did and does the bubonic plague, or "black death" as it was called. This epidemic spawned witch trials and religious fanaticism. It was associated with the ending of the world in 14th-century Europe, yet it is nonexistent in Europe today. What caused the plague? What exactly is the plague? How did this worldwide epidemic become mostly eradicated? Could the plague be reintroduced on a mass level by bio-warfare? As it has for hundreds of years, the bubonic plague continues to try the medical community as it reappears around the world.
The bubonic plague has been prominent in historical memory for hundreds of years. Most notable in the Western world was the spread of plague in Europe around 1349. Bubonic plague, or "black death," originated in Asia; it is speculated that the plague traveled along the Silk Road in China, with caravans carrying it to Europe (1).
The horrific result of the plague was the decimation of two thirds of Europe's population in the period of two years. With the panic, chaos, and desperation that came with the widespread epidemic came a variety of proposed explanations for its existence. Among the more unfortunate explanations was the view of the plague as a punishment from God. Women, Jews, and lepers were cast as the harbingers of the plague because of their supposedly poor standing in God's eyes, and these people suffered not only the plague but also the accusations and violence of their fellow countrymen. The plague inspired whole works of literature and art, and even saints, as the European population tried to make some sense of the horrible deaths that permeated life and society; it inflamed the imagination and tried the limits of medieval science and medicine. Only in the sixteenth century was the plague correlated with sanitation hazards, finally lending some discredit to the theory of divine punishment.
With the awareness that bubonic plague was spread by unsanitary conditions came the slow dissipation of the plague. Rats were associated with plague even from the earliest times, as villagers noted that large numbers of rats were found dead, followed by an outbreak among humans. The works of Yersin and Kitasato were essential in the discovery that fleas, engorged with an infected rat's blood, would transfer the plague to a human host, spreading the disease. Yersin is cited as the discoverer of the bacterium, which bears his name: Yersinia pestis (2). For additional information on the bacterium and images go to: ( http://www.cdc.gov/ncidod/dvbid/plague/bacterium.htm).
The initial symptoms of the bubonic plague are flu-like in nature and include chills and fever. The initial stage occurs 2 to 6 days after being bitten. After this stage comes a painful swelling of the lymph nodes, or buboes, hence the origin of the name bubonic plague. Lesions usually appear at the site of the fleabite; the skin becomes encrusted and at times full of pus. The victim is usually weak, disoriented, and nauseous (3).
Fortunately, there are preventative measures against the plague; a vaccine is available for those working in the field or in close contact with the plague. If someone who has not been vaccinated does come in contact with the plague, there are antibiotics such as tetracyclines or chloramphenicol (2). Although these preventative treatments exist, the plague, left untreated, has proved fatal in many cases in the United States and abroad.
In 1996 there was a resurgence of plague in India that terrified the south Asian country. When plague strikes such large, densely populated areas, where rat populations surge due to unsanitary conditions, it can be seemingly unstoppable. Yet the plague is not restricted to the third world; the northern areas of New Mexico and Arizona have also seen cases. On November 7th, 2002, a New Mexican couple visiting New York City was diagnosed with the plague. Fortunately the couple was diagnosed early as having been infected in New Mexico; hence no one in New York was at risk of contracting the plague bacteria (4).
Although natural cases of bubonic plague are rare, the plague may be poised to make as terrifying a return as it did hundreds of years ago: it is being viewed as a weapon for biological warfare. Biological warfare is seen as an unfortunately effective, low-cost tool for zealot groups who wish to inflict as many deaths as possible at the lowest cost. Biological agents are also much harder to detect than conventional arms or nuclear weapons used in terrorism (5). The plague has gripped the imagination for hundreds of years, and the general populace would fear an outbreak; this panic factor would also make the plague a desirable weapon of choice. On the other hand, the fact that it is a bacterium, and therefore treatable with antibiotics, would make it a less advantageous weapon than some other communicable diseases. Still, the U.S. Department of Energy has seen it as enough of a threat to study the plague as a potential weapon in order to prepare for a worst-case scenario.
Although the plague does not have the same grip on mankind that it did in medieval Europe and Asia, it still retains the ability to frighten and to conjure up horrific images. Bubonic plague has resurfaced as a potential tool of bio-warfare, an unfortunate use for it; the panic it would cause among an unsuspecting population would be devastating. One only needs to look at a modern example such as India to see the desperation and horror that the plague can cause. Fortunately the preventative and treatment measures are effective if one is diagnosed in time. Yet the plague is rare, and if one is not aware and leaves the disease untreated it can be fatal. Being a resident of northern New Mexico, I have learned to take precautions; in this area we know the plague is nothing to be trifled with, and I can only hope the rest of the nation is aware of this fact as well.
1) The Role of Trade in Transmitting the Black Death, a source documenting the infiltration of the plague into Europe during the 1400's
2) Plague Home Page, at the Centers for Disease Control and Prevention; the CDC is a government-run informational website on rare communicable diseases
3) Bubonic Plague, at the National Organization for Rare Disorders; the NORD group is an informational website to educate the public about rare diseases and prevention
4) Bubonic Plague Suspected in NYC Visitors, at CNN/Health, CNN newsgroup
5) Biological Terrorism: Legal Measures for Preventing Catastrophe, at Encyclopedia Britannica, an educational source with links to journals and newsgroups
Magic Mushrooms Name: Roseanne M Date: 2002-12-16 18:11:53 Link to this Comment: 4113 |
What's so 'Magic' about Magic Mushrooms?
Roseanne Moriyama
Biology 103 12/16/02
Prof. Grobstein
"I began to have the sensation that trees were sucking me in via the wind and I was drawn into this grove of trees. The day was absolutely beautiful and everything looked fresh and new. I said something about how good this stuff was and there really had to be something bad to it or everyone would be on it. I wondered around the house with the feeling that there was something I had to do but couldn't quite figure out what it was. I became catatonic and couldn't relate to anyone. I also pulled my shirt away from my body a bit and my stomach seemed to come out with it. I began to pray for my sober mind back and was experiencing muscle contractions and tremors. I wished for someone to take me to the hospital, but I couldn't talk. I took a drink from the gatorade bottle I felt myself being sucked into the opening. It slowly faded after about 8 hours and I was euphoric in my sobriety." 1
This was written by a teen 'tripping'- a term used when taking the drug known as the 'Magic Mushroom,' 'Shrooms,' or 'Liberty Caps.' He (and many others from this site) gives a thorough description of the effect when taking this drug...
From the Aztecs to the Native Americans to the Chinese, people have used numerous kinds of drugs for medicine, leisure, tradition, and more. From before recorded history up to this modern 21st century, drugs have been prevalent in society. In recent times 'intense' drugs have been strictly prohibited, and even legal drugs have been used mainly for medical purposes. During the sixties, however, drugs were widely used among teens and young adults for a psychedelic experience. Many smoked weed as a 'common everyday drug,' but for people who wished for a 'trip beyond life,' Magic Mushrooms were a popular choice. Their supposed purpose is captured in a list, drawn from classic Hinduism, of four possibilities: a) Increased personal power, intellectual understanding, sharpened insight into self and culture, improvement of life situation, accelerated learning, professional growth. b) Duty, help of others, providing care, rehabilitation, rebirth for fellow men. c) Fun, sensuous enjoyment, esthetic pleasure, interpersonal closeness, pure experience. d) Transcendence, liberation from ego and space-time limits; attainment of mystical union. 1 Reading the list, taking this drug may seem very enticing; however, along with the 'dope trips' people voyage on, there is the risk of a 'bad trip,' which is supposedly nothing less than a dreadful nightmare. I am conducting this research in hopes of learning what effects these mushrooms have on people and what makes them the popular and amusing drug that people recommend by constantly saying: "...if you're going to try any drug in this world, its got to be Magic Mushrooms." 1
Shrooms are known to be intense and dangerous because of their strong reactions and the long-term negative effects of taking them regularly (2-3 times a month is considered all right 3 ); therefore, you would think, "Obviously it is illegal to sell, purchase, or consume Magic Mushrooms." Yes, this is true in the United States. However, in certain areas of Tokyo, shrooms are quite visibly sold on the streets. I was asked many times in Japan whether I was interested in purchasing shrooms. This shocked me, knowing it was a drug. I immediately looked into the situation and found out that in Japan shrooms were legal to sell or purchase until last year. The catch, however, was that it was illegal to CONSUME the drug but legal to sell or purchase it. Why buy and sell but not consume? This is all still a mystery to me. Therefore, for my last webpaper, I have decided to do research on this intense and very popular drug that I COULD'VE purchased in Tokyo.
From reading personal diaries and stories by various consumers, I was increasingly terrified of the effects. In the extroverted transcendent experience, the consumer is ecstatically fused with external objects (e.g., flowers, other people). In the introverted state, the consumer is ecstatically fused with internal life processes (lights, energy waves, bodily events, biological forms, etc.). This effect, or state, may be negative rather than positive, depending on the consumer's setting. For the extroverted experience, the consumer would bring to the 'trip' candles, pictures, books, incense, music, or recorded passages to guide the consumer's awareness into the desired direction. An introverted experience requires eliminating all stimulation: no light, no sound, no smell, and no movement. The most common hallucinating effects are as follows: a) Red/green/blue blips (CEV or OEV) The basic idea is that a layer of red, green, and blue blips (like looking at a TV set from real close) is superimposed on everything. b) Pixelization (OEV) Everything is composed of separate little bits, like pixels on a computer screen. c) Tracers (OEV) Moving objects that contrast sharply with their background (tip of lit incense stick against a dark room, ball flying against the blue sky, etc) leave colorful trails. d) Red shift (OEV) Everything looks like you're looking at it through glasses with their lenses dyed red. e) Melting (OEV) Objects start acting as if it were made of plastic; as if being heated, therefore distorts and flows downwards. f) Entities (CEV, rarely OEV) Encounters with other beings are a recurring feature of high-dose trips. Some common types include: "mantid," an alien-looking insect-headed creature that tends to appear extremely intelligent and aware and neutral/negative towards the tripper- it can be green or grayish-white. "DMT elf," a gnome-like playful, funny, and usually friendly entity. 1 These are only a few of the listed hallucinating effects; it is also noted that hallucinations vary from person to person and the dosage taken.
"Angie and I had the greatest trip ever...they kicked in after an hour. We went outside to sit on a bench and we started sharing our trip...the clouds! They became overwhelming, powerful...the sky was all blue and there was this one big black cloud that totally zoomed in on us, it was beautiful since we saw rays of light coming from behind it...then we started talking and laughing that didn't stop for the next 5 hours. It was like life didn't matter. The trees, the sky, the grass- they all looked so different, so much more amazing. The purple/pink sky and the GREEN grass... everything was just beautiful...it was just great... I will do mushrooms again." 2
Although most 'trips' are known to be mind-blowing experiences (such as the one quoted above), some consumers voyage on a 'trip to hell,' which can be horrific. The mushrooms can cause physical or psychosomatic interference, and some of the negative effects include: nausea in the beginning (which invariably wears off by the time the hallucinations start), odd and often scary physical sensations like liquid skin or distorted body proportions, trouble breathing, severe anxiety and paranoia, the feeling of having just excreted in your pants, and/or the feeling of sinking into the ground or even into yourself. The consumer may start to feel as if there were worms crawling inside their stomach, the roof were collapsing, and/or the sheets covering them were trying to eat them.
In conclusion, Magic Mushrooms have scared me and intrigued me with their effects at the same time. I personally don't have the guts to try shrooms, knowing the effects of 'bad trips.' However, reading personal experiences made me curious to feel 'out of this world' sensations and to see colors so vibrant they make the world ever more beautiful: the feeling of never wanting to go back to reality, because it feels like you're dreaming and everything is an illusion but is reality at the same time. Hallucinating and seeing 'aliens' or friendly figures seems interesting too. However, reading the first quotation in this essay, among the many on the web, it is frightening to be conscious (for what feels like forever) in a nightmare. Just the thought of worms slithering inside of me is horrific; I almost faint on campus every time it rains and the worms come out from the soil. Nevertheless (according to the voyagers), the chances of going on a 'bad trip' are slim. Hence people continuously take the chance of entering a beautiful and joyful dream, whereas I would never risk tripping knowing I could be interacting with worms!
Some of the Types of Magic Mushrooms 4 :
-Psilocybe stuntzii: a 'magic' mushroom and -Galerina autumnalis, a deadly poisonous mushroom growing in wood chips.
-Psilocybe stuntzii: (classic) a species indigenous to the Pacific Northwest of North America.
-Psilocybe cyanescens: a potent species widespread through western Europe and prolific in the Pacific Northwest of North America.
-Psilocybe cyanofibrillosa: a mild species common in rhododendron gardens from Northern California to British Columbia.
-Psilocybe azurescens: (a new species) contains up to 2% psilocybin, elevating it to the status of the most potent species in the world. Native to the Pacific Northwest of North America.
-Psilocybe semilanceata: (the Liberty Cap) a species common throughout the British Isles, France, Germany, Holland and Italy. Favoring sheep and cattle pastures.
-Psilocybe pelliculosa: a relatively weak woodland Psilocybe which favors abandoned logging roads in the Pacific Northwest of North America.
-Psilocybe silvatica: (rare) a close relative of P. pelliculosa, reported only from Washington and Oregon.
Foot Notes:
1) All About Magic Mushrooms, good personal experiences on this site
2) Magic Mushrooms Net, effects of taking mushrooms
3) Magic Mushrooms, mushroom species
HGH: Cure for Depression? Name: Diana DiMu Date: 2002-12-17 13:20:08 Link to this Comment: 4124 |
HGH, Human Growth Hormone, is most often associated with treating growth disorders or problems associated with aging. While there are several medical conditions involving a deficiency of HGH and improper growth development, the majority of links that pop up when typing "Human Growth Hormone" into a web browser deal with "anti-aging benefits." I began my research on HGH after receiving a spam email offering "Free injections of HGH!" I wondered why on earth someone would want such a thing and decided to do some more research. Many websites seem dubious, advertising "Real HGH! Don't Be Fooled!" or "FDA Approved HGH!" I continued to research what HGH specifically is and how it affects the human body, to gain a better understanding of why it has become a popular commodity. While there is plenty of information on the uses of HGH to help reduce the effects of aging, my research led me in another direction. After reading many of the "benefits" of HGH, I became curious whether HGH or any synthetic form of HGH had ever been used to treat depression. Could HGH be used to reduce symptoms of depression when its reported benefits already include loss of fat, increased muscle mass, elevated mood, better memory retention, and improved sleep? In my research, I hope to find out whether HGH could be used as a means to treat depression and whether any treatments using HGH or similar synthetically made medications are already in use. Before delving into that scientific pursuit, I felt it important to do some background research on HGH itself and why it is so important.
Before explaining what HGH is, it will help to first understand what a
hormone is:
Hormones are tiny chemical messengers that help our body do different tasks. Hormones are made up of amino acids. Hormones are produced by the endocrine glands and then sent all over the body to stimulate certain activities. For example, Insulin is a well known hormone that helps our body digest food. Our growth, digestion, reproduction, and sexual functions are all triggered by hormones.(3)
What is HGH?
HGH is produced in the anterior section of the pituitary gland, deep in the brain. It is made up of 191 amino acids, making it large for a hormone; in fact, it is the largest protein created by the pituitary gland. Chemically, it is somewhat similar to insulin. It is secreted in short pulses during the first hours of sleep and after exercise, and it remains in the circulation for only a few minutes.
What is IGF-1?
IGF-1 stands for Insulin-like Growth Factor 1. IGF-1 is also known as Somatomedin-C. As important as HGH is, it does not last long in our bloodstream, and it is extremely difficult to measure HGH in blood serum. However, the body binds most of the growth hormone in the liver and converts some of it into Somatomedin-C, another protein hormone also called Insulin-like Growth Factor-1 (IGF-1). IGF-1 is the most important growth factor that is produced. Since Somatomedin-C remains in the blood stream for 24-36 hours, a blood sample measuring Somatomedin-C is a more dependable indicator of competent HGH production. Normal Somatomedin-C blood levels in adults range from 200 to 450 ng/ml (nanograms per milliliter). Yet one-third of individuals over 50 years of age show abnormal levels of less than 200 ng/ml. During the growth spurt of youth, HGH levels are at their maximum and Somatomedin-C will measure well over 600-800 ng/ml. Yet for normal men and women under 40, less than 5% have levels below 250 ng/ml! After 40, many men and women have the same amount of HGH as an octogenarian.
When one's Somatomedin-C level falls below the adult normal range, his/her muscle and bone strength and energy levels most likely will decrease. Tissue repair, cell re-growth, healing capacity, upkeep of vital organs, brain and memory function, enzyme production, and revitalization of hair, nails, and skin will also diminish. While aging and decreasing growth hormone levels go 'hand-in-hand' those who lose their pituitary production of HGH due to surgery, infection or accident, instantly suffer many profound, ill effects. In those who have no pituitary function, there is a shift in body composition whereby body fat increases by 7-25% while lean body mass decreases similarly. Muscle strength and muscle mass are noticeably reduced. Bone density studies indicate long bone density and spinal bone density decrease as significantly as if the individual had aged 15 years. Pronounced weight gain of 30-50 pounds occurs when HGH wanes. Furthermore, there are negative effects on cholesterol; triglyceride levels increase while high-density cholesterol (HDL), a 'good cholesterol', decreases. Increased risk of cardiovascular disease may be related to vascular wall thickening and changes associated with decreased cardiac output. Such insufficiencies may contribute to these people reporting a rapid decline in exercise capacity and early deaths from heart disease. They also report an impaired sense of well being and symptoms of fatigue, social isolation, depression and a lack of the ability to concentrate. (2)
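To make the reference ranges quoted above easier to read at a glance, here is a small illustrative Python helper. The cutoffs are simply the ng/ml figures cited above, and the function name is mine; this is a sketch, not a clinical tool.

def interpret_somatomedin_c(level_ng_ml: float) -> str:
    # Ranges taken from the figures quoted above (ng/ml).
    if level_ng_ml < 200:
        return "below the quoted adult range (reported in a third of people over 50)"
    if level_ng_ml <= 450:
        return "within the quoted adult range of 200-450 ng/ml"
    if level_ng_ml < 600:
        return "above the quoted adult range"
    return "in the 600-800+ ng/ml range cited for the adolescent growth spurt"

for reading in (150, 320, 700):
    print(f"{reading} ng/ml: {interpret_somatomedin_c(reading)}")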
What is Recombinant Growth Hormone (GH)?
Recombinant Growth Hormone is growth hormone that is synthesized in the lab. It is a biosynthetic hormone that is identical to human growth hormone, but it is synthesized in the lab. Creating an exact replicate of HGH was not an easy task.
First, scientists needed to isolate HGH. Once they achieved this step, they could study the DNA make-up of the protein. Scientists quickly realized that making recombinant GH would be no easy task, since they had to accurately reproduce a 191 amino acid hormone. In 1986, Eli Lilly created a 191 amino acid hormone that was an identical match to the HGH produced by the pituitary gland. The drug is called Humatrope and is the most widely used recombinant growth hormone today. (3)
Bone Density
One of HGH's most dramatic effects is on the connective tissue, muscle, and healing potential of the skeletal system. Improvements have been noted in fragile skin with ulcers, in fractured bones that would not heal, and in muscle strength, where profound gains have been reported. Not only does the skin look younger with fewer wrinkles, some report a re-growth of hair on the head. Growth hormone, DHEA, and testosterone are clearly anabolic hormones: they build tissue. With increased age, our bodies break down tissue faster than we can repair it; this is called catabolism. HGH therefore tends to reverse the catabolic state. The potential role of HGH in the maintenance of the skeleton lies in its ability to make and repair these tissues: HGH stimulates osteoblast (bone) and fibroblast (supporting tissue) proliferation.
Other anabolic effects include a gain of muscle and renewed appetite, better exercise capacity, increased lung capacity, and faster wound healing. Many report that their "old age spots" disappear within two months of HGH therapy.(2)
Numerous
scientific studies have shown that restoring levels of HGH in aging individuals
can have dramatic effects. One landmark study, published in 1990 in The New
England Journal of Medicine, found that 12 men who took HGH had an increase
in lean muscle and bone density and a decrease in fat, while nine men who
didn't take it experienced none of these changes.(1)
Positive Effects of HGH Replacement
If you look at all the studies that have been done on HGH injections, the reported benefits include mood elevation, higher energy levels, enhanced sexual performance, superior immune function, increased exercise performance, increased memory retention, and improved sleep. (3)
Are there any negative aspects of taking HGH injections?
Possible Negative Side Effects
Anytime you introduce a large amount of a foreign hormone into the body there
is the risk of side effects. In one study, it was found that some of the
patients suffered from carpal tunnel syndrome and gynecomastia (enlarged
breasts). (3)
Side Effects with Low Dose HGH Replacement
The dose of recombinant HGH is an important consideration in the therapy of
acquired HGH-deficiency. Large, pharmacological doses of HGH are often
associated with the clinical signs of HGH excess, including fluid retention,
carpal tunnel syndrome, and hypertension. However, with smaller, more physiological doses, such symptoms are not noted. At a dose of
0.03mg/kg/week, many demonstrated only minor side effects including slight
fluid retention and mild joint pain. There was only one reported incident of
carpal tunnel syndrome. In all cases, further reduction of the HGH dosage
resulted in the elimination of side effects. In another recent study in which a
smaller dose of HGH was used, 0.01 mg/kg was administered three times per week
without any reported side effects. Multiple studies support the conclusion that
low dose HGH replacement is associated with minimal side-effects.
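As a rough illustration of what the low-dose figures above amount to in practice, here is a back-of-the-envelope Python sketch. The 70 kg body weight is an assumption added purely for the arithmetic, and nothing here is dosing advice.

# Weekly totals implied by the two low-dose regimens mentioned above.
body_weight_kg = 70.0                      # assumed body weight for illustration

weekly_dose_1 = 0.03 * body_weight_kg      # regimen 1: 0.03 mg/kg per week
weekly_dose_2 = 0.01 * body_weight_kg * 3  # regimen 2: 0.01 mg/kg, three times per week

print(f"0.03 mg/kg/week       -> {weekly_dose_1:.1f} mg per week")
print(f"0.01 mg/kg x 3 / week -> {weekly_dose_2:.1f} mg per week")
# Both regimens come out to roughly 2 mg per week for a 70 kg adult, which is
# consistent with the studies above reporting only minor side effects at these doses.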
Is it possible to take HGH orally?
Many people look for a way to take HGH without getting an injection. However, HGH is a delicate and complex 191 amino acid hormone, and it cannot be taken orally. Even if a company wanted to break the law and sell HGH as a pill, spray, or powder, it would not work, because the HGH would break down before it ever reached the bloodstream. (3)
Now,
a new generation of products sold via the Internet and in health stores as
dietary supplements, and therefore not regulated as drugs by the FDA, claim to
produce the same effects at a fraction of the price -- about $1,000 per year.
These are formulations of amino acids that allegedly trigger the release of HGH
in the body. One such product is GHR-15, although there are many other
"growth hormone releasers" on the market. One such internet
advertisement for "growth hormone releasers" reads as follows:
Research
indicates that the best way to elevate HGH levels is to stimulate the body to
produce more HGH. Studies have shown that an old pituitary gland has the same
capacity to produce HGH as a young pituitary gland. If we can find a way to
stimulate our pituitary gland, we will have the best of all worlds. You are not
introducing a foreign GH, so you eliminate the side effects. Also, our body is
very good at self-regulating, it will not produce an excessive amount of HGH
which could be harmful. In effect, your body knows best what the correct dosage
of HGH is to release for your body. (3)
The
idea behind these growth hormone releasers is actually based on scientific
studies showing that certain amino acids can trigger the production of HGH from
the pituitary. However, consumers should be cautious, says Ronald Klatz, MD,
president of the American
Edward
Lichten, MD, senior attending physician at
After reading the benefits of HGH and HGH Therapy, I became curious whether any form of HGH or other kinds of hormone therapy had been used in treating symptoms of depression. In order to gain a better sense of where my questions might lead, I first looked into already established treatments of depression and their side effects. After researching what is typically used to treat depression and how people react to such treatments, I hoped to learn how Hormone Therapy could be placed into discussion.
Treatments for Depression:
The
most common treatment for depression includes the combination of antidepressant
medicine and psychotherapy (called "therapy" for short, or
"counseling").
Psychotherapy is sometimes called "talking therapy." It is used to treat mild and moderate forms of depression. A licensed mental health professional helps people with depression focus on behaviors, emotions, and ideas that contribute to depression, and understand and identify life problems that are contributing to their illness to enable them to regain a sense of control. Psychotherapy can be done on an individual or group basis and include family members and spouses. It is most often the first line of treatment for depression.(6)
How
are Medications Selected?
The
type of drug prescribed will depend on your symptoms, the presence of other
medical conditions, what other medicines you are taking, the cost of the prescribed
treatments, and potential side effects. If you have had depression before, your
doctor will usually prescribe the same medicine you responded to in the past.
If you have a family history of depression, medicines that have been effective
in treating your family member(s) will be considered. Usually you will start
taking the medicine at a low dose. The dose will be gradually increased until
you start to see an improvement (unless side effects emerge).
Examples
of effective and safe medications commonly prescribed for depression or
depression-related problems are listed in the table below:
Type of medication | Drug Name | Brand Name | Conditions it Treats
Selective serotonin reuptake inhibitors (SSRIs) | fluoxetine, paroxetine, sertraline, fluvoxamine, citalopram | Prozac, Paxil, Zoloft, Luvox, Celexa | Depression -- serotonin is a brain chemical thought to affect mood states, especially depression; SSRIs help increase the amount of serotonin to level the patient's mood
Tricyclic antidepressants (TCAs) | amitriptyline, desipramine, nortriptyline, protriptyline, clomipramine, imipramine, doxepin, trimipramine | Elavil, Norpramin, Pamelor, Vivactil, Anafranil, Tofranil, Sinequan, Surmontil | Depression (clomipramine is used to treat OCD)
Monoamine oxidase inhibitors (MAOIs) | tranylcypromine, phenelzine, isocarboxazid | Parnate, Nardil, Marplan | Depression -- MAOIs increase the concentration of chemicals in particular regions of the brain that aid communication between nerves; MAOIs are usually prescribed for people with severe depression
Azapirones | buspirone | BuSpar | Generalized anxiety
Benzodiazepines | alprazolam, lorazepam, diazepam | Xanax, Ativan | PMS, panic disorder
Lithium | | | Bipolar disorder, recurrent depression
Mood-stabilizing anticonvulsants | carbamazepine, valproate, lamotrigine, gabapentin | Tegretol, Depakote, Lamictal, Neurontin | Bipolar disorder
Other medications | amoxapine, bupropion, venlafaxine, nefazodone, mirtazapine, trazodone, maprotiline | Asendin, Wellbutrin, Effexor, Serzone, Remeron, Desyrel, Ludiomil | Depression
What are the Side Effects?
Keep in mind that sometimes the benefits of the medicines outweigh the potential side effects. Some side effects decrease after you have taken the drug for a while.
Some common side effects of SSRIs include:
Agitation
Nausea or vomiting
Diarrhea
Sexual problems including low sex drive or inability to have an orgasm
Dizziness
Headaches
Insomnia
Increased anxiety
Exhaustion
Some common side effects of tricyclic antidepressants include:
Dry mouth
Blurred vision
Increased fatigue and sleepiness
Weight gain
Muscle twitching (tremors)
Hand shaking
Constipation
Bladder problems
Dizziness
Increased heart rate
It is important to note that you should not drink alcoholic beverages while
taking antidepressant medicines, since alcohol can seriously interfere with
their beneficial effects. (7)
Hormone replacement therapy (HRT) in women: Depression is more common in women than in men. Changes in mood with premenstrual syndrome (PMS) and premenstrual dysphoric disorder (PMDD), after childbirth and following menopause are all linked with sudden drops in hormone levels. Hormone replacement is a treatment currently used to relieve symptoms of menopause such as night sweats and hot flashes. By using HRT, women can help prevent osteoporosis and possibly reduce memory loss. There are many advantages to using HRT for relieving symptoms of menopause, and while they may, in the future, be found to help depression in some women, these hormones can actually contribute to depression. (6)
Discussion:
My question, then, is if Hormone Replacement Therapy (HRT) is used for women, could it also be used for men? HRT is typically used to treat women's symptoms of menopause; however, in many cases HRT proves to be a beneficial treatment for female depression. (8) With this in mind, is it too far-fetched to consider HRT for men as a viable treatment for serious depression? Could low doses of HGH be used to treat both men and women with depression? In theory, the many negative side effects commonly associated with SSRIs and other types of antidepressants far outweigh the number of negative side effects associated with HGH therapy. The use of injections of HGH has yielded such positive results as mood elevation, higher energy levels, enhanced sexual performance, superior immune function, increased exercise performance, increased memory retention, and improved sleep. (3) However, there is currently little research on whether men and women suffering from depression exhibit lower levels of HGH. There is also little research on the effects of low-dosage HGH therapy on patients under the age of 40. It would be extremely interesting to study whether patients suffering from severe depression exhibit lower levels of HGH or other essential hormones. Sufficient research on the effects of HGH injections on "middle-age" or "younger" patients would also need to be conducted before further research could be done. While older patients exhibit significantly lower levels of HGH, making HGH therapy a more viable option, treatment for younger patients using the same therapy could prove more harmful than beneficial. Would younger patients (patients between 20 and 30 years old or younger) exhibit the same positive effects from HGH therapy as patients 30 years old and older? There is currently little research to answer this question. Nor is there enough research to support the use of HGH or other hormones as a definitive treatment for depression in either men or women. Currently, Hormone Replacement Therapy is most typically associated with women as a viable treatment for menopause and osteoporosis. (8) Treatment for depression is a less accepted benefit of using hormone therapy for women. In fact, some HRT, such as the use of estrogens (examples include Premarin and Prempro), is linked to causing depression in women. Hormone replacement therapy for men is often discussed in terms of testosterone therapy. Signs of low testosterone in men may include decreased sex drive, erectile dysfunction (ED), depression, fatigue, and reduced lean body mass. Men may also have symptoms similar to those seen during menopause in women: hot flashes, increased irritability, inability to concentrate, and depression. If prolonged, a severe decrease in testosterone levels may cause loss of body hair and increased breast size. Bones may become more brittle and prone to osteoporosis, and testes may become smaller and softer. (9) While Hormone Replacement Therapy is used for both men and women to help relieve and counteract the symptoms of aging, menopause, and osteoporosis, it is not typically associated with treating depression. Is this because depression is still viewed as a stigma by current society? Growing research in the field of depression, and its ongoing acceptance as a part of life, may yield new research and acceptance in different approaches to its treatment.
Further research into Hormone Replacement Therapy, and perhaps more specifically into the use of Human Growth Hormone, may in time help establish new practices for the treatment of depression. Future research in this field may yield more information, not only on the broader treatment of depression, but on the specific treatment of depression between the sexes and among varying age groups. HRT may yield new treatments for men and women of various age groups that produce better results or fewer side effects than the currently used SSRIs and antidepressants.
WWW Sources:
1.) Growing Younger with Hormones?
2.) US Doctor Growth Hormone
3.) Advice HGH
4.) HGH MD
5.) Your Guide to Depression: Medical Information from the Cleveland Clinic
Why we can't walk pass the refrigerator! Name: Melissa Af Date: 2002-12-17 13:26:52 Link to this Comment: 4125 |
When I am hungry, I eat. When I am not hungry, I eat. When I am tired, I eat. When I am energetic, I eat. When I am happy, I eat, and when I am sad, I also eat. Food is one of man's viscerogenic needs, but I believe that in present times eating has moved beyond a necessity to a pastime. In the early development of the human race, humans were hunter-gatherers. These early humans faced periods of scarcity, which pushed them to eat a lot when food was available so that they had reserves of fat for when it was not (4). Although mankind has evolved and the problem of scarcity is not as great as it once was, present-day humans seem to still have the hunter-gatherer mentality: to eat as much as possible in case there is no food later. As always, we have turned to science to explain why we behave in a way that we think is unacceptable. Can science help us curb our eating habits, and if so, can we really alter these habits without any harm to ourselves and future generations? In this paper, I will use obesity as an example of one of the numerous problems that we have turned to science to solve at any expense.
A billion people in the world are overweight, and 22 million of these individuals are children under the age of 5. The World Health Organization (WHO) lists obesity and problems associated with obesity, like heart disease and high blood pressure, among the top 10 global health risks (4). In Economics, students learn that man has unlimited wants but limited resources. Since 1 billion of the world's population is overweight, I wonder about the rate at which we are using our resources to produce food and whether we are replacing the resources that we have used to satisfy our appetites. Furthermore, we must consider not only the land, trees, plants and animals that we are using to produce these commodities but also the negative effects of the production processes, such as deforestation and air, water and soil pollution. When we eat our next McDonald's Happy Meal we will not consider the health problems that we are embracing as we open our mouths to bite and chew, problems like heart disease, high blood pressure, and, in the African-American community, the illness of diabetes.
We think of Cancer and HIV as the major illnesses that are affecting mankind. However, we do not think of obesity as a major killer of humans. Obesity itself does not kill but problems that arise from being overweight such as heart attack kill many people yearly. People believe that everyone is preaching about being thin because "thin is in", that is to say, that we equate beauty with thin thighs and small waists in women and a slender figure in men. Although it is true that the standard of beauty is someone who is slender, we cannot ignore that obesity places a person at risk for heart attack, diabetes, discomfort in physical activity and so on. Also, the alarming rates of illnesses such as anorexia nervosa and bulimia that affect people who use their weight to have some kind of control in their lives can not be ignored. People wish to be accepted by others because of their looks and people have a fear of dying early from the complications of obesity. As a result of these concerns over obesity, researchers are looking into finding some component in our body that can be used to control how much a person eats because it will improve the physical, mental and emotional health of millions worldwide, make them instantaneously famous and make drug companies even wealthier.
Researchers are fuelled with dreams of fame and success, so they devote much time and many resources to studying obesity, mainly its causes, so that it can be controlled. In 1994, researchers at Rockefeller University identified a hormone, Leptin, that they hoped would be the key to controlling appetite. The researchers believed that fat people might lack Leptin, so giving them Leptin would make them lose weight. However, to the great disappointment of many, this is not the case. After more research, scientists discovered that obese people already had lots of Leptin and that giving them more had little effect. Having too little Leptin turned out to matter more than having too much. Fat cells make Leptin in proportion to the body's fat stores; when fat stores run low, Leptin levels fall, and the brain responds by slowing the metabolism and driving the urge to eat until the stores are rebuilt.(4) This is why people have problems dieting. When people diet they usually reduce their fat intake, so their stores of fat are depleted. When this occurs, Leptin levels drop and the brain tells the metabolism to slow down. A person who is dieting is therefore trying to counteract the body's own efforts to compensate for the loss of fat, which is what makes dieting so difficult; a simple sketch of this dynamic appears below. The fact that dieting is difficult because our body uses hormones to make us eat should encourage us to consider that obesity may not be as abnormal as society has led us to believe. Instead of looking for a way to completely stop people from gaining weight, we should probably look for a way to create food and meals that are lower in empty calories and rich in the nutrients that we need. I suggest this because people whose bodies incline them to eat more, and who currently reach for the unhealthy options our "fast-food and junk-food" society offers, could actually be made healthier if the food around them satisfied the nutritional needs of their bodies.
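To make the feedback loop just described more concrete, here is a minimal sketch in Python of the dynamic the Leptin story implies. Every number and name in it (BASELINE_EXPENDITURE, ADAPTATION, simulate_diet, the starting fat mass) is an illustrative assumption of mine, not measured physiology; the only point is to show why, if energy expenditure falls as fat stores and Leptin fall, a fixed-calorie diet yields less and less weight loss over time.

# Toy model of the Leptin feedback loop described above.
# All constants are illustrative assumptions, not physiological measurements.
BASELINE_EXPENDITURE = 2000.0   # kcal/day spent when fat stores (and Leptin) are at baseline
KCAL_PER_KG_FAT = 7700.0        # approximate energy stored in 1 kg of body fat
ADAPTATION = 0.5                # how strongly expenditure falls as fat stores fall (assumed)

def daily_expenditure(fat_kg, baseline_fat_kg=25.0):
    """Expenditure scales down as fat stores (and hence Leptin) drop below baseline."""
    leptin_ratio = fat_kg / baseline_fat_kg          # Leptin assumed proportional to fat mass
    return BASELINE_EXPENDITURE * (1 - ADAPTATION * max(0.0, 1 - leptin_ratio))

def simulate_diet(intake_kcal, days=180, fat_kg=25.0):
    """Track fat stores day by day on a fixed-calorie diet."""
    for _ in range(days):
        deficit = daily_expenditure(fat_kg) - intake_kcal
        fat_kg -= deficit / KCAL_PER_KG_FAT          # a deficit is drawn from fat stores
    return fat_kg

print(f"Fat after 180 days at 1500 kcal/day: {simulate_diet(1500):.1f} kg")
print(f"Fat after 180 days at 2000 kcal/day: {simulate_diet(2000):.1f} kg")

Under these made-up numbers the dieter's expenditure drifts down toward the reduced intake, so the early weight loss slows over time, which is the same qualitative behavior the paragraph above attributes to falling Leptin levels.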
Now that Leptin has turned out not to be the wonder drug for decreasing appetite that so many people had hoped for, undeterred scientists have embarked on a new pursuit to find the real component that alters a person's appetite. Dr. Bloom and his research team at Hammersmith Hospital at the Imperial College School of Medicine in London discovered Peptide YY3-36 (PYY), a gut hormone that is made in response to the consumption of food. Food in the intestine starts PYY production; PYY is absorbed into the blood stream and circulates to the brain, where it stops the urge to eat. The arcuate nucleus in the hypothalamus sorts the signals it receives from the body to determine whether the person should eat more or stop eating. PYY acts on two types of neurons there: it turns on the neurons that make you feel full and turns off the neurons that make you feel hungry.(4) This is why PYY is such an interesting hormone: it tells the body not to eat. However, we should not think that we have solved the problem of obesity. First of all, PYY's effect on appetite was only reported in August, which means the finding is barely four months old. Also, the work is so new that experts have had little chance to analyze the observations made about PYY and see whether they are sound or faulty. Hopefully the Leptin experience has taught us that we should not prematurely praise and accept a discovery because it gives the answer that we hope to hear.
We are ready to welcome with open arms any drug that combats obesity. Why are we so ready to adopt PYY or Leptin? I believe that humans wish to control obesity not only for the health risks it poses to individuals but for the lack of self-control that obesity points out to us. People have used drugs like Fen-phen, Meridia and Metabolife's dietary supplements to combat obesity. Metabolife's supplements contain ephedra, an amphetamine-like herb that has been connected to the deaths of over 100 people. Fen-phen has caused hundreds of cases of deadly primary pulmonary hypertension and heart-valve damage. Meridia increases the likelihood of the user suffering from high blood pressure and stroke; obesity sufferers take diet drugs precisely to avoid problems like high blood pressure and stroke, yet Meridia increases the occurrence of these illnesses and has been linked to 19 deaths.(1) People use these diet drugs to change their lifestyles in a quick and painless fashion. Since so many people are overweight and desperate to lose the weight, drug companies prey on this vulnerability, selling products with many side effects because people are willing to accept the risks in order to lose the weight. We are faced with 2 of the 7 deadly sins: gluttony and greed. More should be done to monitor these wonder drugs that become available so soon after they are discovered. The public should be skeptical about how well, and for how long, these products have been tested, because the drug companies seem to be in a great rush to get them onto the market. People should weigh the side effects of taking these drugs, because those side effects are a high price to pay for losing weight.
People are looking to researchers and drug companies to help curb their obesity. However, they are not considering another approach: a change in lifestyle. Fast food companies market their food as a quick meal that won't interfere with your daily life. Those who are concerned with a healthy lifestyle should market the healthy life just as heavily, as something desirable and attainable. Television stations should broadcast as many anti-junk-food ads as there are fast food ads. This could be done like the campaign against smoking in the 1960s, which was so effective that the tobacco industry willingly pulled its cigarette ads from television. Another potentially effective method of campaigning for a healthier lifestyle is to put warning labels on food stating the number of calories a person would take in by consuming a particular product.(1) These methods may not change a person's lifestyle permanently, but they will make people more aware of what they are putting into their bodies.
If people began to think about what they are eating, they would be at the "pre-contemplation" stage of the "Stages of Change" model. This model describes the five levels of motivational readiness that a person must pass through to successfully change a health behavior. The second stage is the "contemplation" stage, when you intend to change but not anytime soon. The third stage is the "preparation" phase, when you say that you will change next month. In the "action" stage you have recently changed your behavior, and the "maintenance" stage is when you have continued the changed behavior for at least six months. People should consider this model when they are deciding to change their eating habits.(2) However, if overeating is a biological disorder, then it is possible that no psychological endeavor will bring about a change in eating. Still, one cannot underestimate the power of the mind.
As with all things in life, we look to science to provide us with a reason for our conduct and to tell us how to fix our behavior if we find it unacceptable. We want researchers to spend long hours and expend lots of energy to find a way to curb our eating habits. Then we want drug companies to make these products readily available for our use. However, we never stop to consider that perhaps "Mother Nature" has a plan for us: to eat. By trying to go against what is ingrained in us, we run the risk of upsetting the course of evolution and affecting future generations of humans. Another question we need to ask ourselves is, since there is diversity in life, shouldn't there be diversity in how we look, so that some of us may be fat and some of us may be thin? We all have to die someday; why don't we consider heart disease, hypertension and diabetes as simply different ways of dying? Another consideration is that if we alter one hormone to curb our eating, is it not possible that we could develop another hormone to compensate for the change in our body? I don't believe that obesity can be solved by taking a pill or an injection so that we don't eat. Perhaps instead of looking for a gene that causes us to eat, we should consider looking for a gene that causes us to appreciate what we have.
Web Resources
1) Alternet.org
2)
3)
Non-Web Resources
4) Denise Grady. Why we eat and eat (and eat and eat). New York Times, Science Times, Tuesday, November 26, 2002.
Infant Iron Deficiency Name: Katie Camp Date: 2002-12-18 10:15:17 Link to this Comment: 4134 |
Iron deficiency is caused by an inadequate supply of iron in a person's system. Iron, in the form of hemoglobin in red blood cells, carries oxygen to the brain and throughout the body. In infants, iron deficiency poses many threats. Infants, still developing vital connections in their brain and between systems in their body, require a significant presence of iron. Diagnosing, treating, and preventing iron deficiency in infants are vital to these developments. Recent studies have shown that children with iron deficiency in the first year of life "lag behind their school peers" (1) in "mental and physical development" (7). Iron deficiency is easily prevented and treated, but it is necessary to be educated about its symptoms and threats. Although there are many different ways to treat iron deficiency, the most sensible is prolonging the period of breast feeding after birth and establishing a balanced diet. In this paper I will tackle the general issues of infant iron deficiency and the many treatments available, as well as provide support for the claim that "breastfeeding is best."
Iron deficiency is sometimes present in newborn infants or develops after the first four to six months of life. Infants are born with reserves of iron for their first few months of life, and these reserves are related to the mother's own iron levels. Since about "50% of pregnant women" are iron deficient (8), some infants are born without sufficient iron stores. Normally, when the mother is not iron deficient, full-term infants are born with 75 mg of iron per kilogram of body weight. During the first year, infants "almost triple their blood volume...and...require the absorption of 0.4 to 0.6 mg of iron daily" (5). If their diets do not support such iron absorption, their stores are depleted and the blood is not able to carry enough oxygen to the brain. Premature infants are born with iron stores of about 64 mg per kilogram of body weight; because of this inadequacy, their diets require about 2.0 to 2.5 mg of iron per day to supplement their stores. Obviously, the rate at which an infant becomes iron deficient depends both on whether the infant is carried to full term and on the mother's iron status. It remains important, however, in every situation that the infant somehow establish stable stores, so as to promote basic development.
It is clear that in the infancy stage of life it is necessary to absorb a certain amount of iron in order to support the body's iron stores. The primary cause of inadequate iron absorption in infants is an insufficient diet. Since breast milk is the general diet of an infant, it is necessary by six months to begin supplementing that diet. "Breast feeding without complementing with iron rich foods becomes an increasing risk factor for iron deficiency" (4). Often, though, the mother stops breast feeding her child completely, depleting that source of easily absorbed iron from the infant's resources; it is usually replaced with cow's milk, which is not normally high in iron. Some cases of iron deficiency are "due to parasitic infections and repeated attacks of malaria" (3). Most often these cases occur in developing countries in regions of Africa, Asia and Latin America. Specifically, the "parasitic infestations causing iron deficiency are hookworm (Ancylostoma and Necator) and Schistosoma" (3). Parasites prevent the absorption of iron into the blood by absorbing it for themselves.
Iron deficiency has a variety of detrimental effects on infants. The lack of iron often means a lower birth weight and slower weight gain, due to a state of anorexia that can ensue (5). Because iron, in the form of hemoglobin in red blood cells, is responsible for transporting oxygen throughout the body, low iron means the infant may develop a low tolerance for exercise and physical activity. The immune system is also weakened by iron deficiency, and it is important to limit an iron deficient infant's exposure to infections because of this increased risk (6). Finally, a more recently explored issue is iron deficiency's effect on the mental and intellectual development of a child. Studies have shown that iron deficiency "during the first few sensitive months of life can lead to long-term delays in mental and physical development" (7). Dr. Shabib's article mentions a correlation between the lack of iron and a decreased attention span (5). Attention deficit disorders are often accompanied by slower and more difficult learning. These infants grow up to be participants in society, taking on leading and involved positions. This "mental impairment" (2) is a convincing reason to treat iron deficiency.
It is extremely important to treat iron deficiency quickly and prevent further anemia so that such results do not affect children beyond their infant stages. Immediate treatments after birth, prescribed supplements, and dietary decisions all play a role in the treatment of infant iron deficiency. "Late clamping of the cord" (3) immediately after birth allows additional blood flow from the placenta to the child, supplementing the infant's reserves with approximately 50 mg of iron and thus addressing iron deficiency at birth. After birth, and after the depletion of the infant's body iron stores, a tool for treating iron deficiency is regulating the child's diet. First, foods rich in iron are important, like liver and dark green leafy vegetables. More important, however, is the intake of foods that "enhance iron absorption" (3); examples are animal products and fruits and vegetables that contain vitamin C. Products like "tea or coffee, and calcium supplements" (3), which counteract the bioavailability of iron, should be avoided or taken 2 hours after meals. In addition to introducing iron rich foods into the diet of infants, iron supplements can be prescribed to fill in the gaps of a diet. In developing countries, where cases of iron deficiency due to parasites and malnutrition of the child and mother are more likely, supplemental treatments make sense.
In areas of high prevalence of iron deficiency anemia, 400 mg ferrous sulphate (2 tablets) per day or once a week, with 250 µg folate for 4 months is recommended for pregnant and lactating women. In areas of low prevalence 1 tablet of ferrous sulphate daily may be sufficient, but in these areas another approach is to give iron therapy only if anemia is diagnosed or suspected. (3)
These treatments require additional care and are useful in cases where a mother cannot provide her child with adequate iron either during pregnancy or afterward. However, the most popular solution to iron deficiency seems to come from breast feeding. It has often been noted that there is a strong connection between when an infant is weaned and its development of iron deficiency. Much depends on when a mother deems it appropriate to stop feeding her child breast milk relative to the time at which many infants deplete their body iron stores. This is because there is a huge difference between the content of breast milk and the cow's milk or soy-based product that is substituted for it. First, breast milk "is a developmental fluid...[that] changes with baby's needs" (2). In terms of the infant's need for iron, it contains 0.3 to 0.5 mg of iron per liter. Fifty percent of this iron is absorbed by the infant, whereas only ten percent of the 1.0 to 1.5 mg/liter of cow's milk iron is absorbed. In the case of soy milk or fortified milk products that contain 12 to 13 mg of iron per liter, only four percent is absorbed. The higher absorption of breast milk is due to its increased "bioavailability." This differs from cow's milk because cow's milk contains "high concentrations of calcium, phosphorous, and protein in conjunction with the low concentration of ascorbic acid" which may be "responsible, in part, for the poor absorption of iron from cow's milk" (5).
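Since this paragraph quotes several concentrations and absorption percentages at once, a short calculation in Python helps put them side by side. The sketch below simply multiplies each quoted iron concentration by its quoted absorption fraction to estimate absorbed iron per liter; taking the midpoint of each cited range is my own simplification, not something the sources specify.

# Figures quoted above: iron concentration (mg per liter) and fraction absorbed.
# Midpoints of the cited ranges are used; they are a simplification, not source values.
milks = {
    "breast milk":           (0.4,  0.50),   # 0.3-0.5 mg/L cited, 50% absorbed
    "cow's milk":            (1.25, 0.10),   # 1.0-1.5 mg/L cited, 10% absorbed
    "fortified soy product": (12.5, 0.04),   # 12-13 mg/L cited, 4% absorbed
}

for name, (mg_per_liter, fraction_absorbed) in milks.items():
    absorbed = mg_per_liter * fraction_absorbed
    print(f"{name:22s}: about {absorbed:.2f} mg of iron absorbed per liter "
          f"({fraction_absorbed:.0%} bioavailability)")

The arithmetic bears out the bioavailability point: despite its much lower concentration, breast milk delivers more absorbed iron per liter than unfortified cow's milk, though neither alone covers the daily absorption requirement cited earlier, which is why iron-rich complementary foods matter by around six months.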
Infant iron deficiency obviously has a variety of effects, some of which cause longer-term problems for infants. The most convincing of these issues, in developing an argument for increased awareness and more comprehensive treatment of iron deficiency, is the theory that infant iron deficiency leads to diminished mental development. Although I have suggested many solutions to the problem of depleted body iron stores in children, breast feeding is the most universal and natural treatment for iron deficiency. In developing countries, malnutrition of the mother may prevent her from providing enough iron to her child during pregnancy and during breast feeding. In general, though, the simplest solution is to breast feed an infant for as long as possible before weaning it onto other milk sources. In addition to a breast milk diet, adding iron rich foods is perhaps the most complete solution to establishing stable iron body stores during the first stages of an infant's life.
1) Iron Deficiency in Babies, brief overview of infant iron deficiency.
2)Introduction to Clinical Science-Newborn Nutrition, additional syllabus notes that detail specifics about infant iron deficiency.
3) Postpartum care of the mother and newborn: a practical guide-Chapter 4 Maternal Nutrition, information about causes of iron deficiency, including effect of parasites, as well as detailed list of iron rich food sources and supplement information.
4) Iron Deficiency in Children, general information about parasitic malabsorption and advantage of breast feeding.
5) Meeting the Iron Needs of Infants and Young Children, editorial by Saudi Dr. Shabib detailing symptoms and threats of iron deficiency, body iron stores, and difference between breast milk and substitutions.
6) Iron Deficiency Anemia, Yale-New Haven Hospital general information website on iron deficiency, explains what iron is used for, etc.
7) Preventing Iron-Deficiency Anemia During Infancy, another source that mentions the long term affects on the mental and physical development of a child because of iron deficiency.
8) Iron Deficiency Anemia, general description and information.
Once an Addict, Always an Addict?: Understanding Name: Lydia Parn Date: 2002-12-18 21:55:24 Link to this Comment: 4138 |
Remarkable advances in the neurosciences are creating the ability to predict and alter human behavior in ways that would have been unimaginable a few years ago. A current goal of neuroscience is to understand the mechanisms that mediate the transition from occasional, controlled drug use to the loss of behavioral control that defines chronic use (1). The question of addiction concerns the process by which drug-taking behavior can, in certain individuals, quickly evolve into drug-seeking behaviors that take place at the expense of most other activities (2).
Addiction, also known as substance dependence, can be defined as repeated self-administration of a substance, despite attempts to refrain from use and knowledge that the substance abused has effects detrimental to health or social concerns (2). Many factors contribute to the development of addiction; drug dependence does not just happen. A person's initial decision to use a drug is influenced by genetic, psychosocial and environmental factors. Once the drug has entered the body, however, it can promote continued drug-seeking behavior by acting directly on the brain. Drug addiction appears to be the result of a series of neurochemical changes in the activity of neurons of the brain (3).
Understanding the neurochemical changes that drugs induce in the brain provides insight into the causes of drug abuse, its neurochemical basis, and foundations for improved treatments. The brain is composed of billions of neurons and a large number of chemicals. Specific areas of the brain are involved with the control of physiological and psychological functions. The activities of neurons and their interactions with different areas of the brain control our homeostatic functions, our very being. Neurotransmitter levels shift in response to changes in physiological and psychological state, so subtle changes in neuronal activity can result in major changes in mood and behavior (4).
Neurons release neurotransmitters that act on specific proteins called receptors; activation of these receptors then elicits a response. Small disturbances in the activity of particular neurons can have considerable effects on a person's mood. The level of a neurotransmitter can be increased either by greater release from nerve endings or by inhibition of the process that terminates its action (4). The major neurotransmitters involved in the actions of drugs of abuse are primarily dopamine and serotonin. Most drugs of abuse increase the release of dopamine in areas of the brain that are believed to play a role in the rewarding properties of drugs (5). Drugs of abuse are pharmacologically diverse and alter neurotransmitter dynamics in various ways. Stimulants such as cocaine and amphetamines induce the release of dopamine from dopamine-containing neurons and block the reuptake of neurotransmitters into neurons. Opioids such as heroin activate opioid receptors on neurons and result in dopamine release. Ecstasy increases serotonin release. Crack induces a greater degree of psychological dependence than its parent compound cocaine because it has better access to the brain and is absorbed faster (4).
So drugs do cause short-term surges in dopamine levels and other messengers in the brain that temporarily signal pleasure and reward. But the brain quickly adapts to this rush; pleasure circuits become desensitized, to the extent that the brain can suffer withdrawal after a binge (5). In recent years, however, researchers have moved away from studying short-term effects and begun to focus on the long-term consequences of drug use in order to further understand drug relapse. After long-term use, many drugs no longer produce feelings of euphoria, in part because of desensitization or tolerance. So why do drug addicts continue to use drugs after the drugs no longer produce pleasure, even while they are trying to abstain? To find out, researchers are seeking out changes in the parts of the brain that help control motivation, looking for changes that persist weeks and years after the last drug exposure.
Addiction seems to rely on some of the same neurological mechanisms that underlie learning and memory, and cravings are triggered by memories and situations associated with past drug use. Recent studies have revealed a "convergence between changes caused by drugs of addiction in reward circuits and changes in other brain region mediating memory" (5). Both learning and drug exposure reshape synapses, initiating surges of molecular signals that turn on genes and change behavior in lasting ways. Understanding these processes could speak to the core clinical problem of drug addiction and help conquer relapse in ex-addicts. To address the clinical issues, the key is "understanding how associative memories are laid down that change the emotional value of drugs and created deeply ingrained responses to those cues [that trigger relapse]" (3).
When it comes to kicking the habit, the process of withdrawal is the easy part; it is only after the body detoxifies itself that the real challenge begins. Even ex-addicts with the strongest resolve, and plenty of external support, still struggle to refrain from use and experience cravings years after the last hit. Though each drug of abuse has its individual effects, all specialize in attacking the brain's dopamine reward circuit. Long-term abuse reduces the number of receptors that respond to dopamine. Since dopamine fuels motivation and pleasure, as well as being crucial to memory and learning, this loss of receptors correlates with memory problems and lack of motor coordination. Once the brain becomes less sensitive to dopamine, it becomes "less sensitive to natural reinforces" (6). The pleasure of seeing a friend or taking a walk no longer carries the value heavy drug users once felt; sometimes the only stimuli still strong enough to activate the motivation circuit are more drugs. Understanding how drugs rearrange one's motivational priorities can help explain why addicts often partake in senseless activities and why ex-addicts often relapse back into chronic drug taking (6).
Better understanding the neurological changes that occur in the brains of drug addicts may help us find improved ways to treat addiction. However, viewing addiction purely as a brain disease or a chemical imbalance assumes that choice and will play no role in the addict's actions. While it is necessary to further study the long-term effects drug abuse has on the brain, it is also crucial to understand that for an addict to recover successfully, one's own resolve and self-determination to stop drug use is central.
1) Is Drug Addiction a Brain Disease? , Satel, Sally L. and Frederik K. Goodwin, Program on Medical Science and Society, Ethics and Public Policy Center, Washington D.C., 1998.
2) The Psychology and Neurobiology of Addiction: An Incentive-Sensitization View. , Robinson, Terry E. and Kent C. Berridge, Department of Psychology, The University of Michigan, 2000.
3) The Neuroscience of Addiction., Koob, George F., Pietro Paolo Sanna and Floyd E. Bloom, Department of Neuropharmacology, The Scripps Research Institute, 1998.
4) Beyond The Pleasure Principle. , Helmuth, Laura. Science Magazine, Volume 294, 2 November 2001.
5) The Neurobiology of Addiction: An Overview , Roberts, Amanda J. and George F. Koob. National Institutes of Health, National Institute on Alcohol Abuse and Alcoholism, 1997.
6) Neuroscience: Implications for Treatment , Petrakis, Ismene and John Krystal. National Institutes of Health, National Institute on Alcohol Abuse and Alcoholism, 1997.
Gender: Biological or Cultural? Name: Anne Sulli Date: 2002-12-19 02:20:54 Link to this Comment: 4143 |
What does it mean to be male or female? How, if at all, do males and females exhibit different behavioral activity? Can these differences be adequately measured? Are they culturally or biologically influenced? These questions attempt to locate the true nature of sex and gender, challenging the accepted notion that the two naturally correlate. Sex, a biological concept, refers to the different reproductive functions exhibited by males and females (1). Gender, which is often mistakenly used as a synonym for sex, is a psychological concept (1); it points to the behavioral differences between men and women. Recent studies, particularly within the feminist community, have explored gender identity and roles in society. The current, popular views within many scholarly circles assert that gender differences are merely cultural and societal constructions (2). Theorists such as Judith Butler and Adrienne Rich claim that a true and inherent gender does not exist, and that accepted gender norms are compelled by traditional structures, attitudes, and institutions (2). Scholar Anne Fausto-Sterling, in fact, argues that, "Male and female babies are born. But those complex, gender-loaded individuals we call men and women are produced" (4). Such arguments are well-informed, and in many ways they are viable and legitimate. Still, others would argue that biology also plays a significant role in gender development. It seems that the best and most thorough way to understand gender is to consider all possible influences, both biological and environmental. Biology indeed contributes an important and inimitable voice to the discussion of gender development.
There are obvious external differences between male and female anatomy, differences in genitalia, for instance. Important internal distinctions exist as well, such as separate gonadal tissues (ovarian or testicular), hormonal balances (estrogen and androgen), and reproductive organs (3). In addition to these primary sex traits, a person will also later develop certain secondary sex characteristics (facial or bodily hair, for example) (3). Sex is essentially determined at the moment of conception, when a female egg, bearing an X chromosome, is fertilized by a sperm carrying either an X or a Y chromosome. Except for this single chromosomal difference, male and female embryos remain indistinguishable from one another (3). Yet it is this small difference that will, after approximately seven weeks of growth, ignite a chain of biological developments that differentiate the two sexes (3). Some of these differences will gear the developing being toward certain gendered behaviors.
A plethora of gender differences exist between males and females, differences which are often assumed to be inherent and natural. Clearly, not all of these distinctions are innate, yet it seems that some are more valid and consistent than others (1). Indeed, biological influences and contributions to sex differentiation can offer a foundation for these divisions. The other contributing factor is the environmental component. Social and cultural influences that enforce gender norms are frequently imposed at birth (1). Direct teaching, observation, treatment by parents, toys and activity assignment, clothing, and enforced personality traits all manipulate the development of a child's assumed gender role (1). It is the biological force, however, that dictates a person's physiological, neural, and hormonal makeup, and many of the distinct behavioral differences among males and females emerge from these factors.
Men and women clearly function in accordance with their own unique physiology. Women, for example, have both a lower metabolic rate (10% lower after puberty) and a higher percentage of body fat than do men (2). Men possess more muscle mass, because they tend to convert food to energy rather than fat, and denser and sturdier bones, tendons, and ligaments (2). Men have more sweat glands and can thus release heat quickly. Women, rather, have more insulation and greater energy reserves (and thus greater endurance) due to their higher content of subcutaneous fat (2). Men can circulate more oxygen than women due to their larger windpipes, lung capacity, and hearts (2). These characteristics signify a higher potential for activity, especially movement that involves short bursts of strength, in males. The limitations in locomotive activity which women experience can also be attributed to their bodies' capacity for pregnancy (5). Because they are programmed for child-bearing, women are equipped with a smaller range and capacity for physical performance (5). Furthermore, females have mammary glands (which provide nutrition and immune system codes to children) that also hinder locomotive movement (5). Such distinct physical makeup undoubtedly programs men and women for different activities. These characteristics also provide support for several of the gender stereotypes against which many argue, namely those involving physical size, strength and the abilities that arise from these traits.
Additionally, differences between the male and female nervous systems may also offer insight into seemingly controversial gender differences. Men seem to have fewer sensory nerve endings in their skin and therefore possess a higher tolerance for pain (2). The common "two-point discrimination test," in which the subject is to differentiate between two closely positioned pricks on the skin, also suggests that females are more sensitive to touch (2). Along with this sense, women prove to be more responsive and sensitive in their senses of hearing, smell, and taste (2). What do these results imply? Perhaps the female's highly sensitive nervous system and her acute senses confirm the stereotype which claims that women are more perceptive, aware, and responsive than men (2). These qualities, then, suggest that women possess better communication skills and can maintain more successful social interactions with others. The different physical makeup of men and women may well lend support to several of the personality and physical gender "stereotypes" that exist today.
The differences in male and female hormone types and proportions are also important factors in gender behavior. Both sexes possess androgens and estrogen, but at different levels (2). With the onset of puberty, male and female hormonal makeup becomes drastically distinct. After puberty, the male testosterone level is fifteen times greater than that of a female, and females possess approximately eight to ten times the male level of estrogen at this time (2). The existence and ratios of these hormones affect all organ systems; heart and respiratory rates, for example, are particularly influenced (2). Hormones also play a key role in the distinct responses to stress which men and women exhibit (2). Initially, both sexes display the same reaction: bursts of adrenaline which increase heart rate, blood pressure, responsiveness, alertness, and energy level (2). But prolonged and chronic stress elicits disparate responses in men and women. Females begin to release more estrogen (which in large amounts can sedate one's system) and cortisol, while reducing levels of serotonin, which is crucial for normal sleep patterns (2). Women also experience a reduced level of norepinephrine, which is necessary for one's sense of well-being (2). These responses suggest that, under heavy stress, women are more likely to suffer depression. The male system, conversely, reacts by increasing testosterone levels (2). In addition, androgen compounds affect the male system in a way that causes it to be hyper-reactive (2). Aggression and sexual impulses are consequently heightened (2). These responses, dictated by gender-specific hormones, also provide evidence for consistent and distinct gender behavior.
In order to understand more fully the differences between biologically and culturally driven influences on gender, it is useful to observe deviations from "standard" gender and sex patterns. For example, males afflicted with Klinefelter's syndrome (in which males possess an extra X chromosome) and other disorders that lower testosterone levels will often assume typical "female" traits (2). These characteristics include a longer life expectancy and a higher verbal aptitude (2). Additionally, males whose mothers were treated with DES, a synthetic estrogen, tend to be less aggressive, with a lower tendency to exhibit stereotypical male attributes (2). Likewise, some women who are given androgens during pregnancy (so as to prevent miscarriage) have female babies that become "tomboyish" and more active, displaying results on aptitude tests that are similar to those obtained by males (2). These cases suggest that hormonal makeup is extremely influential, and that it can often be the harbinger of one's future gender roles and identity.
Evolutionary psychology, a field which attempts to connect evolutionary development with current behavioral patterns among the sexes, also contributes an important voice in this dialogue. Because women are the child-bearers of any population, for example, they are more "evolutionary important" than men (5). A loss of males in a given population, therefore, would not be extremely detrimental to the survival of the next generation. This theory argues that because men are not burdened by an "evolutionary pressure," they can behave more "courageously," displaying "risk-taking" behavior (5). Women, on the other hand, are more valuable to the population and tend to behave more conservatively (5). Another aspect of this theory attempts to locate the origin or reason that women exhibit compassion and self-sacrifice more often than men. According to theorist Daniel Pouzzner, women display "other-centeredness," while men are more "self-centered." Female pregnancy is the root of this difference. Pouzzner argues that pregnancy is "an arrangement in which the female is parasitized by a separate organism" (5). This situation forces women to focus their concern and attention onto others, namely their children. Conversely, in evolutionary history, a male was often uncertain as to which offspring was his own (while the female was, of course, always aware) (5). A man's instinct to protect and care for others is thus far lower than that of a woman. Evolutionary forces clearly play a role in gender development.
Although cultural and societal forces undoubtedly shape our notions of gender, it is important to also consider the biological contributions to gender development. That is, while some gender stereotypes may certainly be false, others are perhaps more consistent and can be traced to a biological or evolutionary origin. It is also vital that we view biology as only one voice in a larger conversation; biology does not enforce stereotypes or imprison individuals within certain categories. It merely adds another perspective to the mysterious and ambiguous nature of gender. Indeed, we cannot study gender or any other issue without considering the influences and contributions provided by all fields. To gain the deepest understanding, it is crucial to approach the ideas of gender development and construction with an open mind, observing the matter through every available lens.
2) The Biological Basis for Gender-Specific Behavior,
3) Deciphering the Language of Sex,
Nonverbal Learning Disabilities Name: Kyla Ellis Date: 2002-12-19 10:14:55 Link to this Comment: 4147 |
"Learning Disabilities" is a term that gets used a lot these days. Even young children are familiar with the phrase, because it is used to reason why someone may not seem as smart as they are, or might have behavioral problems. The truth of the matter is, however, that learning disabilities are more common than we think they are, and that often times they go unnoticed or un diagnosed because the person who is suffering from them does not have behavioral problems, or has learned to "get by" in school work and so blends in with the academic community. There are many different kinds of learning disabilities, ranging from dyslexia, to spatial development disorder, to Attention Deficit/Hyperactive Disorder. Statistically, children with these conditions are more likely to do poorly in school and less likely to pursue higher education. Though we don't know fully the cause or fool-proof treatment of many of these disorders, we are learning, and have been able to develop many processes to help students with these disabilities have the same opportunities to learn as children not affected. In this paper, I wanted to find out the definition of a non-verbal learning disability, as well as the cause and possible treatment.
Children with nonverbal learning disability (NLD) are most often assumed to be precocious as toddlers because they have a very easy time developing their vocabulary, memory skills, and apparent reading ability. As the child starts pre-school, parents may notice that he or she has trouble interacting with other preschoolers, learning how to help him or herself, or adapting to new situations. These problems are often dismissed with little thought. The child usually ends up floating through early elementary school without much problem in the academic realm; maybe occasionally mixing up an addition sign and a subtraction sign, or some other small details.
When children enter the upper elementary grades, they are expected to be able to handle more things on their own. This is where the child with NLD begins to have trouble. They get lost, forget to do homework, seem unprepared for class, have difficulty following directions, struggle with math, can't read their textbooks, can't write an essay, continually misunderstand both their teachers and their peers, and are often anxious in public and angry at home. Teachers will complain that the child is lazy, rude and uncooperative, but in reality the child is frustrated because the classroom is not suited to their needs; they, in fact, have a learning disability (1).
Children with nonverbal learning disorders (NLD) often seem awkward and have a very hard time with both fine and gross motor skills. Riding bikes, kicking soccer balls, and tying shoes are all very challenging tasks, almost impossible to master. Children will often "talk their way through" the simplest of motor activities. They learn little from experience or repetition and cannot generalize information (2).
Students frequently "shut down" when faced with academic pressures and performance demands that require more than they feel they can do. Comprehension skills are weaker than those of the other children in their grade
(6). Many words that they hear, read and use are "empty" in that they don't have meaning to them; they are simply repeating what they have heard.
NLD is a neurological syndrome that affects the right side of the brain. There are certain assets and deficits associated with this syndrome. Assets include early speech and vocabulary development, excellent memory skills, attention to detail, early reading skills development and superior spelling skills. These individuals have the ability to express themselves eloquently in speech, and they have very strong auditory retention. There are four categories of dysfunction that appear with the disorder. Motoric dysfunction gives them a lack of coordination, severe balance problems, and difficulties with graphomotor skills. Visual-spatial-organizational dysfunction means they have a lack of imagery, poor visual recall, faulty spatial perceptions, difficulties with executive functioning (decision-making, planning, initiative, problem-solving, etc.) and problems with spatial relations. Social dysfunction means they have a lack of ability to comprehend nonverbal communication, difficulties adjusting to transitions and novel situations, and deficits in social judgment and social interaction. And, finally, sensory dysfunction makes them sensitive in any of the sensory modes: visual, auditory, tactile, taste or olfactory (3).
There is no known cause of NLD so far; however, brain scans of individuals with NLD often reveal mild abnormalities of the right cerebral hemisphere. A number of children with NLD have histories of (a) having sustained a moderate to severe head injury, (b) having received repeated radiation treatments on or near the head over a prolonged period of time, (c) congenital absence of the corpus callosum, (d) having been treated for hydrocephalus, or (e) having had brain tissue removed from the right hemisphere. All of these insults to the brain cause destruction of white matter (long myelinated fibers in the brain that serve as channels of communication between the brain and the rest of the body (4)) and of connections in the right hemisphere, which are important for communication in the body. Hence, current evidence and theories suggest that early damage (disease, disorder, or dysfunction) of the right cerebral hemisphere and/or diffuse white matter disease, which leaves the left hemisphere (unimodal) system to function on its own, is the contributing cause of the NLD syndrome (definitely not a dysfunctional home life). Clinically, this learning disorder resembles the case of an adult patient with a severe injury to the right cerebral hemisphere, both symptomatically and behaviorally (2).
Early consultation with a school psychologist or family physician typically serves only to incorrectly dismiss the child as "just a perfectionist," "immature," "bored with the way things are normally done," or "a bit clumsy." This only soothes a parent's or teacher's fears; later on the child is no longer able to function given the limitations of his disability and/or, in some cases, the child suffers a "nervous breakdown" (2).
The sooner a child can be treated, the better it will be for all parties involved. Though there is no "cure" for nonverbal learning disability, it is important to note that major steps have been taken within the past few years to train people to help those who suffer from this disorder so they can function in a normal educational setting. NLD is different from language-based learning difficulties, and educators and everyone else involved must be specially trained to deal with the cognitive, behavioral and social issues unique to NLD. Effective intervention programs include approaches such as language-based therapy; modified grading, testing, and homework assignments; and social skills training (5).
To be effective, interventions need to directly address the problem areas. Ideally, the student would be involved in any and all planning. The student is taught new behaviors in training exercises and then must be willing to apply those behaviors to similar tasks and situations beyond the training exercises. Tutors work with the student to develop several areas:
(a) Clarifying language concepts: the student works on developing accurate and flexible interpretations of vocabulary that describes space and spatial relationships. (b) Developing verbal reasoning: the tutor and student develop concepts for similarity and difference, classification and categorization, part/whole relationships, time order, cause-effect relationships, and spatial relationships (6). (c) Increasing comprehension and written output: actions are taken to give clear meanings to the "empty" words I mentioned above. (d) Improving cursive handwriting: practice to improve aspects such as letter formation, slant, spacing, alignment at the baseline, and overall control and fluency. And (e) improving social cognition: since these kids are so awkward socially, they need to practice social interactions like they would anything else. The tutor helps them through explicit instruction, practice, and encouragement. (6)
Non-verbal learning disabilities are frustrating for those who have them because there is not a lot known about them. Educators and teachers need to become aware of the symptoms and warning signs so that this disability can be diagnosed and steps can be taken to remedy it. There have been successful cases where students were able to function in the "real world" and went on to do exceptionally well in higher education. But too many times, people with this disorder fall by the wayside, dismissed as being "lazy" or "slow." We need to educate ourselves on this topic so that that does not happen.
1)NLD On The Web,
2)Nonverbal learning disorders ,
3)NLDLine,
4)White Matter,
5)Interventions,
6)Nonverbal Learning Disabilities and Remedial Interventions,
7)Definition,
Kidney Failure Name: Laura Silv Date: 2002-12-19 12:04:57 Link to this Comment: 4150 |
The kidneys are bean-shaped organs about the size of half a fist, located in the middle to lower back on either side of the spinal cord. Their most important function is to filter harmful chemical substances, such as alcohol and other toxins, out of the blood and turn them into urine. The urine is then transferred to the bladder through the ureters. At around sixteen ounces the bladder is full, and one feels the need to urinate. The kidneys also work to keep necessary chemical substances, such as potassium, sodium and glucose, in our blood. They also have many secondary or coincidental functions: for example, they help to regulate hormones and blood pressure and to stimulate the production of red blood cells (Medline Plus).
There are many things that can lead to kidney failure, since the human body, like most multi-cellular organisms, is intricately designed, with every part and function somehow connected to everything else. People with certain kinds of diabetes and high blood pressure are more likely to have failing kidneys as their conditions worsen. Diabetes, in which the body loses its ability to regulate blood sugar, can cause blood vessels to become thicker and narrower, which keeps blood from flowing as it should throughout the entire body, but noticeably (for the purposes of this paper, anyway) in and out of the kidneys. High blood pressure also hinders blood flow in the kidneys. High blood pressure has its own set of symptoms, but its damage to the kidneys is done silently, so if proper blood pressure is not maintained, damage can be done to the kidneys before there is any sign that something is wrong. Polycystic kidney disease, a hereditary disease in which cysts (little sacs of fluid) fill the kidneys, can also prevent the kidneys from working. Other things that can lead to kidney failure are blood diseases, repeated kidney infections, swelling in the kidneys, lupus, kidney stones, an unhealthy diet and the constant use of alcohol or strong medications (HUH Transplant Center, National Institute of Diabetes & Digestive & Kidney Diseases). When the kidneys are no longer able to function, End-Stage Renal Disease, or ESRD for short, develops.
The symptoms of kidney failure can vary with its causes, but there are several that can almost always be counted on. For example, some people experience a change in or loss of appetite. Since the kidneys dispose of extra water in the body, the body retains water when the kidneys stop working. This leads to swelling in the body, as well as shortness of breath and, as in the case of my uncle, fainting. The toxins that are not filtered out can cause a feeling of constantly being tired or weak, as though the patient suffered from mono. That was what my cousin Wendy thought she had when she went to her doctor. A less common but still noteworthy symptom is pain in the stomach, below the rib cage. With time, patients experience rising blood pressure (which acts as both cause and side effect of kidney failure) and a tendency to urinate less.
There are several methods of treatment available for failed kidneys and ESRD. The most common are the various forms of dialysis, a procedure that takes over the filtering job usually done by the kidneys, that is, cleaning toxins and excess fluid out of the blood. It exists in two forms: hemodialysis and peritoneal dialysis. In hemodialysis, the blood is run through a machine that acts as an artificial kidney and performs the necessary filtering before the blood is returned to the body. This process usually takes about three to five hours and must be performed three to four times a week (National Kidney Foundation). Peritoneal dialysis (named after the peritoneum, the lining of the abdominal cavity) works within the body to clean the blood. A tube called a catheter is surgically placed into the peritoneal cavity, which is filled with a solution called a "dialysate" that filters out toxins and waste; the used solution then exits the body (again through the catheter) and is discarded. Patients who are hospitalized for kidney problems usually go through this time-consuming process four or five times a day, or can opt to have the process performed at night while they sleep.
While dialysis is a good method of treatment for people with kidney problems, most people with ESRD would prefer to have a kidney transplant. The waiting period for a transplant is five to ten years, and whether one receives a transplant sooner is largely a matter of chance, depending on time, location, compatibility and the gravity of the case. There are currently 40,000 people waiting for kidneys. Sometimes a patient's relative will opt to donate a kidney. The ideal kidney donor is in near-perfect health and between the ages of eighteen and thirty-five (In Focus). If kidney failure is left undiagnosed or untreated, the toxins in the body build up and cause death.
Kidney disease can usually be prevented if one is free of any ailment which predisposes one to complications, such as the aforementioned diabetes, lupus or any hereditary diseases. Even hypertension can be corrected, and kidney disease with it, by maintaining a healthy diet that is low in sodium, potassium and alcohol. Some prescription medicines can affect the liver and kidneys, so make sure to ask your doctor about any effects like this, or about any possible reactions if you are on more than one prescription medicine. Non-smokers and those who exercise regularly (golf doesn't count) are also a step ahead in fending off kidney disease.
I have to admit that I did not know anything about kidney disease when I started doing the research for this paper. I wouldn't have even known what a kidney was if it hadn't been for high school biology. My uncle had been experiencing some of the symptoms of kidney failure, but he hadn't shared these symptoms with his doctor. Perhaps if he had, they might have been able to do something to save him.
Aposematism and Mimicry as Evolutionary Strategies Name: Virginia C Date: 2002-12-19 15:10:26 Link to this Comment: 4153 |
Aposematism, commonly known as "warning coloration," is a fascinating phenomenon found in nature, particularly in animals. Mimicry is the phenomenon in which organisms imitate the appearance of aposematic organisms, whether or not they themselves possess the same traits as the original aposematic organism. Aposematism is quite different from the traditional prey survival strategy of camouflage, of trying to go as unnoticed by predators as possible. While it seems unlikely that making one's markings as obvious as aposematic prey animals do would be an effective deterrent to predators, it works, provided that there are enough aposematic animals in a population to constantly "remind" the predators of their unpleasantness as prey (1). There are many different theories and speculations as to how and why aposematism develops and functions successfully, as well as how mimicry of aposematism works in different populations and to what extent. Aposematism strikes me as a logical strategy for survival, in keeping with the Darwinian evolutionary idea of "survival of the fittest," in that it seems to work to deter predators for the most part. However, when mimicry factors in, we see instances of aposematism that become less effective survival strategies, particularly with Batesian mimicry. In this paper I will attempt to explore these ideas of the effectiveness of survival tactics in relation to the phenomena of aposematism and mimicry, and come up with my own conclusions about how these phenomena fit into evolution and survival in nature.
Many animals exhibit aposematic warning coloring to deter predators (7), (10). However, the specifics of aposematic traits are varied. I inferred that there are two basic types of aposematism: there are venomous or otherwise unpalatable (inedible) animals, and there are animals that possess a noxious characteristic that is unpleasant to predators (for example, a bee's ability to sting or a skunk's ability to spray). The first group, which I will call "unpalatable aposematics," seem to use their aposematism to protect themselves from being eaten, by warning predators that eating them will either cause harm through poison or simply taste bad. The second group, which I will call "aggressive aposematics," seem to use their aposematism to communicate to potential predators that they are likely to provide some sort of discomfort and/or unpleasant reaction prior to being eaten, while they may not actually harm or injure the predator (or even taste bad).
It seems to me, based solely on the Web sources I found, that there are far more instances of unpalatable aposematism than of aggressive aposematism. In fact, one site even claims that only unpalatable aposematism is truly aposematism - it states that "genuine aposematism occurs when the warning color is truly associated with distastefulness or toxicity . . . warning color must evolve after the evolution of distastefulness or simultaneously with it to be truly aposematic" (9). However, I have found other sites that claim animals such as the skunk exhibit aposematism (11), so I chose to include it in my investigation. As far as I can tell, though, scientists make no distinction between these two types of aposematism - but I see them as fundamentally distinct, both in their causes and their effects. For example, a caterpillar that is an unpalatable aposematic will not show any defensive reaction to its predator upon being attacked. It will simply allow its own natural toxins to seep out, releasing a poison and/or a bad taste that will affect the predator. However, this reaction is not the same as that of an aggressive aposematic, such as a skunk, because it is not a defensive move, but rather a consequence once an attack has already happened. The skunk's aposematic markings warn that it will spray a mildly painful and foul-smelling secretion if provoked, as a method of defending itself and discouraging the impending attack, but once caught, it will present no poison and/or bad taste as in the example of the caterpillar. I imagine that an animal that was both an unpalatable and an aggressive aposematic could exist, and would have a high survival rate; however, I have found no example of such an animal in my research.
Since the majority of the available information on the subject seems to deal mainly with unpalatable aposematics, I too will focus on this type of animal. Unpalatable aposematics have been shown through various studies to be effective in deterring predators; however, it is not fully understood how (1). It makes sense that predators would be deterred by certain traits such as bright colors or certain configurations (2); however, if we look at human perceptions of similar traits, we see a new perspective that makes the whole picture more puzzling. For example, let's take the human sign for "biohazard." Humans have learned to associate the particular symbol and coloring (6) with a very specific sort of danger and/or threat. However, the first time any person sees this symbol in isolation s/he will not know what it means, because it is a learned symbol. Therefore, in theory the mind of a predator might work the same way: a predator sees a certain set of markings on a given aposematic prey animal, attempts to prey on said animal, and learns through the negative experience of the attempt to associate that particular marking with that particular unpleasant experience, thus avoiding said species (and possibly some or all species who mimic those markings) in the future. However, the game does not work this simply.
In one experiment (3), predators seemed to learn quickly to avoid aggregate groups of prey displaying similar aposematic markings after tasting just one or two specimens. This is explained by the premise that groups of aposematic organisms together have a stronger survival rate, since only one or two members will be lost before the predator learns his lesson, so to speak (1). Furthermore, this theory goes beyond simply aposematic populations; it applies also to organisms that develop markings similar to those of aposematic species to imitate their warning as a defense mechanism. Many times, this imitation (mimicry) occurs despite the fact that the mimicking organism has none of the undesirable traits (unpalatability, etc) that the aposematic organism has. I find this idea to be absolutely fascinating, that animals can "copy" the apparent defense strategy of aposematic animals, whose unpleasant traits are often very costly and difficult to maintain (2). It almost seems to me that mimicry is a more highly "evolved," if you will, state of being, because these animals seem to receive all the benefits of warning coloring as a defense mechanism without all the costly efforts to upkeep any unpleasant traits. However, mimicry is not as simple as all that.
There are two basic theories of how mimicry functions in wild populations: Batesian and Müllerian. Batesian mimicry involves a palatable, unprotected species (the mimic) that closely resembles an unpalatable or protected species in markings, therefore imitating its aposematic warning without actually exhibiting any traits which are unpleasant to the predator. True Batesian mimicry is parasitic in nature, with the unpalatable aposematic species deriving no benefit and possibly suffering harm. The mimics don't share the aposematics' nasty taste or painful sting, just their appearance and behavior (4). In a way, this type of mimicry seems to me to be a form of evolutionary "cheating," because the mimics are capitalizing on the clever evolution of the aposematics, but in a less costly way; and the fact is, a predator can learn from eating one or two mimics that the markings it sees, whether aposematic or not, mean nothing, and will therefore do harm to the true aposematic species. In my opinion, predators are most likely not capable of reasoning out complicated strategies of whom to attack or how many; therefore, if the mimics interfere with the predator's learned avoidance of aposematic and/or aposematic-looking species, then they are actually compromising the effectiveness of the strategy and endangering both populations. This does truly seem to be parasitic behavior, and yet very evolutionarily ruthless and intelligent at the same time (as most parasites seem to be).
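That last point can be made concrete with a small thought experiment. The toy simulation below is my own illustration, not taken from any of the sources cited in this paper: a predator repeatedly encounters prey bearing the warning pattern, learns to avoid the pattern when an attack is punished (a true aposematic model), and unlearns that avoidance when an attack goes unpunished (a harmless Batesian mimic). The learning rule and all parameters are invented purely for illustration.

import random

def attack_rate(mimic_freq, encounters=5000, seed=0):
    """Fraction of warning-patterned prey a learning predator ends up attacking."""
    rng = random.Random(seed)
    avoid = 0.0   # learned tendency to avoid the warning pattern (0 = never, 1 = always)
    rate = 0.1    # learning rate: how strongly one encounter shifts the tendency
    attacks = 0
    for _ in range(encounters):
        if rng.random() < avoid:
            continue                              # predator passes the prey by
        attacks += 1
        punished = rng.random() > mimic_freq      # models punish; mimics do not
        target = 1.0 if punished else 0.0
        avoid += rate * (target - avoid)          # nudge avoidance toward recent experience
    return attacks / encounters

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"mimic frequency {f:.2f}: predator attacks {attack_rate(f):.0%} of patterned prey")

Run under these assumptions, the predator's attack rate climbs roughly in step with the proportion of mimics, which is the intuition in the paragraph above: the more common the "cheaters" become, the less the warning pattern protects anyone, model or mimic.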
Müllerian mimicry highlights a different aspect of mimicry in populations. The premise of this type of mimicry is that if many different unpalatable species show similar markings (for example, different types of ladybugs all sharing the basic trait of black spots on red wings), the resemblance protects all of the species more effectively. The predator need only be exposed to one of the species to learn to avoid all of them, due to their similar markings (5). This seems rather like evolutionary cooperation to me, rather than the dog-eat-dog strategy shown through Batesian mimicry, which eventually harms all the populations rather than helping them.
All in all, the phenomena of both aposematism and mimicry seem fascinating to me, because they display a sort of evolutionary competition through a strategy different from the traditional, camouflage-based one that many prey use. However, it seems to me that aposematism is only effective when cooperative, i.e., with Müllerian mimicry rather than Batesian mimicry, and that when the more parasitic and competitive Batesian mimicry factors in, aposematism stops being an effective survival strategy. This brings up the interesting issue of whether nature is a cooperative or a competitive environment. Evidence of these two very different types of mimicry shows us that nature is in fact both, and that evolving populations must incorporate these two aspects, following whichever seems to be the most relevant for a given population. The entire process makes me think that evolution is a much more complicated and "intelligent" process than is immediately obvious.
References
1) MUSE - Natural Selection - Extending the Natural Selection Model
2) The Scoop on Warning Coloring Evolution
4) Most Spectacular Batesian Mimicry
Anthropocentrism or Deep Ecology? A Debate Name: Kate Amlin Date: 2002-12-19 16:27:33 Link to this Comment: 4155 |
"What is amiss, even in the best philosophy after Democritus [i.e., after the preSocratics], is an undue emphasis on man as compared with the universe." ~ Bertrand Russell, History of Western Philosophy (1)
The term "anthropocentric" is defined as "1) regarding humans as the central element of the universe" and "2) interpreting reality exclusively in terms of human values and experience" by www.dictionary.com (2). As an ideology, anthropocentrism postulates that human beings are more important than non-humans, including other species and nature in general, are. Western moral philosophies traditionally disregard the value of non-humans (3). "This is often characterized [by] an atomistic conception of humans as discrete and separate interacting units, in contrast to the holistic organic conception of organisms as nodes in complex biotic webs. The sharp separation between humanity and nature is said to be one of the characteristic deficiencies of shallow thought, which is often accompanied by the denial that the nonhuman world possesses intrinsic value" (3). Therefore, anthropocentrism justifies the indiscriminate slaughter of plants and animals to fulfill human needs.
Humans put a higher value on their own needs than on the value of the environment or other beings, and disregard the effect that their actions have on the environment. This includes a wide variety of actions, such as drilling for oil in Alaska, experimenting on laboratory animals, destroying the rainforest for paper products, killing exotic animals for sport, and polluting air and water during manufacturing. This blatant disregard for natural ecosystems, and the impacts that humans have on the environment, often go ignored. "Contrast the violence of a strip-mined hillside, or a clear-felled forest with the tranquil majesty of a climax ecosystem such as a tropical rain forest or a coral reef. 'Nature knows best', it is said" (3). The environment would be much better off if humans did not have such a stubborn penchant for interfering with and destroying their surroundings; however, the current dominance of an anthropocentric mindset often prevents the conservation of Nature.
Industrialization and advances in technology have contributed to the demolition of the environment at an alarming rate. In response, philosophers question how humans can prevent the destruction of Nature (3). Biologists are also looking for a solution to anthropocentric actions, since ecologists study the complex web of interdependencies that makes up the environment and affects all of the earth's species (3). Relatively speaking, humans have existed for an extremely short span of time. Yet they have destroyed the environment at an incredible rate, especially during the last 100 years (3).
Arne Naess, a Scandinavian philosopher and social activist (4), invented an alternative theory to anthropocentrism during the 1970s called deep ecology (1). Naess proposes that shallow, short-term solutions to environmental destruction, such as prohibiting the dumping of chemical waste products in bodies of water so that people will not have to drink contaminated water, are ineffective since they only put a Band-Aid on the problem or create a temporary fix (3). In contrast, Naess's deep ecology includes three basic tenets that will help avoid anthropocentrism and prevent further destruction of our environment by humans. "The first is the idea of ecocentrism, that is, the idea of adopting an ecology-centered (or an Earth-centered) approach. In this view, the nonhuman world is considered to be valuable in and of itself and not simply because of its obvious use-value to humans. The second basic idea is that of asking deeper questions about the ecological relationships of which we are a part, by addressing the root causes of our inter-linked ecological crises rather than simply focusing on their symptoms. The third idea is that we are all capable of identifying far more widely and deeply with the world around us than is commonly recognized, and that this form of self-development, self-unfolding, or 'Self-realization,' as Naess would say, leads us spontaneously to appreciate and defend the world around us" (1). Deep ecology works to end temporary solutions for environmental problems by ending the emphasis that humans have placed on themselves.
Val Plumwood, an environmental philosopher in Australia, further articulated Naess's theory of deep ecology. Plumwood theorizes that humans must reject Descartes' theory of instrumentalism, in which non-humans do not get moral consideration, and replace it by embracing the tenet that all things in Nature have intrinsic worth. In Plumwood's opinion, human values will have to be (re)constructed in order to give moral weight to non-humans (3).
By extending human codes of morality to non-humans, humans escape anthropocentrism by showing compassion towards Nature (5). Therefore, under deep ecology, humans should not recklessly destroy non-humans, because that action would be similar to murdering another human being (6). Humans should only manipulate the environment for the purpose of survival (6). Deep ecology is not a purely philosophical concept. It means that humans must realize, by rejecting anthropocentrism, how they are a part of the symbiotic web of Nature that furthers biodiversity and sustains our environment (7).
Environmental diversity helps humans too. "Diversity enhances the potentialities of survival, the chances of new modes of life, the richness of forms. And the so-called struggle of life, and survival of the fittest, should be interpreted in the sense of ability to coexist and cooperate in complex relationships, rather than ability to kill, exploit, and suppress. 'Live and let live' is a more powerful ecological principle than 'Either you or me '" (8).
Aldo Leopold, another environmental philosopher, remarks that humans must recognize non-humans as equals in order to flourish. "Leopold considers that it is important to 'change the role of Homo Sapiens from conqueror of the land-community to plain members and citizens of it'" (7). By continuing to pollute the air, contaminate the water, and poison the soil on the Earth, humans are procuring their own extermination (7). Humans must realize that they are not the only species that depend on the Earth for survival and work to protect the future of many diverse ecological organisms.
However, philosophers have found many problems with the concept of deep ecology. Critics of Naess and deep ecology follow the Kantian tradition, which theorizes that only rational beings (humans) should be given due consideration in terms of morality, since only rational beings are "something whose existence has in itself an absolute value" (3). Giving non-humans autonomy and moral weight is problematic, since autonomy is an infinitely regressive process. Taking the tenets of deep ecology to the extreme: if humans should not destroy non-humans, diseases would never be eradicated, since viral organisms like HIV and the West Nile virus would have the right to remain alive and untouched by human influences (3). Under this reasoning, humans would also die of things like heart failure and kidney disease, since each individual organ would, by deep ecology, be an organism that must be left alone by humans (3).
Additionally, critics of deep ecology believe that humans do not harm the environment any more than other living organisms do, or than nature itself does through such natural disasters as earthquakes and hurricanes. "If the concerns for humanity and nonhuman species raised by advocates of deep ecology are expressed as concerns about the fate of the planet, then these concerns are misplaced. From a planetary perspective, we may be entering a phase of mass extinction of the magnitude of the Cretaceous [period]. For planet earth that is just another incident in a four and a half billion[-]year saga. Life will go on in some guise or other" (3). In fact, natural disasters have caused more harm to the environment than the collective actions of humans. "Consider, first, what Lovelock (1979) has called the worst atmospheric pollution incident ever: the accumulation of that toxic and corrosive gas oxygen some two billion years ago, with devastating consequences for the then predominant anaerobic life forms.... Or the Permian extinction some 225 million years ago, which eliminated an estimated 96 per cent of marine species" (3). All organisms alter the environment. Humans are simply following the evolutionary cycle. "Nature in and of itself is not, I suggest, something to be valued independently of human interests. It could be argued moreover that in thus modifying our natural environment, we would be following the precedent of three billion years of organic evolution, since according to the Gaia hypothesis of Lovelock (1979), the atmosphere and oceans are not just biological products, but biological constructions" (3). Deep ecologists falsely assume that humans are responsible for all of Earth's problems.
Most environmental philosophers condemn deep ecology because total non-anthropocentrism is neither possible nor beneficial. "The attempt to provide a genuinely non-anthropocentric set of values, or preferences seems to be a hopeless quest. Once we eschew all human values, interests and preferences[,] we are confronted with just too many alternatives, as we can see when we consider biological history over a billion year time scale. The problem with the various non-anthropocentric bases for value which have been proposed is that they permit too many different possibilities, not all of which are at all congenial to us. And that matters. We should be concerned to promote a rich, diverse and vibrant biosphere. Human flourishing may certainly be included as a legitimate part of such a flourishing" (3).
Kristin Shrader-Frechette argues that all negative effects that humans have on Nature have negative consequences on humans too. Therefore, anthropocentrism is not totally human-centric since humans can save the environment by saving themselves. Shrader-Frechette argues that if humans act out of their own needs to prevent ozone depletion and global warming they will be helping to save the environment as a whole (9). Realistically, deep ecology has done more harm to the environmental movement than good because individuals are not willing to support a theory that does not privilege human interests (9).
Anthropocentric thought is inescapable. "Eliminating our interests and concerns is impossible if we are to forge a framework which will enable us to articulate an ethic of concern for the non human world" (3). Even Naess agrees that a total rejection of anthropocentric viewpoints would be undesirable since that would increase unemployment by putting loggers and other people who "harm" the environment out of work (10).
An environmental ethic totally void of human perspective is impossible to form and ineffective. "[I]f we attempt to step too far outside the scale of the recognizably human, rather than expanding and enriching our moral horizons we render them meaningless, or at least almost unrecognizable. The grand perspective of evolutionary biology provides a reductio ad absurdum of the cluster of non-anthropocentric ethics which can be found under the label 'deep ecology'. What deep ecology seeks to promote, and what deep ecologists seek to condemn, needs to be articulated from a distinctively human perspective. And this is more than the trivial claim that our perspectives, values and judgements are necessarily human perspectives, values and judgements. Within the moral world we do occupy a privileged position" (3).
Embracing biodiversity, environmental conservation, and an honest concern for Nature can be done within a framework of anthropocentrism if the needs of the environment are honored (3). Humans can take moral consideration into account when debating environmentalism and give value to non-humans without subordinating the needs and wants of human species (11). Naess criticizes this type of environmental philosophy for being "shallow" and only furthering humans at the expense of nature (8). However, humans can take steps to reduce pollution, protect natural resources, and conserve the environment from an anthropocentric mindset if they realize that their selfish actions have a huge, and often detrimental, effect on Nature.
"The preoccupations of deep ecology arise as a result of human activities which impoverish and degrade the quality of the planet's living systems. But these judgements are possible only if we assume a set of values (that is, preference rankings), based on human preferences. We need to reject not anthropocentrism, but a particularly short term and narrow conception of human interests and concerns. What's wrong with shallow views is not their concern about the well-being of humans, but that they do not really consider enough in what that well-being consists. We need to develop an enriched, fortified anthropocentric notion of human interest to replace the dominant short-term, sectional and self-regarding conception" (3). Humans can escape the pitfalls of anthropocentric thought by reducing their abuse of Nature. This can be achieved through such actions as legislative acts to secure environmental conservation, and a reduction in pollution, oil drilling, and logging. Anthropocentrism does not need to be abandoned. Instead, humans must realize that they are not the only species that depends on the existence of the Earth for survival. Humans should take actions that promote biodiversity and species conservation instead of destroying the environment to meet their own selfish, material needs.
1) Warwick Fox, "From Anthropocentrism to Deep Ecology"
2) "Anthropocentric" as defined by Dictionary.com
3) William Grey, "Anthropocentrism and Deep Ecology"
4) Harold Glasser, "Selected Works of Arne Naess"
5) Arne Naess, "Sustainability: The Integrated Approach"
6) Mikhail V. Gusev, "On the Problem of Anthropocentrism and Biocentrism"
7) John Seed, "Anthropocentrism"
8) Arne Naess, "Deep Ecology" from Cybermogensland.com
9) Thomas Gramming, "Environmental Ethics"
10) An interview with Arne Naess, "Human Rights Nature's Rights"
11) Huey-li Li, "On the Nature of Environmental Education"
The Real Leprosy Name: Christine Date: 2002-12-19 17:22:56 Link to this Comment: 4156 |
What thought comes to mind when you hear the word "leprosy"? The word might conjure up an image of a person covered in boils and blisters, or a community of outcasts where everyone is covered from head to foot to hide their disfigurements. The term "leper" is almost synonymous with outcast. I always thought it was a serious skin condition that could lead to disfigurement and the loss of limbs, something similar to the portrayal of leprosy in "Ben Hur", "Braveheart", and even in the Bible. While researching leprosy, I found that there are a lot of common misconceptions about the disease and its effects. The goal of this paper is to unveil the mysteries of leprosy and discover what exactly happens when a person contracts the disease.
The first written mention of leprosy dates from 600 BC, in the Veda books of India, and the disease was also recorded in ancient China and Egypt (1). The growth of communities and interpersonal contact allowed the bacteria to survive and spread. From their travels in India, Alexander the Great and his soldiers introduced leprosy to Europe. The regions surrounding the Mediterranean Sea were the first affected, and it was there that the infection got the name leprosy, derived from the Greek word "lepros," meaning "scaly" (1). The spread of leprosy has been linked to the expansion of the Roman Empire, and it reached its peak during the Middle Ages.
Throughout history leprosy has been associated with acute social stigma, and if a person contracted leprosy it was believed to be a curse from God. Leprosy was the most feared disease for many centuries, and the afflicted often were ostracized from their families and society. In Medieval times, priests gave lepers a mock funeral before taking them to a field and leaving them with specific instructions. They were forbidden to "enter churches or houses; wash [their] hands or clothes in spring or streams; wear anything but a leper's cloak; touch anything [they] wanted to buy...never talk to people unless down wind from them; never touch children or give them gifts; and never eat and drink with people other than lepers" (3). They were even instructed to warn others of their "miserable" presence with a horn, clapper, or bell.
In 1873 Dr. Armauer Hansen of Norway was the first to see the leprosy germ under a microscope, and he made a revolutionary discovery: leprosy came neither from sin nor from a curse, but was caused by a germ. Leprosy was given a new name, Hansen's disease, and the germ was named Mycobacterium leprae (4). Mycobacterium leprae, or M. leprae for short, is an "acid-fast, rod-shaped organism with parallel sides and rounded ends" (5). It occurs in large numbers and has a doubling time of twelve to fourteen days, the longest doubling time of any known bacterium (6). The incubation period from exposure to onset of illness is usually three to five years, but can be up to ten years (7). However, not much research has been done on the bacterium, since scientists have been unable to cultivate it in the laboratory.
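To get a sense of just how slow that doubling time is, here is a rough back-of-the-envelope comparison. This is my own illustration rather than a figure from the sources above; the roughly 20-minute doubling time used for E. coli is a common textbook laboratory value and is assumed here only for contrast.

# Rough comparison of bacterial doubling rates.
# The 12-14 day doubling time for M. leprae comes from the text above;
# the ~20-minute E. coli figure is an assumed textbook laboratory value.

M_LEPRAE_DOUBLING_DAYS = 13.0               # midpoint of the 12-14 day range
E_COLI_DOUBLING_DAYS = 20.0 / (60 * 24)     # 20 minutes expressed in days

def doublings(span_days, doubling_time_days):
    """How many times a population doubles over a given span."""
    return span_days / doubling_time_days

span = 14  # days, roughly one M. leprae generation
print(f"In {span} days, M. leprae doubles about {doublings(span, M_LEPRAE_DOUBLING_DAYS):.1f} time(s)")
print(f"In {span} days, E. coli in culture could double about {doublings(span, E_COLI_DOUBLING_DAYS):.0f} times")

In the time it takes M. leprae to divide once, a fast-growing laboratory bacterium could in principle divide on the order of a thousand times, which helps explain both the years-long incubation period and why the organism is so hard to study.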
What exactly is leprosy? Leprosy is a chronic infectious disease, the germ gathering in the cool parts of the body, particularly in the skin and surface nerves. The nerves in the hands and feet are affected, but leprosy also causes problems in the nose and eyes. There are over five million people still suffering from leprosy in the world, with over 800,000 new cases every year (8). Leprosy can occur at all ages, from infancy to old age. It also affects both sexes, although in most parts of the world males are afflicted about twice as often as females, a ratio of 2:1 (9). Since the nerves located near the skin are attacked, untreated leprosy can lead to a loss of feeling. Consequently, wounds and other injuries go unnoticed and untreated, leading to ulcers. In addition to loss of feeling, the person affected has muscle weakness, and neglect of care leads to paralysis and other serious disabilities (8). Rarely is leprosy the immediate cause of death, but people with leprosy have an additional mortality risk due to the disease's indirect effects.
There are many widely spread myths about the transmission of leprosy. Until recently it was believed that leprosy was transmitted through contact between an infected person and a healthy person. In fact, leprosy cannot be transmitted through pregnancy, sexual contact, or a handshake; 95% of people have a natural immunity to leprosy, and many people will never develop the disease even after exposure to the bacteria (4). However, long-term contact with an infected person increases the chance of a household member contracting leprosy. Recent research has indicated that leprosy is spread via the respiratory system. M. leprae collects in the nasal mucosa and is discharged by coughing and sneezing. Once it leaves the host, M. leprae from these "nasal secretions" can survive thirty-six hours or more, even up to nine days. It then enters the new host through two probable portals: the skin, especially open wounds, and the upper respiratory tract (9). In the United States, the spread of leprosy has been linked to infected armadillos.
Leprosy is rare in the United States, with only about 5,000 current cases and approximately 200 new cases every year. Ninety percent of all cases involved patients who lived in foreign countries where the disease is widespread. In the United States, leprosy has been reported mostly in Texas, Louisiana, and California. In 1975, the nine-banded armadillo was identified as a carrier of the M. leprae bacterium, and the animal can be found in the southwestern United States, Mexico, and Central and South America. There have been several cases of leprosy contracted by patients with extensive contact with armadillos. Contact has included hunting, trapping, eating, and wrestling armadillos. The Centers for Disease Control and Prevention conducted a study on the correlation between armadillos and people in the late 1970s. They found that between 1976 and 1981, 38% of infected patients reported exposure to armadillos. However, transmission between armadillos and people has remained somewhat of a mystery, although there have been several hypotheses. It has been proposed that M. leprae discharged from infected armadillos contaminates the soil, and since the bacteria can survive without a host for a period of hours or days, people can then become infected (2).
The first symptom of leprosy is a lesion, usually appearing on the face, legs, or back. The spot is discolored, usually reddish, darker, or less pigmented than the normal skin. The lesion is also characterized by hair loss and loss of feeling in that spot (10). However, the clinical appearance of leprosy depends on the body's immune response to the bacteria. The manifestations have been broken down into five categories of varying severity (11). One extreme is called tuberculoid leprosy, which involves a strong immune response. The appearance of leprosy is minimal, resulting in a few purplish lesions on the body. At the other extreme is the absence of an immune response, leading to lepromatous leprosy. This is characterized by extensive growth of the bacteria, resulting in numerous lesions. Patients with middle-range immune reactions fall into one of the three "borderline" categories between the two extremes.
Diagnosis of leprosy is based on symptoms and clinical signs, therefore requiring little laboratory use for a diagnosis. A skin biopsy, where a small piece of skin is taken from the area in question and examined in a laboratory, can determine the presence of M. leprae in the patient's system to confirm leprosy. Another test is a skin smear, where a small incision is made into the skin and tissue fluid is extracted and later examined in a laboratory. There are no blood tests for leprosy (4).
Leprosy has been the most feared disease for many centuries because of the horrible physical changes it causes. The effects of nerve damage cause frightening disabilities, and it is understandable why people were so afraid of contracting leprosy. One of the main areas of the body affected is the face. If the bacteria affect the eyes, the nerve damage causes loss of feeling and it becomes challenging to prevent injury. The eyelids become weak, preventing the patient from closing the eye properly. Tear production is also reduced, making it difficult to keep the eyes clean. The patient also loses the blinking reflex. Ultimately, the effects of nerve damage lead to dryness, corneal ulcers, and blindness. Leprosy can also affect the nasal cavity. M. leprae can enter the mucous lining of the nose and, over time, cause scarring and internal damage. After a period of time the nose collapses (12).
The greatest physical changes that leprosy causes concern the hands and feet. The sweat and oil glands cease to function, making the skin on the hands and feet dry and cracked. With the damage to peripheral nerves come small-muscle paralysis and a loss in strength. Over time, the loss of mobility leads to a "clawing" and curling of the fingers and toes. The damage to the nerves also causes a loss of feeling, and consequently a loss of protective reflexes. Patients cannot feel the effects of sharp objects, burns, and pressure. Without proper care, wounds and cuts become serious infections, resulting in the erosion of tissues and bones, amputations, and eventually crippling (13).
The good news about leprosy is that a treatment introduced in 1960, called Multiple Drug Therapy (MDT), a combination of three drugs, effectively kills the germ and prevents future drug resistance. After the first dose of this treatment the patient is no longer infectious, and after twelve months the patient is considered cured. Multiple Drug Therapy cannot reverse nerve damage, but it can prevent disabilities as long as leprosy is caught at an early stage and the patient takes the medication as prescribed (14).
Leprosy is still feared in many parts of the world, especially in areas with high rates of infection. The combination of fear, lasting stigma, and myths surrounding leprosy makes it more than a medical condition. People with leprosy fight shame and social isolation because of the deformities, sometimes neglecting treatment at the start of the disease. People affected need more than just medicine, and there are now foundations that not only provide treatment but also help sufferers cope with their diagnosis, disability, and social isolation.
1) History of Leprosy
2) Hansen's Disease
3) A Biblical Curse
4) Leprosy Facts and Myths
5) Microbiology of Mycobacterium leprae
6) Mycobacterium Leprae
7) Nature Genome Gateway: Mycobacterium leprae
8) Leprosy - What is it and Why the Myths?
9) Transmission of Leprosy
10) Diagnosis of Leprosy
11) Symptoms and Classification of Leprosy
12) Medical Examination of Leprosy - Face
13) Medical Examination of Leprosy - Feet
14) Leprosy Fact Sheet
Pretty Please? With MSG on top? Name: Joanna Fer Date: 2002-12-19 18:10:21 Link to this Comment: 4158 |
It seems there are controversies in every area of study, in every topic imaginable. We debate the origins of this, the beginnings of that, where this came from, whether it came before that. Socrates said, "the only true wisdom is knowing you know nothing," and this is very true. The more we learn, the more we learn to question, and the more we learn we know next to nothing of what we could know. It could be said, however, that most of us know about taste. Most of us, when we eat something, can identify, perhaps after some thought, what the food tastes like. If we eat a pretzel, we say, "salt!" and if we eat a piece of candy, we say, "sweet!" Our taste buds react and we feel the pleasure, or pain, of a taste. Of course, taste is not as simple as taste buds reacting to the food we put in our mouths. There is even controversy surrounding our taste buds. The controversy surrounds a debatable fifth taste sensation, umami. The taste sensations that science has discovered, isolated, and determined to be actual taste sensations are salty, sweet, bitter, and sour. Umami is a Japanese word that loosely translates into savory or meaty. It has been a taste acknowledged by the Japanese for hundreds of years; however, it is only recently that umami has been accepted by the Western world as its own separate taste.
A scientist named Kikunae Ikeda discovered in 1907 that glutamic acid was the cause of the taste defined by the Japanese as umami. Glutamic acid is one of the amino acids from which proteins are built. It is defined by the American Heritage Dictionary (1976) as "an amino acid present in all complete proteins, found widely in plant and animal tissue and produced commercially for use in medicine and biochemical research." According to Thomas Maeder, who wrote an article about umami in 2001, Ikeda said that "an attentive taster will find something common in the complicated taste of asparagus, tomatoes, cheese, and meat, which is quite peculiar and cannot be classed under any of the [other known tastes]. It is often faint, usually overshadowed by other, stronger, tastes, and may easily pass unnoticed. Had we nothing sweeter than carrots or milk, our idea of the quality 'sweet' would be just as indistinct as it is...with the peculiar quality." (1)
Glutamate, or glutamic acid, was first distinguished from other substances by Ikeda. He took a large amount of a seaweed broth used in Japanese cooking, in which he had noticed the taste with which he was concerned, and managed to separate this substance from the broth; this substance was glutamic acid. Deciding to make a seasoning from this substance, Ikeda created monosodium glutamate, or MSG. (2) MSG was used in foods, especially of the Asian variety, and continues to be used for flavoring today. However, umami as a taste has only newly been recognized. While Ikeda had hypothesized that glutamate was indeed its own separate taste, it was not until recently that actual evidence of this was found.
A taste bud consists of anywhere from 50 to 100 taste cells, and each taste bud contains taste cells for each of the taste sensations. Therefore, each part of the tongue can taste each taste sensation that exists. The idea that there are separate regions of the tongue for the taste sensations is incorrect; scientists have known this for decades. Each taste cell has taste receptors on its apical surface, that is, the surface of the cell that is exposed. According to a website about taste, "Each taste bud has a pore that opens out to the surface of the tongue enabling molecules and ions taken into the mouth to reach the receptor cells inside." (3) The tongue is where most of the taste receptors are; however, they are also found elsewhere in the mouth and throat. Taste cells are associated with papillae on the tongue. The various forms of papillae are fungiform, which are associated with 18% of taste buds; foliate, which are associated with 34% of taste buds; and circumvallate, which are associated with 48% of taste buds. These papillae line the tongue, so we do indeed taste with our tongues, though taste buds are found in other regions of the mouth and throat.
Taste is imperative in many ways to human survival. There are some who believe that taste "drives appetite and protects us from poison." (4) When we taste food or drink that is sweet, we accept it into our bodies; most of us like sweet tastes. When we taste food that is bitter we hesitate to continue eating it; most poisons taste bitter, and we therefore have taste receptors to determine the basic qualities of foods and drinks so we do not die when eating or drinking them. Food that is salty is good; food that is strongly acidic is generally bad. If we had no taste, we would not be able to distinguish bitter foods from sweet foods; we could very well kill ourselves without taste. Umami is different from these other four in that it is something of a general sense when it comes to food. Glutamic acid cannot be tasted when it is bound to a protein. It is only when the proteins begin to break down, or when glutamic acid is produced artificially, that we can detect the umami taste. This is why umami is tasted in aged cheese or fermented foods. Umami is, partly, the sense one gets when one eats a well-balanced meal: each bit of umami in each food is brought out by the other umami present, and this creates a rich umami sense. The need for a well-balanced meal is fulfilled by eating food that tastes good when it is put together. Spaghetti on its own is bland; add tomato sauce and it is even better. With meatballs added, and some cheese to top off the dish, there is a wonderfully tasteful meal ready to be eaten. Umami is found in traces in all these ingredients, especially the tomato sauce, meat, and cheese. When these foods are combined, deliciousness occurs. Delicious food is sought after, and delicious combinations are often healthy. Umami, or the desire for umami, causes us to create meals that have more than one part to them. As Dr. Stephen Roper said, "This umami taste probably drives our appetite for protein, just as sweet drives your appetite for carbohydrates and saltiness drives our appetite for salt or minerals. The bottom line is that glutamate tastes good and makes you want to eat. That's why adding monosodium glutamate makes food taste better and makes you want to eat more." (5)
Taste cells are like neurons: they have a "net negative charge internally" and a "net positive charge externally." What we put in our mouths, and therefore taste, changes the charges in the mouth. Tastants use "various means to increase the concentration of positive ions inside taste cells, eliminating the charge difference." This depolarization causes taste cells to release neurotransmitters, which cause nearby neurons to send messages to the brain about the taste of the item. In effect, taste is a way for the mouth to maintain its chemical homeostasis. A substance is introduced to the mouth that causes the chemistry to change, so the cells in the mouth set off to counteract the presence of the chemical in the mouth. The reaction is what we taste. When a substance enters the mouth that is bitter to taste, it sets off the parts of the taste receptors that react to basic chemicals; that is, the cells react in a certain way, as with the other taste sensations.
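The "net negative charge internally" mentioned above can be made quantitative with the Nernst equation, a standard electrophysiology formula for the voltage a single ion species would set up across a membrane. This is a general illustration rather than something taken from the sources cited in this paper, and the potassium concentrations below are typical textbook values for excitable cells, assumed only for the sake of the example.

import math

R = 8.314     # gas constant, J/(mol*K)
F = 96485.0   # Faraday constant, C/mol
T = 310.0     # roughly body temperature, K

def nernst_mV(z, conc_out_mM, conc_in_mM):
    """Equilibrium potential, in millivolts, for an ion of charge z."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Typical textbook potassium concentrations: ~5 mM outside, ~140 mM inside.
print(f"Potassium equilibrium potential: about {nernst_mV(+1, 5.0, 140.0):.0f} mV")

With those assumed numbers the inside of the cell sits near -90 mV relative to the outside, which is the "net negative charge internally"; letting positive ions flood in pushes that value toward zero, and that depolarization is what triggers the neurotransmitter release described above.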
Henry David Thoreau once said, "live each season as it passes; breathe the air, drink the drink, taste the fruit, and resign yourself to the influences of each." Tasting the fruit and drinking the drink can be made that much better if we acknowledge the existence of umami and strive to taste it. There is a theory, which I touched upon briefly before, that our taste buds are there to help us eat a balanced diet. Umami is the taste that corresponds with protein; if a person is craving something umami, they are craving protein. Often we are craving something we just cannot name. Most often, I am sure, that taste we are craving is umami. I did a small, informal survey of a few people. No one I asked knew what umami was. I asked students at Bryn Mawr, and a few professors, one of whom was born and raised in Japan. This shows how little umami is known throughout the world. Japanese cooks may be aware of the sensation, but most of us have no idea. We are all aware of MSG, but few of us know that MSG is good, not bad. When umami gets its proper recognition as a taste sensation, and the general public is made better aware, all of our palates will be extremely happy.
1) www.redherring.com
2) http://www.glutamate.org
3) Taste Page
4) http://www.cf.ac.uk/biosi/staff/jacob/teaching/sensory/taste.html
5) http://dir.salon.com/health/log/2000/01/24/umami/index.html
The Body Farm Name: Brenda Zer Date: 2002-12-19 20:57:37 Link to this Comment: 4159 |
When I received my driver's license and saw the little part on the back that asks if you are an organ donor, I decided that I wanted to donate my entire body to science (I mean, what is the use of filling my body with toxic embalming chemicals and taking up space in the ground?). Since then, I have often wondered exactly what I'd like them to do with my body; use it for medical research, donate my organs to sick people; I had not really decided until I read about a fascinating little research facility in Tennessee.
I became interested in forensic anthropology in 1995, when I read a book called Dead Men Do Tell Tales by Michael Browning and William Maples. (1). Five years later, I purchased another forensic anthropology book called Bones, by Douglas Ubelaker and Henry Scammell. (2). Although this was a much less interesting book than the first, one part of it stuck with me. Ubelaker has a chapter in which he discusses decomposition: this was the first time that I had ever read about the University of Tennessee's Forensic Anthropology Facility (more commonly known as The Body Farm). (3). In middle school, I wanted to be a forensic anthropologist more than anything (so, of course, everyone thought that I was weird for wanting to spend my life working with dead people).
This facility is the only forensic anthropology research station of its kind on the planet. Dr. William Bass III created it in 1972 with the permission of the University of Tennessee. (10). Almost all of the available knowledge about human decomposition is based on evidence gathered at the University of Tennessee. The site was only nicknamed the Body Farm after Patricia Cornwell wrote a mystery novel about it in 1994 (titled The Body Farm). (3).(5). Current workers at the site prefer that people not call it The Body Farm, as that name has disrespectful connotations and they have a great amount of respect for the "residents" of the research station. (4).(8).
One of the goals of the research facility is to create "a calendar of decomposition by finding...the half-life of death, so to speak." (3). William Bass retired in 1999, and the current operators of the facility, Professors Richard and Lee Jantz, have their team studying the bodies for various features. They are studying insect predation levels, protein degradation in major organs, and the levels over time of decomposition compounds such as putrescine and cadaverine. (3).(6). In forensic investigations, establishing an exact cause and time of death is important in solving a crime. (6).
The University of Tennessee Forensic Anthropology Facility gets its bodies from many sources: unclaimed bodies from the morgue, outright donations from people like myself (although there is apparently a waiting list!), and corpses of criminals whose families either do not claim the body or refuse to pay for its interment. (3).(6).(8). Therefore, unless a person's family claims the body, it may be donated to science and could end up as a resident of the Body Farm.
The three-acre lab is protected from unwanted visitors by razor-wire fences and a wooden privacy wall. (3).(7). As Dr. Bass puts it, "we're not worried about the dead getting out...it's the living we're concerned about!" (8). At any one time, the station will have at least twenty bodies decomposing. The center has had more than 400 bodies on its property since it was founded, the bodies ranging from fetuses to persons up to 101 years old! (10). The bodies are placed in either simulated crime scenes or natural-death poses: out in the open under the hot sun, in the shade under a tree, left in a house, or locked in the trunk of a car.
They are also placed at the bottom of a lake, buried (in both shallow and deep graves), hung from scaffolding, or arranged in various other positions which could be of use to forensic anthropologists. (6).(8). Even the absence of a body can be an indicator of a crime scene. If a body was buried in a spot at some point, certain acids will have leached into the nearby soil. Investigators can simply scan the soil to determine whether a large mammal was ever buried in that location. (8). It reminds me of something that you might see on such popular television shows as CSI: Crime Scene Investigation or maybe the X-Files!
In cooperation with the Federal Bureau of Investigation (FBI), the forensics center buries some of the bodies under concrete slabs of varying thicknesses so that the FBI may test new GPRs (ground penetrating radars). (6). In the last few years, the FBI has requested further help from the facility, asking that the team there teach a course on decomposition to top federal agents. (7).
Part of the Body Farm's mission is to help law enforcement on matters of forensic anthropology. Amazingly enough, 90% of people studying law enforcement have never even seen a dead body. (6). Police dogs are sometimes brought to the facility so that they may learn what decomposing flesh smells like; this helps when law enforcement officers are searching for a body, since humans can only smell decomposing bodies from about 30 yards away, and police dogs, with their much keener sense of smell, are useful at greater distances. (7).(8). The Jantzes are not trying to turn all officers into forensic anthropologists, but rather to teach them to handle the materials in such a way that they can be safely transported to a specialist. In fact, they teach this so well that there have been requests from places such as Turkey and Hungary for the Jantzes' team to come and give lectures to Turkish and Hungarian law enforcement on the subject. (6).
Although many people have given this facility great praise for its work, nobody else really wishes to see it recreated elsewhere. Bass originally tried to establish other sites around the country so that cadavers could be studied in different climates. As he put it, "you decompose much more slowly in Minnesota than you do in Miami." (6). While excellent data may be gathered in Tennessee, the decay rates there are not representative of other climates. In hot, humid environments, maggots can clean a body of all its flesh in less than two weeks. (8). Even in light of this setback, no other university or private research facility wishes to enter into the controversial world of posthumous research, either because they are squeamish or because of religious beliefs. (6).(7). With only 60 certified forensic anthropologists in the country, there is not a wide range of people who could be willing in the first place. (10).
Although this is a one-of-a-kind research facility and will probably never be recreated anywhere else, I think that the scientific value of this institution is beyond doubt. With forensic evidence, it has now become possible to convict murderers and to help people recover their lost loved ones. I never realized how complex a crime scene really is - even the soil around the body contains vital clues. The information gathered at the University of Tennessee's Forensic Anthropology Facility about natural putrefaction processes has furthered science in innumerable ways.
1)Browning, Michael and William Maples. Dead Men Do Tell Tales. New York: Main Street Books, 1995.
2)Scammell, Henry and Douglas Ubelaker. Bones. New York: M. Evans and Company, Inc., 2000.
Albert Hofmann, the man who first synthesized LSD in the late 1930s while investigating various alkaloids derived from the ergot fungus, used for centuries in folk medicine to precipitate childbirth, wrote a memoir about his discovery of the psychedelic drug that made him famous and forever changed the ways in which many people would view the world: he titled it My Problem Child. (1) And indeed, LSD has been a problem child: its proponents claim that it elucidates the complex, ephemeral elements of the mind, heightens perception, leads to self-discovery and enlightenment; its detractors claim it is dangerous, unpredictable, that it causes unknowable damage to the mind and brain, that it drives people insane. No discussion of counter-culture since its discovery would be complete without it: and while many might consider LSD to be a curiosity of the counter-cultures of the 1960s and 1970s, a by-product of some kind of failed ideology promoted by the likes of Timothy Leary and Hunter S. Thompson in a past growing ever more distant, it has not actually disappeared from American counter-culture. According to the DEA, the demand for LSD has been steady for the past few decades, but is now increasing, and much of that demand comes from high school and college-aged students. Close to ten percent of graduating high school seniors reported having tried LSD in a recent national survey. (2)
The difficulty with understanding hallucinogens in a scientific sense derives from the current 'mind versus brain' debate. On one hand are physiologists, reporting the effects of drugs like LSD in terms of "brain": neurotransmitters and chemical reactions, physical structure. On the other hand, psychologists report on hallucinogenic drugs in terms of "mind": they are fascinated by a subject's reports of feelings of euphoria or megalomania, reliving repressed memories, effects on personality, and so forth. There is not, at this juncture, a lot of middle ground between the two camps, and so any kind of attempt to describe the mechanics behind hallucinogenic drugs will be, as one researcher put it, "necessarily coarse." (4)
Mind-altering drugs are able to fool brain cells into treating them as chemicals that are normally found in the brain; that is, their molecular structure is similar to that of a real neurotransmitter. LSD is shaped like the neurotransmitter serotonin, and is thus able to fit into the particular grooves of the synaptic surface normally used by serotonin, and is shot across the synaptic gap by the electro-chemical processes that normally transmit serotonin. When LSD reaches the other side of the synaptic gap, however, it doesn't carry the electro-chemical impulses in the direction they should go (assuming it were the neurotransmitter it is mimicking); instead, the impulse is redirected down less highly conditioned, less familiar pathways. "By redirecting consciousness, as it were, into the unimprinted areas of the [cerebral] cortex, one hypothetically experiences the world anew, thus the variety of interpretations which arise upon questioning psychedelic users about their 'trip'." (5)
In addition, LSD seems to affect the locus coeruleus, a part of the brain thought to be responsible for channeling sensory input: it speeds up the firing of the locus coeruleus' neurons, possibly explaining the symptom of synesthesia, the crossing of sensory perceptions reported by LSD users, such as hearing colors or seeing sounds. (3) Other symptoms reported by and observed in LSD users include dilated pupils, hair standing on end, decreased blood pressure and slowing of the heart rate, increased perceptual sensitivity to one's surroundings, disturbed perception of time, decreased sense of ego-self, and increased vulnerability to suggestion.
Many people believe that LSD permanently alters the structure of the brain, or remains in the system forever. All the research done thus far would indicate that this is not the case: rumors that LSD alters chromosome structure, for example, were debunked in the early 1970s (3), and LSD is only detectable in a person's system for one to four days, depending on who you ask. Further, LSD is a crystalline solid, but it is water-soluble: it is not capable of forming deposits. The much-discussed flashbacks associated with LSD are not caused by changes in neuro-chemical structure, but are actually a normal psychological phenomenon. Any intense emotional experience (the death of a loved one, a car accident, a mugging, the moment when one realizes he or she is in love) may subsequently return to the forefront of one's consciousness without conscious prompting. Since the "LSD trip is often an intense emotional experience, it is hardly surprising that it might similarly 'flash back.'" (3) Proponents of LSD note that while there have been incidents of so-called LSD psychosis, those who suffer are generally those with histories of mental and psychological disorders: essentially, if the resulting psychosis had not happened directly after an episode with LSD, it would have happened eventually.
Hallucinogenic drug users subscribe to the notion that the keys to a good experience are set, meaning mental preparation on the part of the user, and setting: that is, when consciously taking a substance that will make you especially sensitive to outside stimuli, it is extremely important to be sure that those outside stimuli are pleasant ones. The increase in use of LSD and other hallucinogenic drugs by younger people has been linked by many to the rave culture that began to develop in the early 1990s. (5) Raves, for those not in the know about such things, are parties of sorts with electronic music and often elaborately planned light shows or other visual stimuli, designed to bombard the senses of all in attendance: they generally last until late into the night, or sometimes even into early morning. Raves can attract thousands of people for highly publicized events, and are popular not only in America, but also in Europe and elsewhere in the world. Drug use is popular among ravers, apparently, and many public policy makers have attempted to pin at least part of the blame for the increased use of hallucinogenic drugs on rave promoters: the thinking goes that by providing an exciting "setting," rave promoters are increasing the demand for hallucinogens. While ecstasy (MDMA, an amphetamine-hallucinogen combination with far more thoroughly documented adverse physical side effects than LSD) is the main source of anxiety for those concerned with raves, and while ecstasy is more popular than LSD among young people, LSD is increasingly among the drugs available to and consumed by ravers. (2) This summer, a bill was introduced in Congress cutely called the RAVE act (Reducing American Vulnerability to Ecstasy) that proposed harsh sentences for promoters of events whose patrons were found to be in possession or under the influence of illicit drugs. (6) Questioning the ethics behind holding promoters responsible for the actions of their patrons, and predicting that raving would not disappear but rather become a clandestine and underground activity, thereby increasing the risks to those who chose to attend, many groups have organized to oppose the RAVE act; as yet, it has not become law.
But the question of who or what to hold responsible for the sudden increase in interest in hallucinogens like LSD and ecstasy is an interesting one, and I don't think current public policy makers are going about answering it properly in assigning the blame to event promoters. The effects promised by hallucinogens (unpredictable insight into yourself and the world; increased appreciation for beauty; enlightenment; heightened and other-worldly perceptions; empathy for all people and union with the cosmos) appeal only to certain kinds of people. Aldous Huxley described the original users and proponents of LSD as "a nation's well-fed and metaphysically starving youth reaching out for beatific visions in the only way they know" (1), that is, through drug use. A tool for redefining consciousness is only attractive to those who feel their consciousnesses are in need of redefinition: the real source of increased demand for hallucinogens is that nagging void that has haunted various sub-groups of Western society for centuries. Counter cultures exist because popular culture has failed to meet the needs or desires of its constituents, and so some people drop out (pardon the pun) of that popular culture to find a new solution to the problems they perceive. Whether LSD and other hallucinogens and the ideals promoted by their advocates are a part of that solution, or just symptoms of a greater, insolvable problem, is a matter I have not resolved. But the issues hallucinogens like LSD raise (ideological, physiological, ethical, and scientific) are likely to continue to shape the neuroscience and ontology of this and future generations.
2) US Department of Justice Drug Enforcement Administration
3) The Vaults of Erowid, Don't be fooled by the kind of ridiculous New Age graphics: this is a serious site with a lot of information.
4) Ian Leicht: Postulated Mechanisms of LSD
5) The Lycaeum, lots of information. Particularly cited in this paper were an essay on The Pineal Gland, LSD, and Serotonin and an article on increased use of LSD among high school students.
6)The RAVE act , as outlined by the Electronic Music Defense and Education Fund, a not-for-profit whose goal is to lobby and act as a voice for the electronic music industry. Also worth looking at: Dancesafe
For fun:
Performing in front of an audience is nerve-wracking, especially when that performance involves playing a musical instrument. Drawing a distinction between the kind of pressure on a musician and, say, an actor or acrobat might seem peculiar at first thought. "An audience is an audience," one might say, "I mean, come on, I get nervous giving presentations in a class with ten people." But on further thought, an important difference can be noted: symphony audiences are well known for being far less tolerant of mistakes and imperfections than their theater-going and circus-frequenting counterparts. An actor trips on a line or a dancer falls during a difficult lift, and the audience is likely to respond with concern for the dancer's well-being, with gentle laughter at the actor's blunder. The stress on technical perfection, the pressure to play flawlessly, experienced by symphony musicians has been studied in some depth. Researchers into the stress levels of various jobs rate symphony musician right on par with jet fighter pilot.(1) And a musician's ability (or inability) to handle that sort of stress can be a making or breaking factor in her career: a musician who cannot perform under pressure (at auditions, in front of large audiences, or with prestigious company) is going to find herself short of available work.
So it should come as no surprise that the musical community has seen a rise over the past thirty some-odd years in the use of a particular prescription drug claimed to relieve some of the symptoms of that stress. Propranolol, a beta-blocker more commonly known by its brand name, Inderal, is not approved by the Food and Drug Administration specifically for the treatment of performance anxiety (2). And yet, it has been used with increasing popularity since the 1970s to combat the physical manifestations associated with anxiety over performances: "sweating, tenseness, pounding heart, trembling, and sometimes a dry mouth (a particular problem for wind players)" (1). Usually used to lower blood pressure or regulate heart rhythms, beta-blockers prevent certain hormones, like norepinephrine and adrenaline, from being accepted by chemical receptors (beta-receptors) found in the brain, heart, lungs, arteries, uterus, and elsewhere throughout the body. In doing so, beta-blockers thwart the body's ability to follow through with its natural fight-or-flight response to perceived crisis.(3) Beta-blockers are not sedatives, and they can't help anxiety of a purely psychological nature; "Inderal doesn't work against the cognitive, or intellectual, symptoms of anxiety. It's most effective in slowing your heart rate, keeping your hands still, and quelling the jitters."(2) But for a musician nervous over an important audition or performance, taking a pill an hour before a stressful event can allow the performer to do his best, without distraction caused by involuntary trembling or uncontrollable breathing.
Propranolol can cause physical dependence with chronic use; but the beauty of the drug is that chronic use is not required, or even desirable, for musicians to reap its anxiety-quashing benefits. While beta-blockers are not recommended for people who suffer from heart disease, low blood pressure, asthma, diabetes, hyperthyroidism, or a tendency toward depression,(1) they are considered among the safest of medications. In 2000, over 18 million prescriptions for beta-blockers were written.(4) A study reported in 1986 of 2,122 musicians in major U.S. symphony orchestras showed that 27% reported taking beta-blockers. Of that 27%, 19% took them daily under a doctor's prescription for heart conditions and the like, 11% had a prescription for occasional use (concerts, auditions, etc.), and the remaining 70% reported occasional use, but without a doctor's prescription. Among those who reported occasional use, auditions were cited as the most frequent prompter, followed by various kinds of seriously demanding performances. Only 4% of those who reported occasional use found they used Inderal for every performance.(3)
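To make those survey percentages concrete, here is a quick back-of-the-envelope calculation (illustrative only; the study's raw counts are not given in the sources, so these are simply the quoted percentages applied to 2,122 respondents):

# Rough arithmetic for the 1986 survey figures quoted above (illustrative only;
# the published study's exact counts may differ because of rounding).
respondents = 2122
users = 0.27 * respondents              # 27% reported taking beta-blockers
daily_rx = 0.19 * users                 # daily use, prescribed for heart conditions, etc.
occasional_rx = 0.11 * users            # occasional use, with a prescription
occasional_no_rx = 0.70 * users         # occasional use, without a prescription
print(f"beta-blocker users:            ~{users:.0f}")
print(f"  daily, prescribed:           ~{daily_rx:.0f}")
print(f"  occasional, prescribed:      ~{occasional_rx:.0f}")
print(f"  occasional, no prescription: ~{occasional_no_rx:.0f}")

That works out to roughly 570 users, of whom about 400 were taking the drug occasionally with no prescription at all.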
As one might expect, not all musicians agree with using beta-blockers, or that doing so would better their performances. Some musicians claim that increased adrenaline is actually a benefit to their playing: the higher stakes increase the intensity of their performance. As one musician explained, performing on Inderal made him feel as if he were "cut off from the music."(1) And while some slight stigma remains around using beta-blockers in the musical community (individuals who take beta-blockers for anxiety are sometimes perceived as weaker players, much the same way that some people still view an individual who uses prescription medication to treat depression or Attention Deficit Disorder as weaker-minded), their use is so widespread that they are considered fairly acceptable.
The use of drugs like Inderal has actually begun to spread into other arenas in recent years, being used by more and more varied types of people: surgeons performing delicate operations, high-level business executives. One study showed that a third of all cardiologists (who would know beta-blockers better than most people) report using them for performance anxiety while presenting lectures at symposiums.(5) Psychiatrists have recently begun prescribing Inderal for students with test anxiety.(6) While the FDA does not officially recognize propranolol as a treatment for tremors associated with nervousness, the International Olympic Committee banned the use of Inderal in 1985 in several sports that require especially steady hands.(2) The Mayo Clinic is currently putting together a study to see if beta-blockers might be useful in helping golfers suffering from the "yips," a condition involving involuntary spasming of muscles while attempting high-pressure, short-range shots.(7) A trend that developed in the orchestral music community in the 1970s of misusing a popular drug is slowly spreading into varied and sometimes surprising fields of medical and psychological use and research. It strikes this student as odd that musicians are forever being exposed as creative users of chemical substances to better their performances, popularizing various techniques for improving one's faculties with chemical alterations. Ah well; at least this one is legal.
2) Business 2.0: Courage in a Pill
3) Beta Blockers and Performance Anxiety
5) Social Anxiety Disorder Assessment and Pharmacological Management
6) SVE Guidance Office, discusses high school students taking Inderal for test anxiety.
All of us are born with a set of instinctive fears--of falling, of the dark, of lobsters, of falling on lobsters in the dark, or speaking before a Rotary Club, and of the words "Some Assembly Required."
Everyone has fears; it's a natural part of being human. When we are young, we fear the dark and the boogieman. When we get older these fears change to fear of what others think about us and of how well we do in school. Older still, these fears change again and become in some ways more complex. This is natural. But when these fears become unnatural, when they take over our lives, cause us to take measures to avoid things or situations, or get in the way of our enjoying life, then they become a problem. Then we have a phobia.
The word "phobia" is Greek in origin. To correctly name a phobia a Greek prefix should be used but usually Latin terms are used since the medical community is so steeped in the Latin language (1). Phobias are very common, affecting people from all ranges of life, all ages, all regions. In fact, a report by the National Institute of Mental Health states that 5.1%-12.5% of all Americans have phobias. They are the most common mental illness of all women and of men over the age of 25 (2). Why the age stipulation on men and not on women? Probably because women are twice as likely to have a phobia than men are (3).
In a basic definition, a phobia is just an exaggerated anxiety. A substantial part of the population, about 15%, has experienced or will experience a phobia or some other form of anxiety (4). People with phobias are essentially like Pavlov's dogs. Just as the dogs salivated when they heard a bell, a person with a phobia will get anxious when around, or thinking about, the trigger or stimulus of their specific phobia (5). What causes these triggers? For the most part it is unknown, but certain phobias, especially those that seem based in superstition, can emerge when a person's overriding insecurity makes them believe that their luck rests solely in circumstances outside themselves (4). Most people know that their fears have no real basis, that they are even foolish, but facing the object of those fears (whether physically near or imagined) will cause, at the least, serious anxiety (3). This is what separates a phobia from ordinary fear: in a phobia, the fear is extreme and out of proportion, and the person goes to great lengths to avoid its object. The fear and the avoidance interfere with the person's life, hindering them from living and having fun and, in severe cases, from holding a job or maintaining relationships with others (1). The phobia will force a person to change how they live and behave. These are extreme measures for anyone to take for any reason. When the fear that causes these changes is insignificant in everyday importance, when it doesn't pose a real threat or danger, the fear is essentially not based in reality and thus is irrational (3).
Some symptoms of a phobia are as follows: feelings of panic, horror, or terror; realization that the fear is excessive relative to the threat it poses; uncontrollable and instant responses to the object of fear; physical signs of intense fear (fast heartbeat, shortness of breath, desire to flee, trembling); and avoidance by extreme measures (2). Many famous people have suffered from phobias. For example, Napoleon, Herbert Hoover, and Franklin D. Roosevelt all had triskaidekaphobia (fear of the number 13) (4). Many people, too, suffer from more than one phobia. Some people even have "antinomial phobias," that is, phobias that are opposites, such as lygophobia (fear of darkness) and photophobia (fear of light) (6).
There are different types of phobias. Agoraphobia is the fear of being in a situation or place in which escape might be difficult or might cause embarrassment. Social phobia is the fear of performance or other social situations (1). A specific phobia, also known as a simple phobia, is the fear that is set off by a certain trigger, such as an animal, a place, or a situation (7). A specific phobia will usually elicit withdrawal whereas a social phobia will more commonly cause intense avoidance. Specific phobias have many subtypes: Animal, Natural environment, Situational, Blood/Injection/Injury, etc. Some of these specific phobias can seem to be more of a joke than real, such as triskaidekaphobia (fear of the number 13), arachibutyrophobia (fear of peanut butter sticking to the roof of the mouth), or hippopotomonstrosesquippedaliophobia (fear of long words). Still others seem almost commonplace, such as odynophobia (fear of pain), arachnophobia (fear of spiders), or novercaphobia (fear of your mother-in-law) (1). Specific and social phobias are considered types of anxiety disorders (3).
Without treatment, phobias can affect people for years, quite possibly for their entire lives (3). Yet of all emotional problems, phobias are actually the most easily treated (4). One way to treat a phobia is through behavioral therapy, which changes a person's reaction to a certain stimulus (3). The standpoint taken in behavioral therapy is that the fear is a normal and healthy one that has grown beyond what is healthy (1). Similar is Cognitive-Behavioral Therapy (CBT). The difference is that, where behavioral therapy attempts to change a person's reaction to a certain stimulus, CBT changes first the way the person thinks about the stimulus and then the reaction to it (3). Behavioral therapy is conducted in a controlled and safe setting. For instance, it is a healthy reaction to be afraid of poisonous snakes, but when one is extremely fearful of nonpoisonous snakes as well, this reaction needs to be changed. If this person is exposed to non-poisonous snakes over a long period of time, with the exposure gradually increasing, the fear may lessen and allow them to behave in a normal manner around these snakes. However, this therapy has the stipulation that the person cannot be exposed to the stimulus of a poisonous snake; that would defeat the purpose of the gradual exposure. This has also been called exposure treatment. The exposure itself is usually described as taking place over a long period of time, but this is not always true. A person can be "flooded" with exposure to the stimulus, under controlled conditions of course, until the fear abates. This immersion can be done either through a physical manifestation of the stimulus (such as an actual snake being in the same room with the person) or, for people whose fear is too great, through an imagined manifestation of the stimulus. If the fear is so great that even imagining the stimulus is too much for the patient, there is something called counter conditioning, in which the patient is taught relaxation techniques to offset the fear response. The patient can also be put through modeling, where the patient observes people who are not afraid of the stimulus reacting in what we consider a "normal," or more rational, manner. Usually, once counter conditioning is completed, the patient will move on to the flooding stage of behavioral therapy. These steps are known as systematic desensitization. Essentially, a person's ability to deal with the fear stimulus is continuously strengthened until it no longer poses a hindrance to life (1).
If the fear persists, then medication can be used. Usually a patient is given tranquilizers to ease their anxiety; these are addictive, though, after only a short period of use. Antidepressants are commonly used for long-term medication since they are not addictive and can help alleviate anxiety. Regardless of which medication is used, therapy will continue until the person is able to function well in daily life (8).
Understanding what a phobia is is very important to curing it, but why do phobias exist? What causes them? There are many theories, and most of them are probably right in some sense. It seems as though most specific phobias are rooted in childhood. Sometimes they fade away, such as fear of the dark, but sometimes they won't go away without help (2). Experience seems to be a major cause of phobias. Freudians think that agoraphobics feared as children that their indifferent mother would desert them, and that this fear later grew into the phobia that now exists. More modern thinking, however, suggests that painful or embarrassing memories of specific situations cause this phobia to manifest. The inability to predict or control certain things may also lead to phobias, especially in someone who has been hurt in some way by a force they could not control. Safety seems to be another factor, since fewer people seem to be afraid of driving or riding in a car than of falling from a high place. People generally have safe experiences with cars, but not many people have safe or fun experiences falling from the roof of a house. However, people who have been in car accidents have developed phobias of driving or riding in a car, something akin to post-traumatic stress disorder. Along these lines, evolution makes it sensible to think that humans are inherently open to certain phobias, such as of snakes and heights. Fear and avoidance of these things would most likely lead to a longer life and to genes being passed on. This implies that phobias can be learned or that, possibly, low self-esteem and failure in coping play a role too. It also seems as if humans are biologically predisposed to develop certain phobias against things that cause disgust (i.e., slugs, maggots, rats, etc.). Interestingly, though, research suggests that most people with a phobia have never had a bad experience with what they are afraid of (1). Many agoraphobics seem to develop their disorder after they have had numerous panic attacks for no apparent reason (2). The impossibility of knowing when the next attack will hit is one of the driving forces of their fear, almost as if they are afraid of their fear itself.
Cultural factors also seem to affect the occurrence of phobias. For example, agoraphobia is more common in America and Europe than anywhere else in the world, while in Japan there is a disorder known as taijin kyofusho, a fear of offending others with awkward social behavior or an imagined defect. Agoraphobia seems to be found in cultures where how others judge you is important, while taijin kyofusho plays to the modesty and sensitive regard for others that is seen as proper in Japan (1).
Genetics may also produce an inclination toward phobias. Phobias seem to run in families (3), and this appears to be because certain people are born with a tendency toward anxiety that is inherited through genes. Even so, people who do not have a tendency to be anxious can become anxious under certain pressures (7). Research suggests that greater blood flow, as well as a higher metabolism, in the right side of the brain may be involved in making people more predisposed to phobias (1). Other research proposes that the brains of people who have phobias react differently to certain stimuli than the brains of people who do not. When shown pictures of faces with various expressions, people with social phobias differed from people without them when they saw expressions of contempt and anger. One area of the brain that showed pronounced activity was the amygdala. This is the area of the brain that regulates feelings of anxiety and fear, and research hints that this region also helps people "read" facial expressions. The study suggests that the amygdala and the surrounding areas of the brain may cause social phobias when they act in a manner different from normal (9). However, this study really only helps to prove what was previously known: that the amygdala helps to perceive facial expressions. Doesn't it make sense that, if someone has a social phobia, a facial expression that denotes the reactions they are afraid of would cause brain activity to increase?
Fear is a natural occurrence. But why are people unnecessarily afraid of certain things? It could be culture, it could be individual experience, or it could be genetics. In all actuality it is an amalgamation of everything. Fear can be a good thing at times. When it is ruling your life is not one of those times.
1)The Phobia List, an interesting site with a lot of information and numerous links
2)American Psychiatric Association, great site with interesting facts
3)The National Women's Health Information Center, another good site with interesting facts
4)MSN Health Home, interesting stories constantly updated
5)PhobiasCured Home, a commercial site, but interesting material if you look hard
As a linguistics major, I have a keen interest in languages that I try to incorporate into various aspects of my life, such as my non-linguistics classes. Linguistics is such an interdisciplinary field that, even though it is technically classified as a social science, there are connections to linguistic analyses that are found in a number of sciences, such as mathematics, computer science, psychology, and, of course, biology. Bilingualism is increasingly important to getting ahead in today's society; with the advance of the Internet and global businesses, the problem is no longer getting to the other side of the world but communicating with the people there. But how does the brain deal with bilingualism, and what is the significance of the brain regarding bilingualism?
Before actually getting to the issue of bilingualism, we must first understand a little bit about how the brain processes language in general. The two main parts of the brain that deal with language are Broca's area and Wernicke's area. Broca's area is a small region in the left inferior frontal lobe of the brain that handles speech production and semantic processing. It is named after Paul Broca, who first identified it as the speech center of the brain in 1861 (3). In addition to controlling spoken language, Broca's area also controls written and signed languages and programs the motor cortex to move the tongue, lips and speech muscles to verbalize words. More recent studies have also revealed that Broca's area is subdivided into sections that each have different roles in language tasks (1). In cases of Broca's aphasia, where there has been damage to Broca's area, patients can still understand language, but they cannot form their words properly, and their speech becomes slow and slurred (3).
Wernicke's area lies in the posterior section of the superior temporal gyrus, in the temporal lobe, farther back than Broca's area. Carl Wernicke discovered in 1876 that damage to that part of the brain also caused language problems (3). Wernicke's area is responsible for language comprehension and semantic processing and stores information needed for arranging the words of a learned vocabulary into meaningful speech (1). Damage to Wernicke's area, called Wernicke's aphasia, causes loss of language understanding in patients. They can still clearly articulate words, but they produce only incoherent strings of random words, and sometimes their language comprehension deficiency is such that they cannot even associate meanings with single words (3). Naturally, since Broca's area handles outgoing speech and Wernicke's area handles incoming speech, the two work together, and they are connected by a bundle of nerve fibers called the arcuate fasciculus (1).
Advances in technology such as functional Magnetic Resonance Imaging (fMRI) now allow scientists to measure brain activity, and it is with these tools that deeper insights will be made into the way the brain works. fMRI uses magnetic fields and radio waves to track how blood flow to various parts of the brain varies over time, depending on which areas of the brain are being used (1). Activation of an area of the brain causes an increase in blood flow to that area, which results in a net increase in intravascular oxyhemoglobin and a decrease in deoxyhemoglobin. Because there is less deoxyhemoglobin, the increased blood flow produces an overall increase in signal (1). These flow changes, and the dynamic activity of the brain, are recorded by fMRI using sophisticated image processing techniques.
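The logic of such studies can be sketched with a toy calculation (not taken from the cited papers; the task timing and the gamma-shaped hemodynamic response below are assumptions chosen purely for illustration): a voxel that is "active" during a language task should have a time course that tracks the task timing smoothed by the hemodynamic response, while an uninvolved voxel should not.

# Toy sketch of the fMRI logic: activated voxels show a BOLD signal change that roughly
# follows the task timing convolved with a hemodynamic response function (HRF).
# Illustrative only; the design and HRF shape are assumed, not taken from the sources.
import numpy as np

np.random.seed(0)
tr, n_scans = 2.0, 120                    # one scan every 2 seconds, 120 scans
task = np.zeros(n_scans)
task[::20] = 1                            # brief task events every 40 seconds (assumed design)

t = np.arange(0, 30, tr)
hrf = t ** 5 * np.exp(-t)                 # simple gamma-shaped HRF peaking a few seconds after an event
hrf /= hrf.max()

expected = np.convolve(task, hrf)[:n_scans]                       # predicted BOLD time course
active_voxel = 0.5 * expected + np.random.normal(0, 0.1, n_scans)
silent_voxel = np.random.normal(0, 0.1, n_scans)

print("correlation with model, active voxel:", round(np.corrcoef(expected, active_voxel)[0, 1], 2))
print("correlation with model, silent voxel:", round(np.corrcoef(expected, silent_voxel)[0, 1], 2))

Voxels whose measured signal correlates strongly with the modeled response are the ones reported as "activated" for that task.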
Some researchers are now working toward answering some long-standing questions about the relationship between the brain and second language learning, storage, and usage. A key question regarding bilingualism is whether different parts of the brain deal with the additional language or languages compared with the first language. Using fMRI, a number of studies have been done that record the brain activity of bilinguals when they are thinking in different languages to see to which parts of the brain the blood flow increases. Generally, the studies have all come to the same basic conclusions about how the brain deals with more than one language, which is... that it depends on the age at which the person learned a second language (4). When adults learn their second language as toddlers, their brains process the two languages as one, and regardless of which language they were speaking, their brains were active in overlapping parts of Broca's area and of Wernicke's area. In contrast, adults who did not learn their second language until they were in high school or college employ a different, though adjacent, section of Broca's area that overlaps negligibly when they switch languages, although the results in Wernicke's area for this group were the same as for early bilinguals (4).
Scientists have suspected something of this age difference causing languages to be stored in different sections of the brain because when children suffer brain damage, they can regain their previous language proficiency with practice, but when adults suffer brain damage, they almost never reach the same level of language proficiency that they had before (2). Another interesting case is that of bilingual stroke victims who lost the ability to speak one, but not both, of their languages after their strokes or whose rates of language recovery were different for each language (5). The fMRI results that are available now, however, provide actual physiological proof for what were previously just educated guesses.
One reason for this division within Broca's area may have to do with the critical learning period theory in language acquisition (2). There is considerable evidence of a range of time during which children readily acquire language and after which it is never as easy again. The exact period is disputed, but it is generally agreed that it begins at birth and ends at or before puberty. If children are for some reason not exposed to language during this time, case studies have shown that they will never have command of any language equal to that of a native speaker (2). It could be because the brain becomes hardwired after that time, but there is as yet no solid evidence on which to form any valid hypotheses. Comparably, if children grow up monolingual and are not exposed to a second language until after the critical learning period ends, the brain processes the two languages differently. Bilinguals who learn another language later in life will never have the same degree of fluency and understanding of connotation as a native speaker of that language.
The evidence that the brain separates out languages if they are not acquired by a certain age also brings the issue of bilingual education to the forefront. Detractors of bilingual education have mainly based their arguments on not wanting to overload children's brains with more than one language. They have also taken statistics showing that children who grow up bilingual are slower to speak at first as another reason to oppose bilingual education, but they neglect the fact that bilingual children catch up within a few years and then have the immeasurable advantage of knowing another language and, presumably, another culture (6). Given the evidence that learning languages after puberty will not be nearly as effective, however, at least some people might change their minds and support bilingual education to give their children an edge by being fluently bilingual.
There seem to be other benefits to the brain because of the effects of bilingualism, too. Because bilingual children learn that the names of objects are not fixed, they deal with abstract ideas earlier than do other children. They also switch between tasks better and adapt more easily when rules change because they are used to codeswitching and putting aside previously learned information in order to adjust to a new set of rules (6). Being able to disregard distracting or irrelevant information also allows bilingual children to focus their attention for longer periods of time, which gives them an advantage when they start school. All these additional tasks and skills place a heavier burden on the brain, forcing it to develop and mature earlier in order to accommodate everything (6).
There are, of course, a number of questions that remain unanswered about bilingualism's effects on the brain and the brain's effects on bilingualism. For example, why does the brain change after puberty and not allow additional languages to be stored in the same section of Broca's area as the first language? And what exactly is the range of the critical learning period? Linguistics is a relatively new discipline, and neurolinguistics newer still. It seems that scientists (biologists, neurologists, and cognitive scientists, to name a few) have just recently started researching bilingualism with regard to the brain, so with the two academic disciplines working together, along with continual technological improvements and innovations, there should be more answers, and more questions, turned up in the near future.
2) Neurology for Kids Second Language
Tuning In, Turning On, and Dropping Out Again: LSD
Name: Tegan Geor
Date: 2002-12-19 21:50:59
Link to this Comment: 4160
Biology 103
2002 Third Paper
On Serendip
Lysergic acid diethylamide (LSD) is the most potent hallucinogenic substance known to date. LSD is generally mixed into a liquid solution, applied to paper and then licked off by the user, although it can be taken subcutaneously or intravenously, or ingested in some other diluted chemical form. Doses of 50 to 100 micrograms (millionths of a gram) are described as psycholytic: rushing thoughts, a lot of free association, some visual or auditory hallucinations, and sometimes abreaction (memories so vivid that a person under the influence of LSD feels as if he or she is reliving a past experience). A psychedelic dose, around 500 micrograms, produces "total but temporary breakdown of usual ways of perceiving self and world and (usually) some form of 'peak experience' or mystic transcendence of the ego." (3) Effective doses of other illicit chemically derived substances (like cocaine or heroin) are measured in thousandths, not millionths, of a gram. And compared to other known hallucinogenic substances, LSD is 100 times more potent than psilocybin or psilocin and 4000 times more potent than mescaline.(2)
WWW Sources
Spider webs created under the influence of various drugs (caffeine appearing to be worse than LSD or cocaine as far as spiders' productivity is concerned...)
Examples of LSD blotter graphics Some of these are really beautiful...
The Musician's Underground Drug: Not What You Migh
Name: Tegan Geor
Date: 2002-12-19 22:50:52
Link to this Comment: 4162
Biology 103
2002 Third Paper
On Serendip
1) Performance Anxiety
Nothing To Fear But Fear Itself? Yeah, Riiiight..
Name: Diana La F
Date: 2002-12-19 22:56:45
Link to this Comment: 4163
Biology 103
2002 Third Paper
On Serendip
--Dave Barry
References
Bilingualism and the Brain
Name: Sarah Tan
Date: 2002-12-20 00:54:44
Link to this Comment: 4164
Biology 103
2002 Third Paper
On Serendip
References
1) Language and the Brain
Fence Sitting: what causes depression Name: Erin Sarah Date: 2002-12-20 01:41:39 Link to this Comment: 4165
Neurobiology of Depression
Depression is currently the leading cause of disability in developed countries, and the fourth leading cause of disability worldwide.1 Experts disagree about what causes depression. The question goes back to the proverbial "nature vs. nurture" debate. A large amount of evidence defends the fence sitters' position: depression is the result of both "nature" and "nurture." Depression is caused by a combination of environmental, sociological, and biological factors; such stresses may cause genetically susceptible people to feel depressed. Because it is susceptibility that ultimately matters in clinical depression, this fence sitter's legs are on "nature's" side of the yard.
Moods are regulated by neurotransmitters, chemical messengers within the brain that assist communication between nerve cells.2 Neurotransmitters transmit nerve impulses across the space between nerve cells. Chemical messages are transmitted at the molecular level when neurotransmitters are released from the end of a nerve cell, the axon, into the space between two nerve cells, the synapse, and taken up by chemical receptors specific to the neurotransmitter in the dendrite of the next nerve cell. Excess molecules are taken back up and reprocessed by the presynaptic cell, the cell from which the neurotransmitter was released into the synapse. Several things might go wrong in this process, potentially causing a chemical imbalance. There may be an inadequate supply of the specific neurotransmitter. Chemical precursors, or molecules that facilitate the production of the neurotransmitter, may be in short supply. There may also be too few receptor sites. The presynaptic cell may be taking back the neurotransmitter too quickly, before it can reach the receptor sites. If there is a breakdown anywhere along the path, neurotransmitter supplies may not be adequate for the brain's needs. Inadequate levels of mood-regulating neurotransmitters lead to the symptoms that we know as depression.
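The "breakdown anywhere along the path" idea can be caricatured with a toy model (purely illustrative; the rate constants below are invented, not physiological measurements): the amount of transmitter available to receptors settles at a level set by how fast the presynaptic cell releases it and how fast reuptake removes it.

# Toy model of synaptic signaling (illustrative only; all rate constants are made up).
# Transmitter enters the synapse at a release rate and is cleared by reuptake;
# the level available to receptors falls if release drops or reuptake speeds up.
def steady_state_level(release_rate, reuptake_rate, steps=1000, dt=0.01):
    level = 0.0
    for _ in range(steps):
        level += dt * (release_rate - reuptake_rate * level)
    return level

normal = steady_state_level(release_rate=1.0, reuptake_rate=0.5)
fast_reuptake = steady_state_level(release_rate=1.0, reuptake_rate=2.0)   # taken back too quickly
low_release = steady_state_level(release_rate=0.4, reuptake_rate=0.5)     # undersupplied transmitter

print(f"normal:            {normal:.2f}")
print(f"reuptake too fast: {fast_reuptake:.2f}")
print(f"low supply:        {low_release:.2f}")

Either fault leaves less transmitter at the receptors, which is the kind of shortfall the monoamine hypotheses discussed below tie to depressed mood.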
There are three basic neurotransmitters which are thought to play a role in mood regulation, the monoamines: norepinephrine, serotonin, and dopamine.2 The presence of norepinephrine contributes to excitement, happiness, alertness, and motivation. Serotonin promotes and improves sleep, improves self-esteem, relieves depression, diminishes craving, and prevents agitated depression and worrying. Dopamine promotes bliss and pleasure, euphoria, appetite control, controlled motor movements and focus. Dopamine is associated with reward, or reinforcement. It is the agent which causes us to continue participating in or doing something.
According to the "catecholamine hypothesis" proposed by Joseph J. Schildkraut, norepinephrine is the causative factor in depression.2 He proposed that depression arises from a deficiency of norepinephrine and mania from an overabundance. While there is an abundance of evidence to support the "catecholamine hypothesis," changes in levels of norepinephrine do not cause mood change in everyone. The "catecholamine hypothesis" also neglected the important role of serotonin. In the 1970s, a decade after Schildkraut's "catecholamine hypothesis," Arthur J. Prange Jr. and Alec Coppen published the "permissive hypothesis."2 They asserted that a deficiency of serotonin was the other factor that caused depression: according to their hypothesis, low serotonin permits norepinephrine levels to fall. Newer antidepressants account for this cofactor of depression, targeting both serotonin and norepinephrine.
Gender Difference
Even with its high rate of incidence, depression carries a social stigma. Depression is often perceived as a sort of weakness in moral character, and often as a women's issue.3 This may be a consequence of the origins and frequency of depression. Depression may be perceived as weakness because two people under the same environmental and sociological stresses may not react similarly. If one of these people is genetically susceptible to depression and the other is not, the person prone to depression may become depressed while the other may deal with the stresses in another manner. The perception that depression is a woman's issue may come from the fact that women are twice as likely to develop depression as men.4
Men and women are affected by depression with different symptoms and at different rates of incidence. Women are at higher risk partially due to the stresses of work, family responsibilities, and social roles.5 The discernible difference between men and women in diagnosed depression starts around puberty. Before adolescence, the difference in depression rates between boys and girls is negligible, but by the age of 13 girls' rates of depression rise dramatically above those of boys their age. By the age of 15, twice as many girls as boys have experienced a major depressive episode. Kimberly Yonkers, MD, an associate psychiatry professor at Yale, suggests the higher incidence of teenage depression in girls is associated with the higher incidence of anxiety in female children, suggesting anxiety predisposes people to depression.5 Perhaps it is not anxiety that predisposes people to depression; rather, a gene that predisposes people to depression may also predispose them to anxiety. Studies performed at the
In March of this year a research team, headed by Dr. Zubenko, at the
More recently, in October, this same research team identified the first susceptibility gene for clinical depression.4 They have linked unipolar mood disorders in women to a specific region of chromosome 2q33-35. The findings suggest that a gene in this region contributes to the vulnerability of women in families afflicted with RE-MDD to developing mood disorders of varying severity. Their research concluded that the likelihood of men with the same genetic background developing mood disorders was no higher than normal. The narrow region of chromosome 2 that they have identified is home to only eight genes. Among these is CREB1. CREB1 codes for a regulatory protein (CREB) that coordinates the expression of a large number of other genes that play important roles in the brain. Other scientists have discovered altered CREB1 expression in people who have died with severe depression. Neuronal plasticity, cognition, and long-term memory are all associated with CREB1, abnormalities of which commonly occur in patients with major depression.
Conclusions
Complicated biological disorders, like clinical depression, are unlikely to represent a single disease with a unitary cause. Dr. Zubenko of the
Web Resources:
Smallpox Disease & Bio-terrorism Name: Michele Do Date: 2002-12-20 01:50:35 Link to this Comment: 4166 |
Thousands of years ago, the variola virus emerged in human populations, creating the disease smallpox. Smallpox is a serious, contagious, and sometimes fatal infectious disease. The name smallpox is derived from the Latin word for "spotted" and refers to the raised bumps that appear on the face and body of an infected person. Smallpox outbreaks occurred from time to time over thousands of years, but the disease has now been eradicated after a successful worldwide vaccination program (1). The last case of smallpox in the United States was in 1949, and the last naturally occurring case in the world was in Somalia in 1977. After the disease was eliminated from the world, routine vaccinations against smallpox among the general public were stopped because they were no longer necessary for prevention. Except for laboratory stockpiles, the variola virus has been eliminated. Nevertheless, in the aftermath of the events of September and October 2001, there is heightened concern that the variola virus might be used as an agent of bio-terrorism (2). For this reason, the U.S. government is taking extreme precautions to prepare for a smallpox outbreak.
There are two clinical forms of smallpox. Variola major is the severe and most common form of smallpox, with a more extensive rash and higher fever. There are four types of variola major smallpox: ordinary (the most frequent type, accounting for 90% or more of cases); modified (mild and occurring in previously vaccinated persons); flat; and hemorrhagic (the last two rare and very severe). Historically, variola major has an overall fatality rate of about 30%, while flat and hemorrhagic smallpox are usually fatal. Variola minor is a less common presentation of smallpox and a much less severe disease, with death rates historically of 1% or less.
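For a rough sense of what those fatality rates mean, consider a hypothetical outbreak of 1,000 cases (the case count is assumed purely for illustration; the percentages are the historical figures quoted above):

# Back-of-the-envelope comparison of the two clinical forms (illustrative only;
# the outbreak size is assumed, the rates are the historical ones quoted above).
cases = 1000
print("expected deaths, variola major (about 30% fatality): ", int(cases * 0.30))
print("expected deaths, variola minor (1% or less fatality):", int(cases * 0.01))
print("ordinary-type cases expected among variola major:    ", int(cases * 0.90))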
Generally, transmission of smallpox requires direct and fairly prolonged face-to-face contact from one person to another. Smallpox also can be spread through direct contact with infected bodily fluids or contaminated objects such as bedding or clothing (1). Smallpox is rarely spread through the air of enclosed settings such as buildings, buses, and trains. Humans are the only natural hosts of variola; insects and animals do not transmit smallpox.
Exposure to the virus is followed by an incubation period, averaging about 12 to 14 days, in which most people do not have any symptoms of illness. Sometimes a person with smallpox experiences fever during this period, but the person is most contagious with the onset of the rash (3). The first symptoms of smallpox include fever, head and body aches, and sometimes vomiting. This stage is called the prodrome stage and lasts for about 2 to 4 days.
A rash then emerges with small red spots on the tongue and in the mouth. These spots develop into sores that break open and spread the virus into the mouth and throat. This is the period in which the infected person is most contagious. After the sores in the mouth break down, a rash emerges on the skin, usually spreading to all parts of the body within 24 hours. However, as the rash appears, the fever usually falls and the person may begin to feel better. By the end of the second week after the appearance of the rash, most of the sores have scabbed over. The scabs begin to fall off leaving marks on the skin forming pitted scars. Most scabs will have fallen off after the third week after the appearance of the rash. The person is contagious until all of the scabs have fallen off.
There is no specific treatment for smallpox disease, and the only prevention is vaccination. The smallpox vaccine helps the body develop immunity to smallpox. The vaccine is made from a virus called vaccinia, which is a "pox"-type virus related to smallpox (2). Unlike many other vaccines, the smallpox vaccine contains live virus. Although the vaccine does not contain the smallpox virus and cannot give you smallpox, the vaccination site must be carefully cared for in order to prevent the virus from spreading to other parts of the body. Smallpox vaccination provides a high level of immunity for 3 to 5 years, with decreasing immunity thereafter. If a person is vaccinated again later, immunity lasts even longer.
Historically, the vaccine has been effective in preventing smallpox infection in 95% of those vaccinated. In addition, the vaccine was proven to prevent or substantially lessen infection when given within a few days of exposure. It is important to note, however, that at the time when the smallpox vaccine was used to eradicate the disease, testing was not as advanced or precise as today, so there may still be things to learn about the vaccine and its effectiveness and length of protection.
The smallpox vaccine is not given with a hypodermic needle and is not given like a shot. The vaccine is given using a bifurcated (two-pronged) needle that is dipped into the vaccine solution. When removed, the needle retains a droplet of the vaccine. The needle is used to prick the skin a number of times in a few seconds. The pricking is not deep, but it will cause a sore spot and one or two droplets of blood to form. Usually the vaccine is given in the upper arm.
If the vaccination is successful, a red and itchy bump develops at the vaccine site after about three or four days. In the first week, the bump becomes a large blister, fills with pus, and begins to drain. During the second week, the blister begins to dry up and form a scab. The scab falls off in the third week, leaving a small scar. People who are being vaccinated for the first time have a stronger reaction than those re-vaccinated.
After vaccination, it is important to follow care instructions for the site of the vaccine. Because the virus is live, it can spread to other parts of the body, or to other people. The vaccinia virus may cause rash, fever, and head and body aches. In certain groups of people, complications from the vaccinia virus can be severe. However, the smallpox vaccine is the best protection you can get if you are exposed to the smallpox virus. Anyone directly exposed to smallpox, regardless of health status, would be offered the smallpox vaccine because the risks associated with smallpox disease are far greater than those posed by the vaccine.
Approximately 140,000 vials of vaccine are stored at the Centers for Disease Control and Prevention. Each vial contains doses for 50-60 people, and an additional 50-100 million doses are estimated to exist worldwide (2). This stock cannot be immediately replenished, since all vaccine production facilities were dismantled after 1980, and renewed vaccine production is estimated to require at least 24-36 months.
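A quick bit of arithmetic on those stockpile figures (illustrative only, using the vial and dose counts quoted above):

# Rough arithmetic on the stockpile figures quoted above (illustrative only).
vials = 140_000
low, high = vials * 50, vials * 60       # each vial holds doses for 50-60 people
print(f"CDC stockpile: {low / 1e6:.1f} to {high / 1e6:.1f} million doses")
print("plus an estimated 50-100 million additional doses worldwide")

So the CDC stock by itself amounts to well under ten million doses, which is part of why renewed production was ordered.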
Routine smallpox vaccination among the American public stopped in 1972 after the disease was eradicated in the United States. Until recently, the U.S. government provided the vaccine only to a few hundred scientists and medical professionals working with smallpox and similar viruses in a research setting. After the events of September and October 2001, however, the U.S. government has taken further actions to improve its level of preparedness against terrorism.
One of many such measures, designed specifically to prepare for an intentional release of the smallpox virus, is a smallpox response plan. Moreover, the U.S. government has ordered the production of sufficient smallpox vaccines in order to immunize the American public in the event of a smallpox outbreak. The U.S. government currently has access to enough smallpox vaccines to effectively respond to a smallpox outbreak in the United States.
On December 13, 2002, the President announced a plan to better protect the American people against the threat of smallpox attack by hostile groups or government (3). Under the plan, the Department of Health and Human Services (HHS) will work with state and local governments to form volunteer Smallpox Response Teams who can provide critical services to their fellow Americans in the event of a smallpox attack. In order to ensure that Smallpox Response Teams can mobilize immediately during an emergency, health care workers and other critical personnel will be asked to volunteer to receive the smallpox vaccines.
At this time, the federal government is not recommending vaccination for the general public. However, the Department of Defense (DOD) will vaccinate certain military and civilian personnel who are or may be deployed in high threat areas. Some United States personnel assigned to certain overseas embassies will also be offered vaccination.
Although there is no reason to believe that smallpox presents an imminent threat, the September and October attacks of 2001 have heightened concern that terrorists may have access to the virus and attempt to use it against the American public. Our government has no information that a biological attack is imminent, and there are significant side effects and risks associated with the vaccine. Still, HHS is establishing an orderly process to make the vaccine available to those adult members of the general public without medical contraindications who insist on being vaccinated, either in 2003 or in 2004.
Muscle Memory Name: Margaret H Date: 2002-12-20 02:39:47 Link to this Comment: 4170
I spend about 6 months of the year training and competing for Bryn Mawr's Track and Field team. Specifically, I throw the shot put, javelin, and discus. This athletic activity requires a great deal of weight training and repetitive practice. Having thrown for 5 years, I am familiar with what I consider a 'great throw'. And whenever a great throw happens, it occurs when my mind is clear and I simply go through the motions without overanalyzing them. My coaches and I refer to this as 'muscle memory'. This is a familiar term among athletes, usually referring to a state where the natural rhythm of the body takes over. As our indoor season has begun, and I am trying to let my body override my overactive mind, I thought I'd do a little research into the phenomenon, alternatively called motor memory.
Contrary to popular belief (or perhaps my own), movements are learned and stored in the brain, not the muscles. The body doesn't remember the motions - the brain does. Motor memory is stored in a manner similar to what scientists call working memory or procedural memory (1). There are commonly said to be three types of memory: short-term memory, long-term memory, and working memory. Short-term memory is what you can recall immediately after perceiving it. Long-term memory is the mind's database of all information. And working memory bridges the two: it must be actively present during thinking in order to process thoughts (5). Two types of long-term memory help to explain how the brain codes different kinds of experiences: declarative and procedural. Generally, declarative memory deals with specific facts, and procedural memory is memory developed as a result of repeating certain procedures over and over again. Declarative memory breaks down into episodic and semantic memory. Episodic memory represents our memory of events and experiences in a serial form. Semantic memory, on the other hand, is "a structured record of facts, concepts and skills that we have acquired" (6).
Memory itself is technically neuromuscular facilitation (3). Neuromuscular means that the process involves not only the nervous system but also muscle tissue (5). In order to create an action, the brain must transmit an impulse to the muscles through neurons and synapses. "Neurons have specialized projections called dendrites and axons. Dendrites bring information to the cell body and axons take information away from the cell body. Information from one neuron flows to another neuron across a synapse," (7). The area of the brain that is thought to format, sort, and store memories is the hippocampus (8).
The process works to help the brain recognize the movements it is telling the muscles to execute. For example, the brain sends impulses through the arms to enable a batter to swing. Through repetition, the brain begins to recognize the impulses that it has to send out in order to produce the desired muscle movement. Thus, a muscle memory is created by enabling the brain to remember the direct path of the impulses to transmit through the neurons and synapses.
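The "repetition carves the path" idea can be pictured with a simple learning-curve sketch (a generic illustration, not a model from the sources; the target value and learning rate below are arbitrary):

# Toy learning curve: each repetition nudges the stored motor command toward the
# desired movement, so execution error shrinks with practice.
# Illustrative only; the target value and learning rate are arbitrary.
target = 1.0            # the "correct" movement the brain is trying to reproduce
command = 0.0           # the stored motor program, refined by practice
learning_rate = 0.2

for rep in range(1, 21):
    error = target - command
    command += learning_rate * error     # repetition strengthens the right pathway
    if rep % 5 == 0:
        print(f"after {rep:2d} repetitions, remaining error = {abs(error):.3f}")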
Interestingly enough, sources suggest that memory consolidation, or the "enduring changes in adaptive behavior," continues long after the event occurs (4). "During the hours that follow completion of practice, representation of the internal model gradually changes, becoming less fragile with respect to behavioral interference," (4). After a critical amount of practice, performance continues to improve even in the absence of further training. Sleep particularly helps the memorization process. During the off time, the brain continues translating the movements into memory. The brain works even while the muscles are relaxing, or the body is sleeping.
Numerous studies suggest that some disturbance of short-term memory during practice enhances long-term retention of motor skills (2). Changing variables such as using different lifting patterns, altering an established routine, or breaking up a monotonous practice may help. Possibly, by challenging the short-term memory, the brain begins to recognize the continuing pattern as something it has processed before. By taking a break, the body can concentrate on something else. Since other studies have shown that time is necessary for the memorization process to occur, these breaks or 'disturbances' may allot the time needed to make the transfer from short-term memory to long-term.
In light of this information, it becomes apparent that the process by which a body remembers a muscular action is retained solely in the brain. The body's memorization of the actions it performs is a result of programming the movements into the brain's long-term memory bank. "Exercisers rely upon the body's ability to assimilate a given activity and adapt to training. The body's ability to remember a given exercise, repetition after repetition, produces its own form of muscle memory, and adapts to the training by increasing its physical fitness in preparation for the next training session," (3).
However, a few questions arise from this information. Athletes say that when the mind is relaxed, the muscles 'do their thing'. Since we have learned that it's the mind that is telling the muscles what to do, the question is then, how does clearing the mind help the motor retention process? By not thinking of anything in particular, does that allow the brain faster access to the "memory bank" known as the hippocampus? Does thinking two things at once slow down the entire process? Can humans remember muscle movements while concentrating on a completely unrelated matter? As with every question we've examined this semester, my queries need a great deal of research before finding possible solutions. Because a few of my sources also suggest that focusing on an activity increases the chances of better memory, I am going to hypothesize that limiting brain function helps the brain remember events with more clarity. But that's a whole other web paper.
1) Building Mental Muscle Memory, chapter intro.
2) The multiple facets of motor memory: From sensori-motor processes to symbolic encoding. Thon, B., & Ille, A.
3) Do Muscles Really Have Memory?
4) On Sleep and Memory Processes, Science Week
5) Online Dictionary
6) Human Memory
7) Making Connections: The Synapse, Neuroscience for Kids website
8) Memory and the Hippocampus, Neuroscience for Kids website
Altruism - Cultural or Biological Phenomenon? Name: William Ca Date: 2002-12-20 03:57:57 Link to this Comment: 4174
Picture yourself walking along an iced-over pond when you hear a crash and a scream for help from nearby. You rush over to see what has happened and you see a young boy has fallen through the ice. Your first instinct is to get on the thin ice to save the boy. What is it that drives us to feel this way? Why are we willing to put our own lives at risk in order to help others?
This self-endangering, selfless instinct is called altruism, a phenomenon that has both cultural and biological origins. Just as one can find the explanation for this behavior in cultural and biological accounts, one can define altruism through these two avenues as well. The biological definition is as follows: "Instinctive cooperative behavior that is detrimental to the individual but contributes to the survival of the species." (1) One cultural definition describes altruism as the belief "that man has no right to exist for his own sake, that service to others is the only justification of his existence, and that self-sacrifice is his highest moral duty, virtue, and value." (2) The two definitions have many parallels, such as self-detrimental behavior and a contribution to others. However, the definitions oppose each other in their explanation of the driving force behind this behavior. The biological definition uses the term 'instinctive' and implies that altruism is a biologically inherited trait, whereas the cultural definition intimates that each man may choose to be morally righteous (altruistic) or not.
When defining altruism, one must be careful not to mistake it for traits such as compassion or helpfulness. Altruism is an action and requires a certain level of aggressiveness and engagement in high-risk situations. (3) These other traits do show sympathy for other human beings, but do not necessarily lead to altruistic actions. Likewise, altruism can occur without compassion. (4)
The basis of the biological explanation of altruism lies in the 'selfish gene theory'. This theory views evolution through natural selection at the level of the gene rather than the level of the organism. (5) Because an organism will never last longer than one generation, the organism itself has no vested interest in the passing on of its genes. In fact, it is the genes that benefit from reproduction and from inheritance by the offspring. Therefore, the genes that increase the likelihood of reproduction, and therefore of their own passing on, are more likely to proliferate throughout the population. (5) By benefiting the reproductive process of the organism, a gene is increasing its chance of being passed on to the next generation, hence acting as a selfish gene. (5)
The connection of the selfish gene theory to altruism is found in an organism's kin. Because similar genetic material is found among related organisms, selfish genes are also interested in the passing on of relatives' genetic information. Therefore, a gene that enhances the survival and reproduction of relatives will also thrive in a population. The gene is most likely present in the relatives and therefore if the genetic trait ensures that the relative's genetic information is passed on, that gene will be passed on to the next generation. (5) This is the first form of altruism, altruism towards family.
This also broaches the subject of "suicidal altruism," where an organism gives its life in order to save another. This is difficult to explain due to the loss of all chances of passing on one's own genetic information. From this arises the idea of degrees of kinship, a formulation that determines how many of one's genes are maintained through suicidal altruism. (4) Richard Dawkins, author of The Selfish Gene, wrote that "the minimum requirement for a suicidal altruistic gene to be successful is that it should save more than two siblings (or children or parents), or more than four half-siblings (or uncles, aunts, nephews, nieces, grandparents, grandchildren) or more than eight first cousins, etc." (4) These ratios ensure that at least as much genetic information stays in the population as one would have passed on oneself.
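Dawkins's break-even numbers follow from simple relatedness arithmetic: a relative carries, on average, a fixed fraction of one's genes, so a suicidal act only "pays" genetically if it saves more than 1/r such relatives. A small sketch of that calculation, using the standard relatedness values for diploid kin:

# Break-even counts for "suicidal altruism": how many relatives must be saved so that at
# least as many copies of the altruist's genes survive as the altruist would have carried.
relatedness = {
    "sibling / child / parent": 0.5,
    "half-sibling / uncle / aunt / niece / nephew / grandparent / grandchild": 0.25,
    "first cousin": 0.125,
}

for kin, r in relatedness.items():
    break_even = 1 / r                   # must save *more* than this many to come out ahead
    print(f"{kin}: save more than {break_even:.0f}")

The output reproduces the thresholds in the quotation: more than two siblings, more than four half-siblings, more than eight first cousins.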
A second form of altruism is altruism towards a partner or mate. Because the partner's genes have their own interest in the survival of the offspring, it is ensured that the offspring will be taken care of. Therefore, a gene that acts altruistically towards a partner is also acting towards the protection of one's own genes through the offspring. (5) Thirdly, one sees altruism between friends. "Because of the benefits of such a symbiotic relationship, each then has an interest in the welfare of the other, especially as the other will similarly have an interest in their welfare." (5)
However, when one examines altruism among strangers a new type of altruism must be explained: reciprocal altruism. The assumption underlying reciprocal altruism is that an altruistic act will secure one's own protection in the future. On the level of the selfish gene, people behave altruistically for the sole purpose of knowing that their safety will also be protected. Reciprocal altruism requires that people live in a small enough community where they are certain they will encounter those whom they helped. (5)
Many of these arguments are countered by cultural explanations of altruism. To begin, the selfish gene theory only accounts convincingly for the first one or two types of altruism described above. The theory is very strict at the kinship level: one should not sacrifice oneself if not enough of one's genetic information will survive as a result. Yet are we to believe that we are altruistic to our friends because they will protect our genes just as effectively as our relatives would? Again, one must be reminded of the difference between compassion and altruism. Altruism puts the actor at risk, and therefore suicidal altruism is quite often a reality. A gene that behaves altruistically towards a partner is protecting, at most, the 50% of the individual's genetic makeup carried by their shared offspring. One that acts altruistically towards a friend is protecting none of the actor's own genetic material, while still putting the actor at risk.
Reciprocal altruism seems to have developed biologically but has recently become a cultural rather than biological phenomenon. Reciprocal altruism had a much larger impact during times of hunting and gathering, when there were very small communities. The harshness of the environment meant that people depended on others much more heavily for food, shelter, and survival. Sharing a kill was a very risky action, but it was much more likely to be reciprocated. The small communities guaranteed that one would see the person who s/he assisted, and could depend on him/her for future assistance. Those who did not reciprocate were immediately cut off from future help and therefore in great danger. Hence, reciprocal altruism was a necessity for survival. (5)
Today, reciprocal altruism has lost many of its underlying requirements. Primarily, large cities and towns make it very unlikely that one will later see the person one helped. If that is the case, reciprocal altruism has no foundation, because one cannot expect reciprocation. Furthermore, failing to reciprocate has seemingly lost its consequences: the community will no longer cut a "cheater" off from all assistance, because only a very small percentage of people will know that person's altruistic history. Reciprocal altruism resembles the iterated Prisoner's Dilemma in that it depends on good memory, so that past altruism and cheating affect future interactions, and on the expectation that interactions will continue, so that every action is believed to have a consequence. (6) In today's society there are so many interactions that such memory is difficult to sustain, and it is highly likely that two individuals will never interact again. Therefore biological reciprocal altruism loses much of its credibility.
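The Prisoner's Dilemma comparison can be made concrete with the standard textbook payoffs (5 for cheating a cooperator, 3 for mutual cooperation, 1 for mutual defection) and a probability w that the same two people meet again. Under those conventional numbers, which are illustrative assumptions here rather than anything from the sources above, reciprocating only beats permanent cheating when w is greater than about one half; the Python sketch below computes the comparison and shows how cooperation stops paying once future encounters become unlikely, which is exactly the point about large modern communities.

# Long-run payoff when playing against a reciprocator ("tit for tat"),
# comparing a cooperator with a permanent cheater, as the chance w of
# meeting the same person again varies.  Payoffs are the usual textbook
# values and are illustrative only: T=5 (cheat a cooperator),
# R=3 (mutual cooperation), P=1 (mutual defection).
T, R, P = 5, 3, 1

def cooperate_payoff(w):
    # mutual cooperation every round: R + w*R + w^2*R + ... = R / (1 - w)
    return R / (1 - w)

def cheat_payoff(w):
    # one exploitative round, then mutual defection forever after
    return T + w * P / (1 - w)

threshold = (T - R) / (T - P)   # cooperation pays only when w exceeds this
print(f"cooperation pays when w > {threshold:.2f}")
for w in (0.1, 0.5, 0.9):       # roughly: big anonymous city vs. small community
    print(f"w = {w}: cooperate = {cooperate_payoff(w):.1f}, cheat = {cheat_payoff(w):.1f}")

When w is low, as in an anonymous city, cheating earns more; when w is high, as in a small hunter-gatherer band, reciprocation wins, which matches the argument above.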
The differences found along gender lines in altruistic behavior also point to a more cultural explanation of altruism. A number of studies have observed that women are more altruistic towards family members, while men's altruistic actions tend to benefit the entire social unit. The genetic explanation of this observation is not very descriptive: men have "a wider range of possible and unverifiable progeny" while women have "very recognizable parameters of genetic interests". (3) However, the argument about the implications of culture for gender roles holds significantly more water. Women's reluctance to behave altruistically for the benefit of the whole community is said to originate in a young girl's fear of new, competitive situations, "believing she is small, fragile, and vulnerable". (3) Women are more likely to form close, intimate relationships within which they exhibit greater altruism than men. They are more concerned with the immediate well-being of an individual than with the survival of the group as a whole. Another gender-based observation from some experiments is that women are more likely to be assisted, further evidence of cultural beliefs pervading the altruistic arena. (3)
Altruistic behavior is still very common in today's world. Biologically, the selfish gene theory provides good evidence for the altruism exhibited towards family members. On the other hand, the theory of reciprocal altruism seems to have lost its biological foundations. It is in this example that it becomes apparent that altruism has become a cultural phenomenon. Regardless of who the person who fell through the ice is, our instinct has become that it is our duty to put ourselves at risk and save that person. It is impossible to rely on genetics alone when explaining the actions of martyrs. Self-sacrifice is not commonplace in today's society, but people do recognize the moral and cultural obligation surrounding the idea.
1) http://dictionary.reference.com/search?q=altruism
2) http://www.vix.com/objectivism/Writing/InBrief/altruism.html
3) http://www.west.net/~wwmr/altruism.htm
4) http://www.spectacle.org/297/alt.html
5) http://www.theunityofknowledge.org/the_evolution_of_altruism/introduction.htm
6) http://www.lifesciences.napier.ac.uk/courses/modules/BI22201/L16.HTM
Living With Fibromyalgia Name: Adrienne W Date: 2002-12-20 12:41:55 Link to this Comment: 4180 |
Fibromyalgia is a pain disorder that receives relatively little attention, and as a result, many are unaware of its existence and the dramatic effects it can have on one's life. According to the Online Medical Dictionary, the definition is as follows: "a disorder characterized by muscle pain, stiffness and easy fatigability. The cause is unknown and an estimated 3 million are affected in the US" (2). Officially, one who suffers from fibromyalgia (FM) must have tender points or trigger points present in four specific quadrants of the body (1). There are eighteen specified tender points, which are clustered around the neck, shoulder, chest, hip, knee, and elbow regions (3). Unfortunately for patients who suffer from fibromyalgia, many debate its very existence as a disorder. However, this disease of the central nervous system does indeed exist. It is a specific systemic, non-progressive pain condition. Although it is a syndrome, it is no less disabling or serious than a disease (1). Those who suffer from fibromyalgia experience severe, sometimes constant pain, and as a result their lives are affected in numerous adverse ways. Since there is currently no cure for fibromyalgia, the patient must learn to live with the disorder. This paper will address how one lives with fibromyalgia: it will describe the effects it has on a person's health and life as well as provide examples of coping with it.
Although some regard fibromyalgia as a new disorder, or even as a fad disease, it was in fact first described by the surgeon William Balfour in 1816. Over a century later, in 1987, the American Medical Association recognized it as a true illness and a major cause of disability (1). The causes of FM are relatively unknown; however, researchers believe that it is initiated by a triggering event such as a viral or bacterial infection, a car accident, or the development of another disorder such as rheumatoid arthritis, lupus, or hypothyroidism. These triggers cause "an underlying physiological abnormality that's already present in the form of genetic predisposition"; in other words, they trigger a genetic trait that has been dormant (3).
The symptoms include pain and fatigue as well as several overlapping disorders. Pain is, of course, the most prominent symptom; it entails deep muscular aching, burning, throbbing, shooting and stabbing pain, and stiffness that is worse in the morning. The fatigue symptoms include a total depletion of energy; a condition known as "brain fatigue," which is difficulty concentrating; and a feeling as though one's arms and legs are tied to concrete blocks. This fatigue may be caused by a sleep disorder, an overlapping condition that many FM patients also suffer from. Fibromyalgia is associated with a disorder called the alpha-EEG anomaly, in which sufferers are able to fall asleep without trouble but their sleep is interrupted by bursts of awake-like brain activity. Researchers also believe that there is a connection between Irritable Bowel Syndrome and FM, as about 40-70% of fibromyalgia patients suffer from Irritable Bowel Syndrome, a disorder that entails constipation, diarrhea, frequent abdominal pain, gas, and nausea. In addition to the pain and overlapping disorders associated with fibromyalgia, other symptoms include multiple chemical sensitivity syndrome; sensitivities to odors, noise, and bright lights; painful menstrual periods; and cognitive or memory impairment. In addition to causing extreme discomfort, these symptoms can severely impede one's lifestyle. Many FM sufferers have difficulty with mundane daily tasks such as cleaning, driving, using the computer, or enjoying the company of loved ones, because of the pain they experience during even minor physical activity or simply the extreme fatigue they often feel. For some patients, it is even more difficult to function: depending on the severity of the disorder, FM patients are often unable to go to work every day and must go on unemployment or disability insurance to provide them with income (3).
The dramatic effect fibromyalgia has on one's life warrants a lifestyle change, which can be as minor as the inability to clean for long periods of time or as major as the inability to go to work. Any negative lifestyle change can cause depression; essentially, a fibromyalgia patient has experienced a great loss, the loss of his or her former life. Thus, although it is not directly caused by FM, depression is often associated with fibromyalgia. Although more research needs to be conducted on the link between FM and depression, some researchers have concluded that it is a misconception that fibromyalgia causes depression or that depression causes the pain of fibromyalgia. For example, in a study of 69 patients, the researchers concluded: "Concurrent depressive disorders are prevalent in FM and may be independent of the cardinal features of FM, namely pain severity and hypersensitivity to pressure pain, but are related to the cognitive appraisals of the effects of symptoms on daily life and functional activities" (4).
Some of the reasons that lead to depression in FM patients are: a delayed diagnosis, which leads to decreased self-confidence; disrespectful medical treatment from doctors who disregard the severity of FM or who are not trained in pain management; poor support from loved ones; the severe chronic pain that can be experienced on a daily basis; the sleep deprivation associated with FM; and neurotransmitter deficiencies, which researchers are beginning to link to FM in studies (5). For both their depression and their pain, many FM patients are prescribed medicines that boost levels of serotonin and norepinephrine, neurotransmitters that modulate sleep, pain, and immune system function; examples include Paxil, Serzone, and Xanax. These psychoactive drugs target their lack of sleep, muscle rigidity, pain, and fatigue (6).
Sleep disturbances, which are also prevalent in sufferers of FM, are a cause of depression, but they are also a problem in their own right, as the fatigue can greatly impede one's lifestyle. As a result of sleep disturbances, many FM patients are too tired to complete daily tasks. Research seems to indicate that whatever causes the muscle aches and pains also causes the sleep disturbances. In short, the more poorly the FM patient sleeps, the worse the pain and fatigue he or she experiences. Sleep disturbances are prevalent in FM patients: in a survey of over 1,000 FM patients, fewer than 1% reported disturbed sleep before their illness, whereas 90% complained of sleep disturbances after contracting it. The effect of interrupted sleep is cyclical, as sleep is needed to repair muscles with growth hormone, 80% of which is produced during sleep. As a result, sufferers are unable to repair their muscles, which can lead to more pain (8).
The loss of sleep can also lead to memory dysfunction, another common problem for FM sufferers. Many complain of problems with memory, especially short-term memory. Medical researcher Stuart Donaldson, Ph.D., refers to this as a "fibro-fog," which is "decreased ability to concentrate, decreased immediate recall, and an inability to multitask." The connection between memory impairment and FM continues to be researched, as most studies focus solely on the pain. However, there have been some studies conducted on FM and memory. For example, in an article published in 1995, the researchers Schurr and MacDonald compared 134 subjects: two groups of chronic pain sufferers, one with lower back pain and one with whiplash, and a control group that did not experience any chronic pain. They found that the chronic pain sufferers experienced significantly more memory problems than the control group, even when depression, another deterrent to memory, was removed. Assuming that there is a connection between FM and memory impairment, what is the cause? Is there something wrong with the brain, or is it the environmental factors associated with FM that lead to memory impairment? This question remains debatable; however, new research on the subject continues to be conducted on cerebral blood flow, abnormal levels of some brain chemicals, and neuroendocrine abnormalities (7).
Of course, the largest factor influencing one's life is the severe pain associated with FM. Depending on its severity, FM can be as disabling as rheumatoid arthritis. Although the symptoms may persist over time, they typically do not worsen. A six-year study of 45 FM patients indicates that the symptoms can remain stable and even go into remission. However, there are aggravating factors that can bring a patient out of remission, such as changes in the weather, cold or drafty environments, missed sleep, increased stress, disruption to routine, and hormonal fluctuations (3). Unfortunately, there is no cure for fibromyalgia; sufferers must live with the disorder and simply improve their lives from a functional standpoint, which can include support and help from family and friends, taking tasks one at a time, and attending support groups (2). To deal with the pain, many FM patients find hot baths, massage, and heat applied to pressure points to be helpful. Exercise is also important; it is recommended that FM patients participate in daily aerobic exercise and stretching. Of course, these exercises must be mild to prevent aggravating the condition, but they help with pain management and act as a sleep aid. Examples of gentle exercises are shrugging the shoulders in a circular pattern or bending forward while keeping the legs straight. Many patients also require physical therapy several times per week. Other treatments include medications that affect the central nervous system, medications that improve deep sleep, regular sleep hours and an adequate amount of sleep (which is extremely important to the management of FM), as well as Tylenol and Advil.
Although at the present time there does not appear to be a cure, there remains hope for sufferers of the disorder, because new research continues to be conducted and more is being discovered about it. At the present time, however, fibromyalgia patients struggle both with the painful symptoms they experience and with the dramatic lifestyle changes they are forced to make. It is also very difficult for them because fibromyalgia is not given much attention: many people, including some doctors, do not know much about it. The lack of knowledge about the disorder, as well as skepticism about its very existence, is also very painful for a sufferer of fibromyalgia. As this paper states, the lives of fibromyalgia patients are very difficult for various reasons; therefore one of the most important treatments they can receive is the support, help, and compassion of loved ones.
1) Author Devin Starlanyl's guide to fibromyalgia, contains a definition and treatment information.
2) Contains a definition and description of fibromyalgia.
3) The Fibromyalgia Network's website, contains information about symptoms.
4) Contains information about a study linking fibromyalgia and depression.
5) Information linking fibromyalgia and depression.
6) Contains information on the relationship between fibromyalgia and depression.
7) Contains information linking memory impairment and fibromyalgia.
8).
Biological Children from Two Parents of the Same S Name: Maggie Sco Date: 2002-12-20 15:12:32 Link to this Comment: 4181 |
Genetic engineering and biotechnology are two topics that are constantly making headlines. Everyone has heard of Dolly, the sheep that was successfully cloned, though she suffered from accelerated aging. Word has just leaked out that the world's first human clone may be born around the middle of January 2003 (Footnote 1)(1). The amount of research being done is enormous and growing constantly. Cloning technology is advancing rapidly, and the amount of research money being spent continues to escalate. The ethics of cloning is still a very divisive issue, and I believe that topic is important enough to deserve a paper of its own. However, the issue I am interested in is the potential for this research to aid homosexual couples in conceiving children who are the biological children of both partners.
While plenty of information about cloning and genetic imprinting exists, from what I could find there are very few publicized articles or studies about the possibility of homosexuals using this technology to conceive biological children. Cloning, especially for the purpose of reproduction, is a highly controversial issue in its own right. Because it was extremely difficult for me to find any information on this specific topic, I am concerned that the research is generally ignored, or concealed, because it is aimed at helping homosexuals. Research involving any aspect of cloning with reproduction as the goal will probably inspire hundreds of protests, but it seems likely that such research will encounter more obstacles than normal because of the association with homosexuals. As with much of biotechnology, self-proclaimed experts in the field all report different degrees of possibility and difficulty. By the end of my research I could conclude very little besides the fact that this is an understudied or underreported topic. In a few different ways, cloning technology may be able to help homosexuals have biological children, but the problems are so numerous and difficult that the possibility lies in the far future.
Dr. Calum MacKellar, a biochemist who edits a journal on bioethics and runs a non-profit organization called European Bioethical Research, has suggested that scientific advances could result in a homosexual couple having a baby by combining the DNA of both parents. MacKellar claims that by using the technology that produced Dolly, humans could reproduce without the traditional fertilization of a woman's egg by a man's sperm (2)(3)(6)(8)(10). The process would involve making a "male egg" from one of the men that would then be fertilized by the sperm of the other partner. The "male egg" would be created by removing the nucleus from a donor egg and replacing it with the nucleus from a sperm cell of one of the men (2)(3)(6)(8)(10). Because the nucleus is what contains a cell's DNA, once the nucleus of the original egg cell was removed, the fetus, even if it developed in a woman, would not have any of her genetic information. While the fetus would (at least at first, until technology made further developments, such as an artificial womb) still need a surrogate mother in which to develop, the genetic information would all come from the two male parents.
Currently, the ability to create "male eggs" and fertilize them with another man's sperm is only theoretical. The procedure has been tested on mice by altering the nuclei of the sex cells (called pronuclei) to create zygotes (the cell formed by the union of two reproductive cells) from two male cells or two female cells (4). While these experiments have not been successful (4)(11), MacKellar maintains that the process is possible if 'imprinting' could be controlled (2)(3). MacKellar told The Times newspaper that "because researchers are now beginning to find techniques of stripping imprints from certain chromosomes, a successful outcome for the male egg may not be far away" (8). The fact that I cannot find details on how researchers and scientists plan to strip imprints from these chromosomes does not necessarily mean it is impossible. These remarks were made to the media, whose general audience is rarely interested in the scientific details, and so they were only meant to inform in broad strokes. MacKellar is also most likely trying to scare people with the potential of this scientific research (Footnote 2). However, because other sources say imprinting is enough of a problem to prevent the successful fertilization of "male eggs", I remain skeptical of how this procedure would take place.
Imprinting only happens in certain genes, which behave differently depending on whether they are inherited from the mother or from the father. They are marked early in development and retain this marking for the offspring's entire life. For a small number of genes, the maternally inherited copy is expressed and the paternally inherited copy is not, or vice versa (7)(12). One set of imprinted genes must be inherited from the mother and a second set from the father in order for normal embryo development (9). If only one version (maternal or paternal) of an imprinted gene is inherited, the result is usually fatal developmental defects (7). It seems that male and female genomes carry complementary information, both of which are necessary for normal development in mammals (2). Imprinting does not occur in the cloning process because clones are not made from sperm and eggs (13). It is easy to understand that, because of these complications, cloning from same-sex partners to aid in reproduction has more potential problems than conventional cloning.
A fundamental understanding of biology seems like all it would take to determine that two male sex cells could not successfully result in an embryo. We have all been taught since our first "birds and bees" talk that it takes a female egg and a male sperm to form an embryo. Skeptics of this fertile future in genetic research argue that chromosomes from a man's sperm, even if transformed into an egg cell, would conflict with the sperm of the second man (6). Part of this incompatibility would be because both sets of chromosomes would contain centrosomes (6). Centrosomes are located in the cytoplasm near the nucleus, contain the centrioles and serve to organize microtubules (5); they are only present in male sperm and are important during cell division. One article suggested an alternative to resolve this problem by taking a non-sex cell from one man and fertilizing it with sperm from the other partner (6). While this may resolve some of the problems, like having the two sperm cells clash, the somatic cell would have to go through meiosis without altering the imprints (4). At this time, this problem is probably just as difficult to solve as the one of imprinting.
The experiments on mice to assess whether same-sex gametes could form a zygote have mostly been unsuccessful. To determine whether the nuclei of male germ cells could develop in the same way as the nuclei of female germ cells, scientists performed experiments very similar to the process that MacKellar says will work for humans. They removed one of the haploid nuclei from an embryo at the one-cell stage of development and replaced it with a nucleus of the same sex as the remaining haploid nucleus (4). The resulting embryo then had genetic material from two different parents of the same sex, either two males or two females. When the embryo's genetic material came only from the father, the embryo would usually die prenatally, but the placenta developed normally. If the genetic material came only from the mother, the embryo would develop normally but the placenta would not (4)(11). While the process is possible, so far it has only produced successful embryos in birds and reptiles, which do not undergo gene imprinting.
I found it very interesting that all of the articles focus on male homosexual relationships and not female ones. I found one article that discussed the science of the male procedure and then said in parentheses, "Of course, this could be done with the DNA of two female eggs as well" (10). However, I do not know how the specifics of the procedure as I have described it could really be reversed and used for females. One article suggested that it might be easier to insert the genetic material from one egg into an intact egg from the other partner in order to create an offspring with two genetic mothers (3). If this is indeed a possibility, lesbian couples may be closer to having their own genetic children, but all of their offspring would be female because eggs carry only the X chromosome. I am not sure whether the lack of information on lesbian couples reflects an assumption that it is easier for women to find sperm donors than it is for men to find surrogate mothers (Footnote 3), or whether the process of creating an embryo from two women is simply more complicated.
Some speculators have pushed the possibility of male-male conception even further. Robert Winston, Britain's leading fertility expert, believes that a man could carry a baby to term and deliver it by Caesarean section (5). Winston thinks that it is already feasible for men to be "pregnant" with the aid of hormone injections and some surgical modifications (6). The embryo would grow in a pocket of tissue attached to the abdominal wall (6), which could allow a heterosexual man whose female partner was physically unable to carry a child, gay male couples, or single men who wished to be fathers to bear children. Winston admits that he doesn't "think there would be a rush of people wanting to implement this technology" (5). Among the possible problems with this technology, the men taking the hormones might experience internal bleeding or grow breasts (5). To me, this option for male couples seems less desirable than an embryo developed from both of their sperm and gestated in a surrogate mother. However, the fact that it might be a possibility in the future could be encouraging for them, because women willing to be surrogate mothers are probably rather difficult to find.
Current assisted reproductive technology is already a controversial issue. It is associated with multiple and premature births, and an overriding fear is that the technology would weaken humans' natural reproductive abilities. In vitro fertilization for a heterosexual couple averages around $10,000 per cycle (14). As in vitro fertilization becomes more commonplace, it raises questions about the division of who can or cannot have children based on material wealth. It is likely that the procedure for homosexual couples would be more expensive than current processes, and how many people (homosexual or heterosexual) could afford it? As there are not yet consistent laws regarding whether insurance should offer coverage for infertility treatment (14), it seems like a distant future in which insurance companies would be willing to cover same-sex reproductive assistance.
The more straightforward aspects of cloning are still complicated and unperfected. That there is not currently much research or literature on cloning to aid homosexual reproduction does not mean that it will never be an issue. Research might be done more readily if the implications weren't to help homosexuals. Also, the amount of money needed to finance genetic engineering and biotechnology research is huge, and finding money to finance research specifically targeted towards homosexuals would be considerably more difficult. Because of these financial aspects and social implications, it is an interesting and worrisome question whether social issues drive scientific research.
Footnote 1) Other experts in the field dispute this claim by Dr. Severino Antinori.
Footnote 2) Although not all of the articles made this very clear, MacKellar is against the use of egg nuclear transfers for homosexual couples to create genetic offspring.
Footnote 3) Which I recognize does not necessarily mean that they have less of a desire to have children with both of their genetic information, but it may explain why information about them is less frequent.
4) E-mail discussion with Tamara Davis, Assistant Professor of Biology at Bryn Mawr College, exchanged 12/17/02.
10) Salon.com: Mothers Who Think
Oil Spills Killing Nature Name: Amanda Mac Date: 2002-12-20 15:40:55 Link to this Comment: 4182 |
On November 8th 2002, an oil tanker, the Prestige, carrying millions of gallons of crude oil sank off the coast of Northwest Spain. As a result, tons of oil seeped into the ocean and made its way to the coast of Spain, and it creeps slowly toward France and Portugal every day. When the tanker sank, it brought down millions more gallons of oil contained within barrels. Reports now say that these barrels are eroding due to the chemicals in the water and the pressure of the deep ocean, releasing an estimated 125 tons of oil into the ecosystem daily. Furthermore, scientists predict that this will continue for the next 39 months. (1) This serious and tragic accident has caused, and will continue to cause, huge damage to the ecosystem. What happens to the environment when an oil spill occurs? How will those who are responsible for such catastrophes be punished? And what will governments do to prevent such an occurrence from happening again?
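To put the quoted figures in perspective, the small calculation below (a rough Python sketch; the 30-day month is my simplifying assumption, not a figure from the source) multiplies the estimated 125 tons per day by the predicted 39 months of leakage.

# Rough scale of the projected leak from the sunken Prestige, using the
# figures quoted above (125 tons per day for 39 months) and assuming
# 30-day months for simplicity.
tons_per_day = 125
months = 39
days_per_month = 30          # simplifying assumption

total_tons = tons_per_day * days_per_month * months
print(f"projected additional leakage: about {total_tons:,} tons")

That works out to roughly 146,000 additional tons of oil, which underlines why the sunken tanker is treated here as an ongoing disaster rather than a one-time spill.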
Oil spills occur most often due to natural disasters, malfunctioning equipment, and human carelessness. Petroleum is used as an energy source for many human machines: vehicles, heating systems, and electrical generation. This wide range of uses creates a large dependence upon oil by the human race, and that dependency means oil must be shipped over land and water. It is during transport that much of the oil either leaks or spills from its carrier and enters the environment. These spills may be due to car accidents, hurricanes, and problematic transportation. Most importantly, though, much of the oil spewed into the environment is due to small incidents of human carelessness. (2) For example, when people change the oil in their cars, they may dump the old oil onto the ground, causing it to leak into sewers and drain into streams and creeks. If all the oil dumped onto the ground were added up, it would equal or surpass the amount of a huge oil spill.
When oil leaks into the environment, it has huge repercussions for the plants and animals inhabiting that area: it poisons creatures through ingestion and direct contact, and it destroys habitats. The harmful effects of toxic levels of oil are poorly understood, especially for microorganisms such as plankton, bottom-dwelling organisms, and larval fish. Essentially, the animals towards the bottom of the food chain ingest the oil; when larger animals eat them, they too are affected by the oil, and so on. Fish take in oil through their gills, and if this does not kill them, it affects their ability to reproduce or causes deformed offspring. Also, slow-moving shellfish like clams and mussels are unable to escape the oil slick (the thin layer of oil that floats on top of the water) and therefore die. (3)
Direct contact causes marine mammals and birds to ingest a great deal of oil when they attempt to clean themselves. Carnivorous animals that then eat the bodies of those animals also end up ingesting toxic amounts of oil. Ingesting oil damages the internal organs, particularly the liver, and also affects the reproductive system. In addition, when birds come in contact with oil it damages their feathers, leaving them unable to fly and so heavy that they may even sink. It also interferes with the feathers' insulating properties, so birds in colder climates often die of hypothermia.
Lastly, living conditions are disrupted when tar-like clumps sink to the bottom of the ocean. (3) These clumps destroy living conditions for bottom-dwelling organisms and destroy spawning sites for fish and shellfish.
Long term, oil spills leave toxic materials that remain in the water and on the land for many years. These materials build up in the food chain to lethal levels and destroy or disrupt an area's ecosystem. At least one tier of the food chain, typically one of the lower levels, is affected greatly, if not eliminated. This causes a large disruption: the creatures that feed on that tier have nothing to eat, so they die off in turn, and so on. (2) One of the largest oil spills in United States history was the Exxon Valdez spill into Prince William Sound, Alaska, in March 1989. (4) An oil tanker ran aground on Bligh Reef, leaking 11 million gallons of crude oil. This spill had serious ramifications for the ecosystem. During the event, 15,000 sea otters died, mainly from ingesting the oil. (3) Over 250 bald eagles were found dead due to direct contact with oil on their feathers. (5) Common murres, another species of bird, were also affected greatly: it is estimated that nearly 20,000 died, and the oil spill disrupted the reproduction cycle of the species. Pink salmon, which are particularly vulnerable to oil toxins, were affected by the hydrocarbon contamination. Scientists found two types of effects: growth rates in both wild and hatchery-reared juvenile pink salmon from oiled parts of the sound were reduced, and embryo mortality increased in oiled versus unoiled streams. Luckily, ten years later these species seem to have recovered almost fully from the 1989 spill. But other creatures were not as fortunate.
The number of harbor seals has declined by 43 percent since the spill. Carcasses of 838 cormorants, a species of bird, were recovered following the oil spill, and many more were thought to be missing; since the spill, the species has declined in numbers significantly. Pacific herring spawned in intertidal and subtidal habitats in Prince William Sound shortly after the oil spill. A significant portion of these spawning habitats, as well as herring staging areas in the sound, was contaminated by oil, causing these fish to decline in numbers as well. It has been documented that Pacific herring have not recovered from the spill. These are only selected examples of the damage that this spill did to creatures within Prince William Sound. The damage to the animals was colossal. Humans were affected as well: commercial fishing was closed in the area because of the damage the oil did to sea life. This hit families who depended upon income from fishing especially hard and also affected the economy of the town.
After the Exxon Valdez oil spill, governments passed laws in order to prevent such occurrences from happening again. The main changes occurred within the US: Congress passed the Oil Pollution Act (OPA) in 1990. These changes included requirements on the ships carrying oil: new tankers must be built with double hulls, and by 2015 double-hull tankers will be mandatory. In cases where the responsible parties are not able to pay for damages, the OPA provides financial aid for cleanup. (6) Similar laws were passed more recently in Canada and the European Union, all in the hope of preventing oil spills from occurring again.
When the oil tanker Prestige spilled over 60,000 tons of oil off the coast of Spain, it killed over 15,000 birds, and each day volunteers are finding hundreds more. There are growing fears that Iberia's 10-25 pairs of guillemots will have been wiped out. There are only two tiny colonies of between five and 11 pairs in Spain, in the middle of the area affected by the spill, and several birds have been found dead. Concerns about extinction have been growing, and it is now predicted that these birds will die out because of this spill. (7) This is only the beginning of the tragic event: more and more oil slicks are found every day, and more and more consequences follow. So, who is responsible for this event? The tanker's murky ownership structure, arranged to avoid high taxes and costs, has made it difficult to seek financial aid for cleanup. The Prestige flew a Bahamian flag. It was managed by a Liberian company, Universe Maritime, from offices in Athens. A Liberian company, Mare Shipping Inc, owned the ship. And Crown Resources, which is based in Switzerland, chartered the ship. (8) So far, therefore, punishment has been minimal and confusion very high.
Furthermore, it seems that not only has it been difficult to identify the particular parties responsible for the spill, but others have also been unwilling to help. Spain has reluctantly cleaned its own coast; however, when it comes to cleaning the water it has not been cooperative. France and Portugal have mostly moved in only for fear that their own countries will be affected. All of this help for the environment comes only out of forced necessity for these countries. One would think that after a huge environmental disaster that kills hundreds of other species, and that was caused by our own species, people would jump at the chance to clean up this horrible event. Unfortunately, humans do not learn that quickly. It took the Exxon Valdez spill to push the US to pass preventative laws, and it wasn't until other oil spills that the EU and Canada did the same. When will this end? And how much longer can the ecosystem continue to absorb such carelessness?
Luckily, there is something we can do: raise consciousness about everyday oil spillage, such as dumping car oil on the ground; make sure that others are educated and aware as well, so that human carelessness is lessened; and visit the websites below to learn more about oil spills and spread that knowledge.
1) "Spain Tanker Could Leak Out Oil Until 2004," Discovery homepage.
2) "A Fact Sheet on Oil Spills," AWMA homepage.
3) PBS facts, PBS homepage.
4) "What's the Story on Oil Spills," NOAA homepage.
5) Oil Spills, National Fisheries Service Online.
6) "Changes in US Shipping Laws," US government law.
7) "Spanish government slammed for Prestige spill," Independent Online News.
8) "Chirac Threatens Legal Action Over Prestige Tanker Disaster," Yahoo News homepage.
Controversial Side-Effect of Oral Contraceptives Name: Emily Sene Date: 2002-12-20 16:25:48 Link to this Comment: 4183 |
Oral contraceptives (or OCs) work by introducing new levels of hormones to the body. There were two kinds of pills available in the 1960s-1970s: they contained either 50-80 mcg of estrogen or a dose of progestin roughly ten times higher than is used today. Research showed that these hormone levels were unnecessarily high, and 98% of OCs on the market today are low-dose pills, containing a blend of 35 mcg of estrogen and a small amount of progestin. (1) There are also prescriptions available that contain only progestin, which work slightly differently from blended pills; they mainly serve to thicken the cervical mucus to prevent sperm from joining with the egg. (4) Both combination and progestin-only varieties prevent a fertilized egg from implanting in the uterus.
The hormone levels were initially much higher than necessary because no one knew exactly how much was needed to prevent fertilization. With the advent of combination pills, new research revealed that a blend of estrogen and progestin was as effective as higher doses of either hormone. (1) Although the separate, higher doses were effective, they were undesirable because of the nausea and bloating they caused. Today's low-dose pills reduce the risk of unpleasant side effects; however, there can still be a number of complications depending on which formula is prescribed and on individual hormone composition. Research is still being conducted to determine the combination of ingredients that will minimize additional symptoms. For now, there are many brands of pills available, and through trial and error patients can determine which formula is most compatible with their body chemistry.
Some changes are inevitable. Within the first few months of use, the body experiences a transition in hormone levels. This adjustment period includes several mild changes in the menstrual cycle, most notably bleeding between periods (called "breakthrough bleeding") and light or skipped periods. (5) Other common symptoms include nausea, breast tenderness, and changes in mood and sex drive. (4) These typically subside after three to six months of use, and the nausea can be prevented in most cases by taking the pill with a meal.
Use of oral contraceptives has also been linked to several more serious conditions. Previous research made a connection between the pill and an increased risk of breast cancer. However, more and more studies are disproving this theory: there is a slight correlation due to the hormonal effects of estrogen and progestin, but overall the health benefits outweigh this risk. (1)(5) A loose connection has also been suggested with cervical cancer, since long-term use of the pill can alter the cells in the cervical canal and make them more susceptible to the disease (5); however, this theory is also unproven. It is definitely inadvisable to smoke while on a prescription for oral contraceptives, as this dramatically increases the chances of developing blood clots, a risk that is present even in non-smokers. (4)
Certain brands of OCs have a welcome side effect. Ortho Tri-cyclen is proven effective in the treatment of acne. (6) This skin condition is linked to hormone levels throughout the menstrual cycle. When oral contraceptives are used to regulate the hormones, they deplete several acne-causing agents. Up to 80% of patients in one study noted an improvement in the condition of their skin. (6) The pill is also prescribed to reduce cramping and increase menstrual regularity. It may decrease the risk of acute pelvic inflammatory disease and iron-deficiency anemia. (6)
One of the most debated side effects of the pill is weight gain or loss. This is a very sensitive issue among women, and many will refuse oral contraceptives if they believe the pills will cause them to gain weight. It doesn't help that the most common users of OCs are young women, who are naturally susceptible to weight gain; the pill is often blamed for a phenomenon that would have occurred anyway. (2) One study observed the weight of 128 women during their first four cycles of OC use, with surprising results: 52% of the women remained within two pounds of their starting weight and 72% had no gain or loss. In another trial of Ortho Tri-cyclen, more women discontinued use of a placebo than of the actual drug because they felt it was making them gain weight. (2) Obviously, any fluctuation in hormone levels could affect metabolism or eating habits, but the myth regarding this side effect is grossly exaggerated compared to the scientific data. Body image is an important issue for women, and if they believe oral contraceptives will compromise their physical appearance, they will continue to rely on less effective means of birth control.
The issue of side effects has caused the most controversy surrounding oral contraceptives. They have been linked to almost every health problem a woman can have, from cancer to weight gain. Myths such as these perpetuate themselves and become larger than life. Researchers today are discovering that most of the risks involved with oral contraceptives are negligible or totally non-existent. It is important for health care providers to stay up to date on these developments so that they can provide women with the information they need to make an educated decision on birth control.
2) Oral Contraceptive Pills and Weight Gain
3) Lycos Health with WebMD, Effects of Hormones
4) Planned Parenthood, Facts About Birth Control
5) Lycos Health with WebMD, Info on Birth Control Pills
Reflexology Name: Lawral Wor Date: 2002-12-20 16:46:17 Link to this Comment: 4184 |
This time of the year, with Christmas shopping, loads of family, crowded airports, and, oh yeah, finals, we could all use a good massage. When you get that coupon for a spa day in your Christmas stocking, you have a few different kinds of massages to choose from, such as full-body, chair massage, deep tissue, recuperative, circulatory, Swedish massage, shiatsu (pressure point), and reflexology. Each of these types of massage is easy to understand, and one can easily see how it would be beneficial after a long day at the mall, airport, or Canaday, except for reflexology. In reflexology, the entire body is accessed through pressure points in the feet. For example, that knot in your shoulder from carrying books back and forth to your carrel or packages around King of Prussia will supposedly melt away by working on a pressure point in the tendon below your pinkie toe on the bottom of your foot. (1) I, personally, think that reflexology is a load of phooey that results in a really good foot massage...that does nothing for the knots in your shoulders, but for those who believe in reflexology, all of life's ailments can be discovered in or fixed by working with your feet.
There are some disputes over the origins of reflexology and its current uses. Of the many national and international groups for reflexologists, there was a lot of conflicting information on both points. The Reflexology Association of America (3) states that all ancient cultures practiced reflexology. The International Institute of Reflexology (2) and the British Reflexology Association (5) both hold that reflexology originated in Egypt. The Reflexology Research Project (4) says that reflexology was practiced in ancient Egypt, China, India, and Japan. Even with the disputes in origin, the basic practices of reflexology remain the same. The charts found on most sites, when they will reveal their secrets, even correlate with each other. The easiest place to find charts, however, is in health stores in the form of reflexology socks. (7)
Like most forms of massage, reflexology is often performed at home by unlicensed practitioners. Products such as reflexology socks can help people treat the conditions they think will be alleviated by reflexology. The Frequently Asked Questions sections of most of the official reflexology sites I found were flooded with questions about "my wife" or "my friend" or "can I help," and there were very few questions about referrals to professional reflexologists. (3) (4) Some of the sites seemed to have come to terms with the fact that most users of reflexology will practice it, untrained and unlicensed, in their own homes. These groups tried to give sound advice in response to the questions asked and stressed the need for reflexology to be consistent: it will not work without consistency of practice and patience. Reflexology can only work to alleviate bodily ailments when used over time; it does not work overnight. (4)
Reflexology is usually marketed as massage and when the average person thinks of reflexology they categorize it as a form of massage. Many spas even advertise reflexology as one of the types of massage one can receive. Those that practice reflexology in their homes also consider it a kind of massage, for the most part, although a kind of massage that is more beneficial than your run-of-the-mill back rub. Reflexologists, however, stress that reflexology is not massage; it is something much more. While massage stimulates the body tissue that is being directly touched, reflexology stimulates body tissues through the nervous system, only touching the feet and, sometimes, the hands. (3)
Many of the differences between massage and reflexology lie in how each is performed and under what circumstances. The main difference is, again, the amount of the body that needs to be touched. In regular massages, by professionals at least, the person receiving the massage must be practically nude and most of the body is touched. When being treated with reflexology, only socks and shoes must be removed, as those are the only areas of the body that will be touched by the therapist. Another of the big disparities between massage and reflexology is that massage does not claim to heal anything except sore, tense, or stressed out muscles, with your occasional chi cleansing on the side. Reflexology, on the other hand, claims to heal more serious problems than muscle tension. (3)
While all the information I have found about reflexology stresses that reflexology should be used in conjunction with more traditional medical care, not instead of it, many reflexology sites and collectives do seem to think that reflexology can cure many of your minor and basic ailments, as well as balancing your karmic energies. Take the start page of the International Institute of Reflexology, for example. It states, "If you're feeling out of kilter/ Don't know why or what about/ Let your feet reveal the answer/ Find the sore spot work it out." (2) This same group of reflexologists does state that while reflexology is an advancement in the health field, it does not claim to be medicine. The Reflexology Association of America, however, compares reflexology to chiropractic work, osteopathy, and somatic practices. Again, the discrepancies between the major organizations on this matter have not really affected the way reflexology is practiced. It does, however, make me a little nervous that the people who are specialists in reflexology can't decide whether or not it is medicine.
In my research I could not find any sites against reflexology. I looked extensively for sites that backed up my opinions, but couldn't find them. I couldn't find any sites about how your horoscope isn't true either though. While I do think that reflexology has its valid, concrete uses, mostly relating to feet, I think it is the type of thing that needs to be lumped in with horoscopes, crystals, auras, and chis. You either believe in it or you don't and no amount of information on either side is going to convince you otherwise. Many of the people who use reflexology for themselves use it as a last resort. Many of those writing in for advice from national organizations mentioned incurable conditions that they hoped reflexology would help with such as cancer of various parts of the body, psychological disorders, and infertility. (4) Since conventional medicine has failed these people, it must be easier for them to believe that a good foot massage can stimulate the nervous system into action to make them healthy in a way that surgery, treatment, and medicines could not.
For myself, I still do not think that reflexology can cure problems in other parts of the body through nervous system stimulation. Reflexology seems like any other type of placebo treatment. It helps the patient to feel better simply because the patient believes that it will. With that line of thinking, reflexology will never produce results for me. I still enjoy a really good, deep, and thorough foot massage, but I think it is just that. I do not believe, as the Reflexology Wallet Chart would indicate, that my fourth toe allowed itself to be broken to alert me of my cold or that, conversely, my sinuses are weeping for the broken toe. I'm pretty sure that I just fell down the stairs and then, in a completely different set of circumstances, caught a cold. That is, of course, up for debate.
1) Reflexology Wallet Chart from the International Institute of Reflexology, copyright 1980.
2) International Institute of Reflexology
3) Reflexology Association of America
4) Reflexology Research Project
5) The British Reflexology Association
TMJ: It Sucks Name: Heather D Date: 2002-12-21 00:24:06 Link to this Comment: 4188 |
As I begin to approach my 21st birthday, I'm slowly starting to realize that I am not a child anymore. I no longer have someone following me around, cleaning up after me (not that I ever really did...) and my mom is not there to hold my hand. Then there are more material things that I miss, like nap time, play dough, and jelly shoes. However, as I got older I never thought one of the things that I would miss about my childhood would be sandwiches. Yes, you heard me, sandwiches. As I've grown older, I've developed a condition called TMJ (temporomandibular joint) dysfunction, which prevents me from opening my mouth all the way. Every time I try to bite into anything bigger than a chicken nugget (the small McDonald's kind), you can hear a loud pop come from the right side of my jaw. I have seen several specialists about this disorder, but all have told me that there is nothing that can be done. I simply need to keep my mouth shut, eat soft foods, and, oh yeah, never yawn. I am very lucky that I do not have a more advanced version of the condition, because those who do often have constant headaches, blurred vision, dizziness, and even hearing loss. TMJ dysfunction is a serious problem, yet unfortunately not enough research is being conducted to find something to alleviate the pain.
TMJ dysfunction is actually a very widespread condition, afflicting nearly 40 million Americans (1). No one is really sure what causes it, nor how to cure it, because there is no typical version of the disorder. There do not seem to be any linking factors among those who suffer from it, except for the fact that nearly 90% of all TMJ sufferers are women. The dysfunction occurs when the jaw joint either becomes dislocated somehow or the cartilage in the joint wears down. This can cause excruciating pain because the jaw bone is hitting either bone or sensitive flesh, and it also causes the nerves to stretch and become agitated (2). Because the TMJ is in such a sensitive area, there can be many symptoms and many very dangerous side effects as the condition progresses. The joint directly affects the mouth, neck, ears, and, scarily enough, the blood flow to a person's brain. With the TMJ in such a dangerous spot, you would think that the medical profession would put more effort into finding a treatment, or at least the cause.
There is a lot of speculation about what really causes this joint to slip out of place. There is no solid evidence that TMJ dysfunction is caused by aging, stress, or bad genes. For a long time it was thought to be a genetic disorder, but after careful research, doctors have come to the conclusion that it is not. Some doctors treat it like they would a knee injury, because in both cases they are dealing with cartilage giving out. However, one cannot really cut back on the use of this essential joint; it is used whenever a person swallows, chews, talks, or yawns. Scarily enough, some researchers are starting to link this problem with many everyday activities that one would not usually connect with the jaw, like carrying a heavy backpack, playing an instrument, cheek biting, mouth breathing, or even reading a book (3)! The muscles of the back and shoulders seem to have an effect on the TMJ, which can also be seen in the fact that as a TMJ disorder gets worse, many patients complain of back, shoulder, and neck pain.
Doctors have experimented with several treatment methods, but nothing conclusive has been found. They tried putting Teflon plates in sufferers' jaws to help smooth the joint, but soon after the surgery most patients complained of even worse pain, due to the jaw slipping off of the Teflon plate. The most common therapy these days is an appliance that snaps into the mouth to keep the jaw forward in its correct position (4). Unfortunately, this works for only a very few sufferers of TMJ dysfunction. At best, someone with TMJ problems can try yoga and relaxation therapy, but even that is a far cry from a cure.
Not only is there a desperate need for therapy, there is also a need for money to fund research and get people the help they need. Many insurance companies will not pay for alternative therapy to help ease jaw pain and trauma; they simply shell out money to give people medications to numb the pain. Medicine, however, cannot stop a person's jaw from locking in place or popping every time they chew. Medicine cannot bring back a person's hearing after it is gone, and it sure as hell cannot take away all the pain of this horrid problem. TMJ dysfunction is an incredibly painful problem, and there is no reason in our modern world that someone cannot figure out a way to fix it.
1)TMJ Care Hawaii
2)Pain and the TMJ
3)The TMJ Association, Ltd. Changing the Face of TMJ
4)Headache Stop
The Science of Smelling Attractive Name: Meredith S Date: 2002-12-21 00:38:26 Link to this Comment: 4189 |
You see the advertisements all over the place; in every department store, you are bombarded with the smells that are supposed to make you more attractive to another person. Can these perfumes, made of fats and chemicals, actually make a person more attractive? Until recently, science would have said no, but more and more research is now showing that humans can, and in many cases do, rely on their noses to find a significant other. The vomeronasal organ (VNO), a sensory organ in the nose once thought to be vestigial, is used by most mammals to receive chemical signals released by other animals. These signals, sent via chemicals called pheromones, carry messages about whether the animal is ready to mate, about its genetic make-up, and possibly more, although scientists are still researching. Regardless of what else they might find, it is more than apparent that pheromones play a large part in how lower mammals live, and, though to a lesser extent, even humans are affected.
Pheromone comes from the Greek words pherein, to convey, and horman, to set into motion or urge on (1). Pheromonal communication was first described by the French entomologist Jean-Henri Fabre in the late 19th century. He noticed that a male moth could detect a female moth even in the presence of other, very strong chemicals. Later, in 1959, official testing of pheromones commenced, with scientists finding samples of the chemicals that insects emitted when trying to communicate with one another (2).
Scientists have known about the VNO in animals for years. In 1800, an odd group of cells was found by L. Jacobson in a patient's nose. He thought that they were non-sensory, and then ignored them. The topic of the human VNO was closed, and most textbooks stated that humans "lost" their active and useful VNO as they developed in the womb, retaining only a vestigial and imperfect organ at birth. But in the mid-1980s the debate was reopened, although scientists noted that the human VNO is not in the same location as in other mammals (3).
These tiny, cigar-shaped sacs receive signals from chemicals emitted in the sweat of others. In the case of rodents, these cells do not connect with the main olfactory bulb, the part of the brain in which smells are registered; instead, the signals are sent to the accessory olfactory bulb, and then on to the part of the brain which controls sexual reproduction and maternal behaviour (4). Although it is not proven, many scientists believe that the same phenomenon holds in humans, meaning that the VNO is not connected directly to the part of the brain which recognizes smells, but rather to the part that controls reproduction. All in all, this could mean that humans smell chemicals and react to them without even knowing that they are doing so.
Humans, as well as all other mammals, have glands throughout their bodies. Although they are concentrated in certain areas, glands are found at the base of every hair follicle, with more under the arms and in the genital region. These glands produce chemical compounds, carried in sweat, that have been shown to affect others of the same species.
One proven example of these compounds affecting others of the same species is the McClintock effect, otherwise known as the "women's dormitory effect" (5). In 1971, Martha McClintock published data showing that women who live together tend to ovulate at the same time. The menstrual cycle has three distinct phases: the menses, or actual blood flow; the follicular, or pre-ovulation period; and finally, the luteal, or post-ovulation period (6). McClintock established that two different pheromones were emitted by women. One pheromone, produced during the follicular stage, accelerated ovulation, shortening the cycle; the other, produced during ovulation, delayed it, making the entire cycle longer. As a result, women who lived in close proximity to each other are thought to have sensed one another's pheromones and converged their individual cycles into one (7).
Another example which shows that humans can sense one another based only on smell is a study in which small infants were shown to recognize the scent of their mother. When an infant was presented with a gauze pad worn by its mother versus one worn by a stranger, the infant was attracted to its mother's pad and repulsed by the foreign scent (8). Scientists believe that this experiment shows that smell among humans is not completely learned, and it further supports the idea that there is another sensory organ at work.
Infants are not the only humans who consciously or unconsciously use their sense of smell to determine the identities of others; adults do too. In a study using mice, it was determined that "...in strongly inbred strains of mice that share virtually all the same genes, the individual mice will choose mates that differ only from them in terms of the major histocompatibility complex (MHC)." The MHC is a set of genes that determines the make-up of the immune system. Different individuals have different MHCs, making them resistant to different diseases and helping improve their chances of survival (9). Scientists believe that the proper pheromones, produced at certain times, alert neighbouring mice to whom and what, in terms of genetic make-up, they are thinking of choosing. In choosing partners with different MHCs, these mice are actively helping diversify the gene pool from which the resulting offspring may come.
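To make the mate-choice rule described above concrete, here is a minimal sketch in Python of "prefer the mate whose MHC differs most from your own." Everything in it is hypothetical: the allele names, the candidate mice, and the idea of scoring candidates by counting shared alleles are illustrative stand-ins, not the method used in the actual studies.

```python
# Toy illustration of MHC-dissimilarity mate choice.
# MHC genotypes are represented as sets of allele names; all data are invented.

def mhc_overlap(a: set[str], b: set[str]) -> int:
    """Number of MHC alleles two individuals share."""
    return len(a & b)

def preferred_mate(self_mhc: set[str], candidates: dict[str, set[str]]) -> str:
    """Pick the candidate whose MHC overlaps least with one's own."""
    return min(candidates, key=lambda name: mhc_overlap(self_mhc, candidates[name]))

if __name__ == "__main__":
    me = {"A1", "B7", "C3"}
    candidates = {
        "mouse_x": {"A1", "B7", "C3"},   # nearly identical MHC
        "mouse_y": {"A1", "B8", "C4"},   # partly different
        "mouse_z": {"A2", "B8", "C4"},   # completely different
    }
    print(preferred_mate(me, candidates))  # -> mouse_z
```

On this toy rule, the inbred mouse picks the candidate with the fewest shared alleles, which is the gene-pool-diversifying choice the paragraph describes.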
The link between pheromones and genes may seem hard to establish, especially in humans, but it nevertheless exists, and is growing stronger as more research is conducted. In one study involving humans, women seemed to choose their husbands or friends partly by how they smelled; notably, their friends produced the same pheromones as their fathers. By examining the clothing worn by close friends, scientists concluded that most women prefer the smell of men close to, but not exactly the same as, their own (10). Another study involved a community of Hutterites, a religious group that has segregated itself from other populations for hundreds of years, producing an inbreeding effect much like that of the mice above. These women, too, chose to marry men with MHCs different from their own (11). These two studies support the conclusion drawn from the mouse experiment above, that genetic variability is sought after when the body is thinking about reproduction.
But pheromones are not just a subconscious phenomenon with only genetic results. Aromatherapy is an entire industry based on changing the moods of humans just by surrounding them with a pleasing scent. Scents have been shown to trigger mood swings and other psychological responses, and have even been implicated in conditions such as Gulf War Syndrome (12). With more research being completed on pheromones, it has been shown that they can also improve the human mood. The smell of another person may make a human feel more aggressive or sexually aroused, but it can also make them feel calmer. Both men and women who smelled gauze worn under the arm of an elderly woman reported feeling "uplifted," whereas the smell of a young man had a "depressive effect" (13).
Scientists are not the only people researching the scent of love. Commercial corporations such as perfume manufacturers are very interested in how much of a role pheromones play, and in whether they can be mass-produced. Palatin Technologies is just one of the many companies researching how to manufacture the perfect scent to attract others. Already a protein compound called PT-141, a copy of a hormone found in and acting on the brain, has been created and shown in mice to increase sexual activity in females (14). And Palatin Technologies is not the only company researching and profiting from marketing how humans smell. Every year, numerous companies, such as the Erox Corporation (15), spend billions and billions of dollars, as do consumers, on trying to help men and women attract each other.
Doctors are also looking into pheromones. Another possible product that may emerge from this research is an alternative form of birth control. Since pheromones are now known to alter the menstrual cycle, a safer and more natural form of contraception might someday consist of taking additional pheromones at certain times of the month.
These two products are only some of the possibilities that pheromones provide, both in terms of products and in terms of better understanding the human body. As the need for more research grows and scientists realize more and more the importance of pheromones, the field of study of the olfactory system is expanding. The true extent of how much the human body reacts to smell is becoming more apparent every day. Looking for other telltale signs of pheromones, scientists have found suspected pheromonal chemicals in human saliva and semen (16). There are critics of the idea that a subconscious sense of smell dictates, in part, the lives of humans. One large criticism is that humans are far more complex than mice, and that they use body language as well as words to express themselves. Research currently being conducted on primates, who like humans have a very complex system of communication involving facial expressions as well as sounds equivalent to human words, is proving these critics wrong (17). But finding reliable evidence that is not tainted by the drive to make a monetary profit is hard to do. Nevertheless, more and more experiments are producing results indicating that pheromones play at least a part in human behaviour. How strong these pheromones are has not been completely established, but that has not stopped researchers from continuing to try to find out.
1)OED Online
2) http://www.colorado.edu/iec/FALL297RW/Pheromones.html
3) http://www.colorado.edu/iec/FALL297RW/Pheromones.html
4) www.hhmi.org/senses/d210.html
5) www.santesson.com
6) http://www.mckinley.uiuc.edu/health-info/womenhlt/ir-mense.html
7) www.newscientist.com
8) http://www.colorado.edu/iec/FALL297RW/Pheromones.html
9) www.apa.org/monitor/jan98/phero.html
10) www.nature.com
11) www.apa.org/monitor/jan98/phero.html
12) www.bbc.co.uk
13) www.bbc.co.uk
14) www.ananova.com
15) euclid.ucsd.edu
16) www.libray.ubc.ca
17) http://www.colorado.edu/iec/FALL297RW/Pheromones.html
Soy and the Effects it May Have on Humans Name: Margot Rhy Date: 2002-12-21 00:45:28 Link to this Comment: 4190 |
I wanted to tell the real, full story about soy, but instead I am going to explain why I cannot tell the real, full story of soy. It all begins with this theory I have about Asian men. Asian men are viewed as really effeminate by the standards of Western culture. My dad is Korean, and so I witness this firsthand. But then I wondered whether there is some biological support for why Asian men are stereotyped as being small and not so masculine; I thought that maybe it was connected to how much soy they eat. I know that Asians do not have entirely soy-based diets; my dad does not just eat tofu and miso, but he does take in some amount of soy at least once a day, every day, and has done so for his whole life. I got to wondering whether, since he has this diet (keep in mind that my dad only eats traditional Korean food), and his father had a similar diet, and his father had a similar diet, and so on, generations and generations of steadily including soy in meals might have had an effect on Asian men's estrogen and testosterone levels. Perhaps soy did lead to producing men with different characteristics, those judged as "effeminate" by the West. It is just a theory, so I researched to see if anyone else thought about this and did scientific studies measuring the effects of soy on the male body. If only the story of soy could be so simple.
The information I found on soy leads to many different conclusions, and deciphering fact from fiction becomes a question of "whom are you willing to trust?" The story of soy is messy because everyone- nutritionists, the "Food" section in local papers, women going through menopause, cancer specialists, body builders, soy farmers, the International Soy Symposium- talks about the effects of soy on the human body and presents their opinion as fact. The scientific truth about soy became very unclear to me while researching this paper because I had to start investigating the motivations people have for either promoting or discouraging the consumption of soy in America.
Naturally, soybean farmers promote eating soy. In fact, most of the websites that only discussed the positive effects of soy were backed by companies who stand to make money off of soy, such as www.talksoy.com, sponsored by the United Soybean Board, or www.soyfoods.com, the U.S. Soyfoods Directory sponsored by the Indiana Soybean Board. Furthermore, on October 25, 1999, the FDA decided to allow a health claim for products "low in saturated fat and cholesterol" that contain 6.25 grams of soy protein per serving (1). This meant that cereals, smoothie mixes, meat substitutes, and all other soy-based products on the market with a soy protein serving of 100 grams could now be sold with labels sharing the benefits of soy. My question, though, is how could the FDA not approve soy when Illinois, Indiana, Kentucky, Michigan, Minnesota, Nebraska, Ohio, North Dakota, Maryland, Delaware, Arkansas and South Dakota all have soybean councils and make so much profit off of growing soy? In fact, such state soybean councils provide the money, at least $2.5 million, for scientific research that supports the positive effects of soy (1). This research tells consumers that soy will "prevent heart disease and cancer, whisk away hot flashes, build strong bones, and keep us young forever," thus making the position of soybeans in the marketplace so strong (1). When science meets industry, when there is money to be made, the story of soy becomes very complicated.
The other information I found about the effects of soy on the body led me to a state of even more confusion. Some suggest that soy isoflavones promote bone formation in women, and that these are especially helpful when the postmenopausal decrease in estrogen increases the risk of osteoporosis. One study showed that women who consumed soy isoflavones had significant increases in bone mineral content and density in their lumbar spines, compared with a control group (2). However, nutritionists Sally Fallon and Mary G. Enig say that soy blocks calcium and causes a deficiency of Vitamin D, both obviously important for healthy bones (3). Therefore, as a consumer, it is difficult to understand just how beneficial soy is. Advocates of soy are not very clear on how much soy to take, they just say to eat more of it; nor are they clear on who exactly gains the most benefit from soy.
Even when there is a promotion of soy to a specific group, such as older men, the statistics are still puzzling and do not lead to a clear conclusion. For example, there is a theory that Asian men are at a lower risk of having prostate cancer than American men because they take in more soy. Every website I came across in a general Google search for "soy Asian Men," such as www.vogels.com, www.wral.com, and alwaysyourchoice.com, only discussed how soy reduces the risk of developing prostate cancer. However, if soy really decreases cancer, then why does the FDA not allow any claims about cancer prevention on food packages? I was puzzled until I came across advertisements claiming that "the Japanese, who eat 30 times as much soy as North Americans have a lower incidence of cancers of the breast, uterus, and prostate" (1). No other scientific studies I found argued against this. However, it was very convenient of this advertisement to ignore that Asians in general have much higher rates of other types of cancer, such as cancer of the esophagus, stomach, pancreas, liver, and thyroid. Therefore, the same research that links low rates of reproductive cancer to soy also connects high rates of thyroid and digestive cancers to these same soy foods. Studies have shown that soy causes these types of cancer in laboratory rats (1). I think this is a sign that anyone can support research, analyze the data, and interpret it however they want to, especially if they can make money in the process. A last example of this relates to the information I found which supports my personal theory about soy and Asian men.
There was one study I found which gave the information which I was originally looking for. It suggests that since soy protein is estrogenic, it can lower testosterone counts, and can even kill testicular cells. A research group found that "total and free testosterone concentrations were inversely correlated with soy product intake" (4). However, this data was presented in Testosterone Magazine, which I doubt has any real standing in the scientific community as a source of credible findings.
Perhaps my theory about soy and Asian men will just stay a theory. I do not believe that the story of soy will be straightened out any time soon, especially since it is so entangled with money. A group of researchers can interpret their data, or choose not to share all of their findings, and present whatever information they want to about soy and its effects on the human body. What this creates is a market full of soy-based products, and a public who wants to make healthy choices but gets little help from the industry in sorting through all of the information. Depending on the newspaper or health magazine you read, the story about soy is always unclear, and I get the impression this is because it is always half told.
1) www.leaflady.org/new-soy.htm , website promotes herbal healing methods, has many articles about natural health options
2) Abatra Technology, Co., Ltd , website of Abatra Technology, Co., Ltd, a natural health product center
3) www.yourtruhealth.com/main/information/foodadditives/Soy.asp , nutrition website, promotes commercial products
4) Testosterone Magazine , article from the Testosterone Magazine website
Give a Little Bit: Touch and Depression Name: Chelsea Ph Date: 2002-12-21 07:14:51 Link to this Comment: 4193 |
"I'll give a little bit
I'll give a little bit of my life for you
So give a little bit
Give a little bit of your time to me
See the man with the lonely eyes
Oh take his hand
You'll be surprised"
-Supertramp
What is depression?
Depression is an illness that affects all aspects of a person's life and the lives of those around them who care about them. It is surprisingly widespread: approximately one in five Americans will suffer from depression at some point in their lives. Three main types of depression are recognized by the National Institute of Mental Health: major depression, which is the most severe and affects a person's ability to work, eat, sleep and function; dysthymia, which is milder in degree but lasts for longer periods of time and may be interspersed with bouts of major depression; and bipolar disorder, or manic-depression. For the purposes of this paper, I will be focusing on major depression and dysthymia. The clinical treatments available for depression are therapy and medication. However, these treatments leave out a serious element of a person's life which is affected: personal relationships, human contact. Touch is the first sense to develop in humans and is the most important for the establishment of good, healthy social relationships, positive self-images and a sense of acceptance and love.
Why is it relevant to life?
Why is the study of depression relevant in our everyday lives? One reason is that, statistically, everyone in America either knows someone who is depressed or is depressed themselves. Depression is a serious illness which should not be taken lightly by anyone. Many times, those who are friends or relations of a depressed person feel frustrated and helpless to reach out to the person. One of the easiest and most helpful things a person can do is to reaffirm their relationship in a positive way. Touch makes someone feel cared for, accepted and loved.
In order to understand why touch may help some people cope with depression, it is necessary to know why people get depressed. Unfortunately, there isn't always a reason. A person may certainly become depressed when coping with loss (the death of a loved one, the end of a relationship, a miscarriage), an illness of their own or of others around them, an upheaval in their lives (such as losing a job, a child being adopted, moving away from family or friends), or for genetic reasons, but depression does not have to be linked to these. The chemical causes of depression can be present without any sort of concrete cause linked to them.
What is the chemistry behind depression?
Clinical depression is characterized by the interactions of three neurotransmitters: serotonin, dopamine and norepinephrine. Neurotransmitters are chemicals passed from one nerve cell to another across synapses, relaying messages along the nervous system by triggering electrical potentials. Briefly, serotonin helps to regulate sleep and body temperature, dopamine is involved in the experience of positive feelings, and norepinephrine increases mental alertness (13). These basic chemicals control the things we need most to function in the world: sleep, a sense of purpose and mental clarity. If the balance is disturbed, the physical symptoms manifested are referred to as depression. These neurotransmitters also affect each other, so an interruption in one can work like a domino effect to interrupt the others, and vice versa when correcting the imbalances.
Selective Serotonin Reuptake Inhibitors (SSRIs) prevent the reabsorption of serotonin, so that higher levels are present in the brain, making it more conducive to passing messages between nerve cells. This has also been shown to raise levels of norepinephrine, apparently because the two are linked. Therefore, norepinephrine levels can be controlled indirectly by controlling serotonin levels. The difficulty comes in finding a balance. Norepinephrine, when present at high levels, causes anxiety, which can cause physical symptoms of depression. When it is in too short supply, the messaging system in the brain is unable to function at an adequate level, which can also cause depression. Ok, so what does all this say about touch?
Scientifically speaking, what happens when you are touched?
Now that we understand the chemistry of depression, we can look at the chemistry of touch and see where that leads us. The Touch Research Institute at the University of Miami reports that massage therapy given to 52 depressed hospitalized children for 30 minutes a day, 5 days a week, resulted in a lowering of norepinephrine and cortisol (a stress hormone) levels. The lowering of norepinephrine levels was a positive reaction in this case because it meant the children's levels were returning to a healthy balance: they were under less stress. The control group for this experiment was shown relaxation videos rather than receiving massages, and did not experience any of the benefits the other children did. Touch, hugging in particular, has been found to increase hemoglobin and promote oxygen flow (5). Similar experiments showed touch therapy to be effective at reducing the severity and frequency of migraines, aiding in the healing of physical injuries, and improving the survival rate of premature infants (10).
Touch is also linked to increased levels of serotonin, and a hug is a natural way to increase the dopamine levels in your body (8). The need for touch is something that every human feels from the moment they are born. The environment of the womb is one that leads to a need to feel connections with other humans after birth, much as we were all once connected to our mothers. Studies have shown a positive correlation between breast-feeding (the first and most important way a mother-child relationship is established) and mental and physical health later in life (8). As is now apparent, touch helps to induce feelings of worth, love, acceptance and connection to others, which also means that chemical levels in the brain are stabilized and functioning. Touch therapy is the essential third component to combating depression; in some cases, perhaps, an alternative to medication.
If depression is linked to chemical imbalances, why do we still need therapy? Why does touch matter when a pill can produce the same chemical effect?
Depression can sometimes be dealt with effectively using therapy or medication, but often, benefits are lost. "In psychoanalytic therapy, people customarily spend lots of time talking about their feelings rather than experiencing them. They do not learn to integrate their body and emotions with their intellect (12)." The most commonly recommended method of dealing with depression is called combination therapy, and involves both medication and psychotherapy (4). This is generally found to be very successful because it allows for treatment of both the emotional and chemical aspects of depression. However, just as the experiments at the Touch Research Institute showed, there is something special about touch which calms and enriches our lives. Medication is something that can take a long time to work, and it needs to be tailored to the person receiving it. This process can be an additional physical, mental and emotional tax on the person receiving it; even with therapy, many people find medication difficult to stay on. Touch is something stable which can be offered during the upheaval.
Why is touch therapy not a well-known treatment for depression?
There is widespread concern and fear in professional relationships of client abuse, most often of the sexual kind. This concern is certainly well-founded, as in one self-reporting survey, 16.8% of professionals responding admitted to having sexual contact with at least one client (7). Unwanted sexual contact is always emotionally damaging to the victim, but can be heightened by the client/professional power dynamic. In these cases, there is a definite power advantage in the corner of the professional; the one who is supposed to help, supposed to be trustworthy. A betrayal by such a person can be doubly damaging, and lead to emotional scarring and the closing off of the victim to other relationships (7). For this reason, it usually takes a personal recommendation from a professional for someone to seek touch therapy. A professional might be afraid to suggest it because misinterpretation of their motives could lead to a lawsuit.
The knowledge of such client abuse is, slowly but surely, becoming widespread, and it is both accepted and encouraged to speak up against this kind of abuse. Victims can see examples of others coming forward without being blamed for the actions of their abusers. Fortunately, the laws regarding client/professional relationships are conducive to protecting others from victimization. The American Psychiatric Association suspends or expels 12 professionals each year for misconduct by the use of a peer review board (7). This board, while designed for the express purpose of preventing and punishing professional malefactions, is often accused of downplaying the extent or nature of client/professional relations. Concerns about bad publicity and public distrust often keep public law enforcement out of these cases (1).
The disadvantages of these laws, and of the actions of those who have laid the foundations for them, are severe, however. The practice of psychiatry has always focused on the ability of the patient to speak about and communicate their emotions with the aid of nothing but the neutral prodding of the therapist. Unfortunately, this, paired with the widely publicized cases of client abuse, places restrictions on the conduct of the psychiatrist, even when motivated by compassion or concern. A therapist cannot become even remotely personal with a client; giving a hug or holding a hand puts them in danger of a lawsuit, and consequently, therapy (even paired with medication) does not satisfy all the needs of the client.
Why doesn't it work for everyone?
Touch is something that may not work for some people for a variety of different reasons. At the root of most is a lack of good experience with touch, whether from abuse, neglect or simply a dislike for physical contact without a known reason (6). In these cases, the best way for family and friends to show support is by connecting emotionally to the person, not by forcing touch on them. Some support groups, such as GLOW, focus on touch as a way to heal. This, unfortunately, can be very overwhelming for someone who is uncomfortable. The important thing to remember is that many ways of getting support are non-physical, and respect for the person's wishes is most important. In time, touch may be something a person can learn to enjoy.
In conclusion, depression is an illness that happens for many different reasons. It is nobody's "fault", and the victim will not just "snap out of it". Medication and psychotherapy are the most common methods used to treat depression, but the introduction of touch therapy is a crucial element when one is treating the whole person. Touch is our most basic sense. It is with us from the time we are in the womb until our lives are over. Touch can make us feel alive, connected, and cared for, chemically restoring a balance within our brains that helps us to function. Although touch has been proven very effective therapy, it is also something approached with caution by many. If a person's depression is caused by abuse, touch may be more detrimental than beneficial, and is considered inappropriate in a professional setting. Touch therapy allows those around the depressed person to contribute actively to their therapy- love and support from friends and family is free, and it really might be the best medicine!
1) Church of Scientology International , Psychiatrists and Sexual Abuse
2) National Institute of Mental Health website , Depression
3) Serendip , Depression...Or(better?) Thinking about Mood
4) Paxil CR website , Depression, your treatment options
5) Healing, Helpful, Heartfelt- HUGS , an ode to hugging
6) Hopper, Jim , Child Abuse: Statistics, Research, and Resources
7) Kolsby, Gordon, Robin and Shore , Sexual Abuse in Professional Relationships
8) Touch the Future , Breastfeeding: Brain Nutrients in Brain Development for Human Love and Peace
9) Wing of Madness , Reflections on Depression
10) Rigby, Judy , The Importance of Touch
11) About.com , The Antidepressant Waiting Game
12) Silver, Nina , The Biology of Passion: A Reichian View of Sex and Love
13) Women-wise.com , Holistic Nutrition: Food and Mood
It's a... ? Name: Lauren Fri Date: 2002-12-21 08:58:29 Link to this Comment: 4200 |
When most babies are born, the doctor holds up the newly-delivered child to exuberant parents declaring the infant's sex with such celebration that the lines s/he utters are oft-quoted on balloons, bibs, and baby shower invitations. Like most prototypical situations, this one is hardly consistent. More often than most people think, babies are born who are not quite male and not quite female. These individuals, who used to be called "hermaphrodites," are now referred to as "intersexuals" or as having an "intersexed" condition. The official definition of an intersexual is a person "born with sex chromosomes, external genitalia, or an internal reproductive system that is not considered 'standard' for either male or female" (1). The major problem of intersexuality is not a result of any sort of medical complications that arise from an unaltered intersexed condition. Rather, intersexuals' main obstacle is that intersexuality lacks a place in our society's current climate. Everything in our society, from huge issues like gender roles to seemingly small issues like public bathrooms, is based upon a rigid binary gender system. Thus, babies with ambiguous genitalia or sex are assigned a sex at birth, either male or female. The implications of this assignment are far-reaching, and leave many intersexed individuals confused and deeply troubled through much of their childhood and adult lives. The problem of intersexuality within society does not lie within the individuals themselves, but rather within the culture in which they are forced to exist and the unrealistic standards to which they are forced to conform.
Most minorities are ostensibly protected within American society. Anti-discrimination laws protect those in racial, ethnic, and, just recently, sexual minorities. The Americans with Disabilities Act broke ground in accommodating those with physical disabilities, allowing them to fully participate in all aspects of society. Anti-discrimination laws and the Americans with Disabilities Act gave previously marginalized groups the rights they deserved as people living in America. It has been estimated that individuals with some intersexed condition make up approximately 1% of the population, and that as many as two in every 1000 newborns receive surgery to "normalize" genitalia (2). Every day, five children have their genitals mutilated in the United States alone (7). According to Dr. Anne Fausto-Sterling, 1.7% of the population has some degree of intersexuality (4). Why is this population, which is significantly larger than some politically-recognized minority groups, consistently ignored and purposefully outcast from so-called mainstream life?
Before our society can fully understand and accept intersexuality, we must equip ourselves with the language we need to discuss intersexuals and their unique anatomic, reproductive, and chromosomal conditions. Since intersexuals are not definitively male or female in the traditional senses of the words, scrambling for pronouns can cause unnecessary distress. Recently, the pronouns zie, zir (pronounced tze/tzer) have been suggested, mainly by transgender activists. These pronouns are also of use to intersexed individuals who were assigned a m/f sex at birth but are trying to regain their spot on the gender continuum. Many people are not familiar with the vocabulary of intersex/intersexed/intersexual/intersexuality and instead still use the term "hermaphrodite." When discussing intersexuality, "language matters," and hermaphrodite is no longer an acceptable term for use by those outside the immediate intersex community (3). While hermaphrodite is a word that has been used to describe the intersexed condition throughout history, "many intersex activists reject this word due to the stigmatization arising from its mythical roots and the abuse that medical professionals inflicted on them under this label." Another term that many intersex activists balk at is "ambiguous genitalia." While this term is still widely used, intersex activists contend that it is outdated since "the ambiguity is with the society's definition of male and female rather than [with intersexed] bodies" (3). The so-called "ambiguous genitalia" of many intersexuals are simply under-developed genitalia, stuck in a phase of genital development that most people pass through. Some intersexed individuals have genital folds. Genital folds are "common to both males and females early in development. In males the genital folds develop into the scrotum and in females develop into the labia majora" (5). Other intersexuals display cases of urethral folds. Urethral folds are "common to both males and females early in development, in males the urethral folds develop into the urethra and corpora and in females into the labia minora" (5). Once non-intersexed individuals understand that so-called "ambiguous genitalia" are simply an equally valid variation of their own genitalia, and once they can discuss these differences as differences and not deficiencies, intersexuals can move closer to being accepted by the mainstream, at least lexically.
Once lexical steps toward acceptance are taken, the issues of human interaction and societal privilege remain. Often people are not told of their intersexual condition until they are in their late teens. Others are not told until they are in the midst of a gender crisis so extreme that truth is the only option. Sometimes, a "child is left physically damaged, and in an emotional limbo without access to information about what has happened to them" (8). Some intersexed individuals do not need to be told of their condition; their realization comes with the understanding that their body is "different" from the bodies of other boys and girls, different from sex ed class diagrams, different from porn, different from any example available in the mainstream media. The results of this realization or admission can have "traumatic repercussions... in a culture which insists on believing that sex anatomy is a dichotomy, with male and female conceived of as so different as to be nearly different species." Before any true progress can be made, there needs to be more open dialogue. There has to be dialogue beyond the whispers in hospital rooms and the sobbing in dorm rooms.
The "conspiracy of silence", the policy of pretending that intersexuality has been medically eliminated, in fact simply exacerbates the predicament of the intersexual adolescent or young adult who knows that s/he is different, whose genitals have often been mutilated by "normalizing" plastic surgery, whose sexual functioning has been severely impaired, and whose treatment history has made clear that acknowledgment or discussion of intersexuality violates a cultural and a family taboo. (8)
For intersexuality to truly be accepted as a valid part of the gender continuum, there needs to be a complete reeducation of society. Parents, children, doctors, educators, and all people need to be made aware of the issues facing intersexuals. They need to know not only of the horror that takes place in hospitals, but of the plethora of issues, both physical and psychological, that arise in intersexed adults. Healthcare professionals need to look into ISNA's request to leave "ambiguous genitals" unaltered (9). Some people are skeptical of the idea that a child can be raised without a clear-cut gender, but the gender re-assignment that takes place today clearly is not working. Does it need to be a BOY or GIRL? How about holding up that new bit of life in the delivery room and simply declaring, "It's a person!"
Vaccinations: Time to Weigh the Risks Name: Chelsea W. Date: 2002-12-22 01:19:00 Link to this Comment: 4203 |
Vaccinations are often touted as important and advantageous scientific developments, and, in some respects, they may be. However, many of the negative aspects and potential risks associated with vaccines are often not widely discussed. Here, I will endeavor to explore a bit of each.
Biology Behind Vaccines
Vaccines typically operate by exposing your body to weakened versions of the pathogens which cause the disease or, in some instances, to inactivated versions (1). The body then uses these weakened or inactivated pathogens to create antibodies, which it retains in order to fight any later exposure to the true pathogen (1).
The "Good Stuff" about Vaccination
Vaccination has helped to wipe out or minimize the threat of a large number of diseases (2). Polio, a disease which was once widely feared and is now rarely considered a serious threat, is one example (2). Of course, paradoxically or not, this situation is sometimes used to justify not vaccinating, since the risk of contracting many of the diseases against which one might be vaccinated is now often low, given their low incidence in the population (2).
When Problems Sometimes Arise from Vaccination
112,699 total vaccine adverse reactions have been reported to the FDA through the Vaccine Adverse Event Reporting System, and more than one billion dollars of government funds have been paid to children who were harmed by vaccines (3). Some indications even suggest that there may be links between autism and vaccination (4). More common (and less disputed) are generally short-term reactions to vaccination shots, sometimes including a rash or fever (5). Yet parents are still rarely provided with adequate information with which to weigh the risks of vaccinating their children.
In Conclusion
This is a complicated issue, though one which is becoming increasingly relevant with fears of bio-terror threats - and not just for families. It is important that people are well-informed and able to weigh the pluses and minuses of vaccination, with appropriate information to do so. In many places, "opting out" of childhood vaccination is possible (if complicated in some instances), and it is important that this personal choice of what to put in one's body remains just that - a personal choice.
Beating the Binge Name: Stefanie F Date: 2003-09-29 00:15:09 Link to this Comment: 6662 |
Beirut, Pong, Quarters, Flip Cup, the Name Game, and 7-11 doubles are just a few of the names given to what is quickly becoming the new great American pastime for young people: drinking to excess. College-age students across the country have taken to channeling their energies into the creation of drinking games like these, perhaps without looking at the consequences of such creatively destructive behavior.
In the United States, forty-four percent of persons ages eighteen to twenty-one are enrolled in colleges or universities (1). According to recent statistics released by the Health and Education Center, forty-four percent of college students are categorized as heavy drinkers. Alcohol abuse is one of the biggest issues on college campuses nationwide, but what is it that makes excessive alcohol consumption such a concern in the year 2003?
Excessive alcohol consumption is often known as "binge drinking". Binge drinking is defined as the consumption of five or more alcoholic beverages in a row for men, or four or more for women, on a given occasion (2). Studies show that in addition to the forty-four percent of college students who binge drink, one third of high school seniors also admit to having binged at least once in the two weeks prior to being surveyed. The greatest question posed is this: why does such a destructive activity appeal in particular to this age group?
One might initially assume that all people in this age bracket are equally prone to binge drinking. However, while forty-four percent of college students binge drink, only thirty-four percent of people the same age who are not enrolled in a college or university do so. There may be several reasons why those who are immersed in academic environments are more likely to participate in excessive alcohol consumption.
The effects of alcoholic beverages are incredibly appealing to students who are enrolled in institutions of higher learning. Often these students are thrust into social situations to which they may not be accustomed. Alcohol consumption in many ways makes students feel more comfortable in the new collegiate social scene by creating a false sense of calm or euphoria, and use of alcohol throughout a student's four year college experience often begins during a student's freshman year.
The presence of alcohol on college campuses is overwhelming, and its availability to all students, regardless of whether they are of legal drinking age, is even more surprising. A running joke on most college campuses is that everyone's favorite type of alcohol is either "free" or "cheap". Students of legal drinking age are always more than willing to purchase alcohol for those who are not. Many underage students will also go to great lengths to obtain alcohol, purchasing falsified identification or frequenting establishments near their campus which may have lax serving policies.
Alcohol is a depressant, which causes increased relaxation and decreased inhibition. Alcohol absorption begins immediately. The tissue in the mouth absorbs a very small percentage of the beverage when it is first consumed. Around twenty percent of the beverage is then absorbed by the stomach, and the remainder is absorbed by the small intestine, which distributes the alcohol throughout the body (2). The rate of absorption of the beverage is dependent on the concentration of the alcohol consumed, the type of drink, and whether the stomach is full or not. Carbonated beverages tend to intoxicate more quickly because they speed the process of absorption. Conversely, having a substantial meal will slow down the process of absorption.
The kidneys and lungs together expel about 10% of the alcohol consumed, and the liver has the task of breaking down the remaining alcohol into acetic acid. The body is only able to eliminate about 0.5 oz of alcohol, the amount in one shot, one glass of wine, or one twelve-ounce can of beer, per hour (3). Therefore, by definition, a binge-drinking woman has consumed at least four times the amount of alcohol her body can eliminate in an hour. The altered state of mind caused by overindulgence can lead to any number of dangerous, potentially life-threatening situations.
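A minimal sketch of the arithmetic behind the last two paragraphs, assuming the figures cited above (a 0.5 oz "standard drink," roughly 0.5 oz of alcohol eliminated per hour, and the four/five-drink binge thresholds); real absorption and elimination rates vary from person to person, so this is an illustration, not a guideline.

```python
# Rough arithmetic behind the "binge" definition, using the figures cited above.
# Illustrative only: real absorption and elimination vary widely between people.

STANDARD_DRINK_OZ = 0.5        # ounces of alcohol in one shot, glass of wine, or 12-oz beer
ELIMINATION_OZ_PER_HOUR = 0.5  # roughly what the body can clear per hour, per source (3)

def is_binge(drinks: int, sex: str) -> bool:
    """Five or more drinks in a row for men, four or more for women (source 2)."""
    threshold = 5 if sex == "male" else 4
    return drinks >= threshold

def hours_to_clear(drinks: int) -> float:
    """Hours needed to eliminate the alcohol from a given number of standard drinks."""
    return drinks * STANDARD_DRINK_OZ / ELIMINATION_OZ_PER_HOUR

if __name__ == "__main__":
    for drinks, sex in [(4, "female"), (5, "male"), (2, "female")]:
        print(f"{drinks} drinks ({sex}): binge={is_binge(drinks, sex)}, "
              f"~{hours_to_clear(drinks):.0f} hours to clear")
```

On these numbers, a woman at the binge threshold has taken in about four hours' worth of alcohol at once, which is the comparison the paragraph above is drawing.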
Studies show that binge drinking is the cause of 1,400 deaths, over 500,000 injuries, and 70,000 cases of sexual assault/date rape each year (4). In addition to such serious personal risk, students under the influence also negatively affect their own educations and the educations of others by causing disruptions in both the academic and residential spheres of college and universities.
At the beginning of this paper I posed the question, "what is it that makes excessive alcohol consumption such a societal concern in the year 2003?" I think that the sheer number of articles and studies I found, presented by both public and private organizations, answers this question; people are finally noticing a potential problem. These statistics speak for themselves. Collegiate binge drinking is an issue which must be addressed by colleges and universities in the United States. However, there is no evidence that a person who binge drinks in college will continue binge drinking after graduation. Certainly, some students continue alcohol abuse after graduation, but a predisposition for that condition should be taken into consideration, and I would venture to say that it is a small percentage of students who suffer from alcoholism and alcohol abuse later in life. I believe that this particular age group is prone to rebellion and experimentation. Some propose that lowering the legal drinking age to eighteen once again would remedy the situation. However, I believe that carefree behavior and, to a certain extent, irresponsibility are inherent to this particular age group, and are merely a part of human maturation.
1) United States Census Bureau
Persistent Resistant Germs Name: Rochelle M Date: 2003-09-29 19:50:32 Link to this Comment: 6687 |
"At the dawn of a new millennium, humanity is faced with another crisis. Formerly curable diseases... are now arrayed in the increasingly impenetrable armour of antimicrobial resistance."
- Director General of the World Health Organization
After the discovery of penicillin and streptomycin, researchers soon began to observe bacteria surviving treatment (3). One of the things observed was that Staphylococcus aureus developed cell walls that became increasingly resistant to penicillin. This meant that if the first course of treatment did not kill off all of the parent cells, the surviving bacteria's offspring would come back and multiply. These offspring would have a stronger resistance, and it would be much more difficult to kill them off. Bacteria have the ability to change their cell walls in order to protect themselves from antibiotics. They also exchange genes among themselves. Because of these abilities, various types of bacteria have developed immunity to drugs that are commonly used to treat diseases.
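The regrowth-and-resistance pattern described above can be illustrated with a small toy simulation. All of the numbers below (kill rates, growth factor, starting populations) are invented for illustration; this is a sketch of the selection logic, not a model of any real infection or drug.

```python
# Toy illustration of how incomplete antibiotic treatment enriches resistant bacteria.
# All numbers are invented for illustration; this is not a clinical model.

def treat_and_regrow(sensitive: float, resistant: float,
                     kill_sensitive: float, kill_resistant: float,
                     growth_factor: float) -> tuple[float, float]:
    """One round of treatment followed by regrowth of the survivors."""
    sensitive_survivors = sensitive * (1.0 - kill_sensitive)
    resistant_survivors = resistant * (1.0 - kill_resistant)
    return (sensitive_survivors * growth_factor,
            resistant_survivors * growth_factor)

if __name__ == "__main__":
    sensitive, resistant = 1_000_000.0, 10.0   # start with a tiny resistant minority
    for round_number in range(1, 5):
        sensitive, resistant = treat_and_regrow(
            sensitive, resistant,
            kill_sensitive=0.99,   # drug kills almost all sensitive cells
            kill_resistant=0.20,   # but far fewer of the resistant ones
            growth_factor=50.0,    # survivors multiply between rounds
        )
        share = resistant / (sensitive + resistant)
        print(f"after round {round_number}: resistant share = {share:.1%}")
```

Because each incomplete round of treatment kills mostly the sensitive cells, the resistant minority makes up a growing share of each new generation, which is exactly the kind of comeback the paragraph describes.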
Have no fear, though; there are many steps that one can take in order to maintain one's health. The first, of course, is proper nutrition--you have to eat healthy to be healthy! Observe personal hygiene, such as washing hands regularly. One should also try to get plenty of sleep; it is when we sleep that our body recuperates and rejuvenates itself. Also, should one be traveling to a country where disease transmission through insects is at a high rate, make sure not to do a lot of early morning or nighttime outdoor activity (4).
2 Streptomycin is an antibiotic that is used to treat tuberculosis. "Tuberculosis facts for Parents", http://pediatrics.about.com/library/bltuberculosis.htm?terms=streptomycin#Disease_1 ; The Access Project, http://www.atdn.org/access/drugs/stre.html . These websites provide drug descriptions and ways of prevention.
3 This information was found in the article, "Those resilient germs, how they rebound"
4 These suggestions were found in the article, "When germs will not harm anyone" in the Awake! Magazine.
Extroversion, Introversion and the Brain Name: Natalya Kr Date: 2003-09-29 23:26:02 Link to this Comment: 6693 |
The terms "extrovert" and "introvert" are often used to describe individuals' interpersonal relations, but what do these terms mean precisely, and is there a neurobiological basis for these personality traits?
The terms originated from psychologist Carl Jung's theory of personality. Jung saw the extrovert as directed toward the outside world and the introvert as directed toward the self (1). He characterized extroverts as being energized by being around other people and drained by being alone and introverts as the opposite (1). He recognized that most people shared characteristics of both introversion and extroversion and fell somewhere along a continuum from extreme extroversion to extreme introversion (1).
Richard Depue and Paul Collins, professors of psychology at Cornell University and the University of Oregon, define extroversion as having two central characteristics: interpersonal engagement and impulsivity (2). Interpersonal engagement includes the characteristics of affiliation and agency. Affiliation means enjoying and being receptive to the company of others; agency means seeking social dominance and leadership roles and being motivated to achieve goals (2). They also closely link extroversion to "positive affect," which includes general positive feelings and motivation (2). Extroverts, they claim, are more sensitive to reward than punishment, whereas introverts are more sensitive to punishment than reward (2). According to Depue, "When our dopamine system is activated, we are more positive, excited, and eager to go after goals or rewards, such as food, sex, money, education, or professional achievements"; that is, when our dopamine system is activated, we are more extroverted, or exhibit more "positive emotionality" (7).
What Depue and Collins refer to as "positive emotionality" is not precisely what Carl Jung referred to as extroversion. Positive emotionality is the willingness to pursue rewards, to be more stimulated by reward than by punishment. Extroversion, according to Carl Jung, is enjoying the company of others, being oriented toward the external world, and being energized by interactions with other people. In the vernacular, an extrovert is someone who has many friends, seems to be around people all the time, and is socially dominant. While positive emotionality and extroversion describe two separate attributes, they are fundamentally related. The desire to pursue goals and being more sensitive to rewards than punishments is an integral part of enjoying relationships with other people, building large networks of friends, and being socially dominant. Other people, or groups of people, can be punishing of those who attempt to befriend them. One of the possible punishments an individual risks by attempting to form a friendship is social rejection. For someone with low positive emotionality, this punishment would be enough to deter them from even attempting to form new relationships. Similarly, to achieve social dominance, one must risk losing face in front of one's peers for the possibility of appearing confident and original. Again, there is a risk of social rejection, perhaps a greater risk than in the first scenario, but the reward is also greater: the admiration of one's peer group. Someone with high positive emotionality would be willing to take this risk whereas someone with low positive emotionality would not.
In their article, "Neurobiology of the Structure of Personality: Dopamine, Facilitation of Incentive Motivation, and Extraversion", Depue and Collins argue that there is a strong case for a neurobiological basis of extraverted behavior, because it closely resembles a mammalian approach system based on positive incentive motivation that has been studied in animals (2). Animal research has provided evidence to support the theory that a series of neurological interactions is responsible for variable levels of reaction to an incentive stimulus. First, the incentive is recognized in a series of signals between the medial orbital cortex (part of the prefrontal cortex), the amygdala (the emotional control center), and the hippocampus (the memory center) (3) (2). Next, the brain evaluates the intensity of the incentive stimulus in a series of interactions between the nucleus accumbens, the ventral pallidum, and the ventral tegmental area dopamine projection system (2). This creates an incentive motivational state which can motivate a response by the motor system (2). Differences in individuals' incentive processing are thought to be due to differences in the ventral tegmental dopamine projections, which are directly responsible for the perceived intensity of the incentive stimulus (2). Genes and past experience are the factors researchers believe most affect a person's dopamine projections, and so the perceived intensity of incentive stimuli and the person's motivation to pursue the incentive: their degree of extroversion.
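To make the role of this dopamine "gain" easier to see, here is a deliberately oversimplified sketch in Python. It is not Depue and Collins's actual model; the functions, numbers, and threshold below are invented for illustration only. The sketch captures a single idea: the same stimulus can either clear or fail to clear the threshold for approach behavior, depending on how strongly a hypothetical dopamine projection scales its perceived intensity.

def perceived_intensity(stimulus_salience, dopamine_gain):
    # Perceived incentive intensity, modeled as salience scaled by the
    # strength of a hypothetical dopamine projection "gain".
    return stimulus_salience * dopamine_gain

def approaches(stimulus_salience, dopamine_gain, threshold=1.0):
    # The motor system "approaches" only if the modeled incentive
    # motivation clears a fixed threshold.
    return perceived_intensity(stimulus_salience, dopamine_gain) >= threshold

weak_social_cue = 0.6  # arbitrary, made-up salience value
print(approaches(weak_social_cue, dopamine_gain=2.0))  # True:  with high gain, a weak cue is enough
print(approaches(weak_social_cue, dopamine_gain=1.0))  # False: low gain needs a stronger cue
print(approaches(1.8, dopamine_gain=1.0))              # True:  a very strong stimulus still works

On this toy reading, a "high extrovert" corresponds to a large gain, for whom even weak incentive cues trigger approach, while a "low extrovert" requires much stronger stimuli, which matches Depue's description quoted further below.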
Drugs like cocaine, alcohol, and Prozac all affect these processes, and with them an individual's degree of extroversion. They can artificially correct an ineffective dopamine system and make someone feel more sociable or more motivated to pursue a goal. Low levels of serotonin, which are correlated with depression, may make people more responsive to dopamine and more susceptible to the use of dopamine-stimulating drugs such as cocaine, alcohol, amphetamines, opiates, and nicotine (7).
Impulsivity, which Depue and Collins link to extraversion, can in its extreme forms manifest as attention deficit/hyperactivity disorder, pathological gambling, intermittent explosive disorder, kleptomania, pyromania, trichotillomania, self-mutilation, and sexual impulsivity, as well as borderline personality disorder and antisocial personality disorder (4). Jennifer Greenberg and Eric Hollander, M.D., in their article "Brain Function and Impulsive Disorders", characterize impulsivity as "the failure to resist an impulse, drive or temptation that is harmful to oneself or others" (4). One can see why Depue and Collins see impulsivity as linked with positive emotionality: this definition of impulsivity is almost the same as their definition of positive emotionality (more sensitivity to reward than punishment). The only addition is the inability to determine when the punishment outweighs the reward. According to Depue, "the extreme extrovert, then, is someone who has high dopamine reactivity and, as a result, easily binds rewarding cues to incentive motivation. That person will appear full of positive emotion and highly active in approaching rewarding stimuli and goals. The low extrovert will find it difficult to be so motivated and will require very strong stimuli to engage in rewarding activities" (6). It is interesting to consider that the same quality that in moderation is viewed as ambition is, in excess, considered a failure to resist an impulse, drive, or temptation that is harmful to oneself or others. Clearly, this extroversion/impulsivity/incentive motivation is an influential trait which must be kept in balance to maintain emotional well-being.
The brain structures that research indicates are active in controlling impulsivity are the orbitofrontal cortex, the nucleus accumbens, and the amygdala, many of the same regions that mediate extroversion (4). Damage to these structures often results in impaired decision-making and increased impulsivity (4). In their article in the Journal of Psychiatric Research, David S. Janowsky, Shirley Morter, and Liyi Hong relate novelty seeking and impulsivity to an increased risk of suicidality, and they correlate depression with an elevated degree of introversion (5). Impulsivity is linked to an increased risk of overt suicidality because it allows patients to avoid considering the long-term consequences of their actions (5). Research has indicated that introversion decreases as depression improves and that continued introversion is associated with an increased risk of relapse into depression (5). Even recovered depressed patients scored lower (more introverted) than never-ill relatives or normal controls on the Maudsley Personality Inventory Extroversion Scale (5). Janowsky et al. infer that the social isolation associated with introversion may compound the depressed patient's need for a social support network (5). Still, the connection between introversion and depression remains ambiguous because other research has shown no correlation between them (5).
An interesting question that arises is to what extent these traits of introversion and extroversion are genetic and to what extent they are learned through interaction with one's environment. Depue claims that genetics likely account for 50-70% of the differences in individuals' personality traits. He says, "The stability of emotional traits suggests that the extent of the interaction between environment and neurobiology is in part determined by the latter. Experience is interpreted through the variable of biology" (6). While experience may modify our response to incentives, our hardwiring, in terms of dopamine production and absorption, remains intact.
It seems likely that past experience would play a larger role in incentive motivation than Depue indicates. Experiencing social rejection would seem to discourage further risk-taking in the future, but perhaps, for those with high positive emotionality, and the effective dopamine production and absorption system that it implies, the reward for succeeding is so great that they will continue the behavior even if they fail repeatedly. Perhaps there is a neurobiological basis for having "tough skin" or "thin skin"; for being resilient or oversensitive.
What happens if someone has the neurobiological make-up of high positive emotionality and then suffers a traumatic, punishing experience that damages their self-confidence? Do they become a frustrated extrovert? According to Depue, such a person would be in the 30-50% of the population whose personality is not directly related to genetics and the functioning of their dopamine reuptake receptors. What happens to this person? Do they suffer from cognitive dissonance, wanting to take more risks but unnaturally wary because of what experience has taught them? Does their brain chemistry alter to adapt to their behavior, does their behavior eventually adapt to fit their brain chemistry in spite of past experience, or do they suffer internal turmoil and rely on drugs to free themselves of inhibitions and allow themselves to pursue rewards as they naturally would have done?
Neurobiologically, drugs and alcohol add something new to the mix. Do introverts or suffering extroverts self-medicate with them, or get a prescription for Prozac? First of all, all of these scholars have treated introversion as something of a disease to be medicated, which seems strange considering that Jung appeared to have a fairly egalitarian approach to introversion and extroversion. He treats them as different lifestyles rather than as a disease and its absence. He defines introversion as enjoying solitude and the inner life of ideas and imagination; hardly a negative description. Depue and Collins probably coined the term "positive emotionality" because they wanted to describe the quality which those typically thought of as extroverts tend to possess in a social context, but which those termed introverts may also have and manifest in different ways: the desire and ability to achieve goals. Janowsky, on the other hand, refers to introversion as a trait marker for depression, one which decreases as depression improves, not as a personality trait of healthy, well-adjusted individuals.
What is introversion, and is it a bad thing? In the Jungian way of thinking most of us are at least somewhat introverted, and a good thing too, or else we would rarely get any studying done. Low positive emotionality is not really the same thing as introversion, although an introvert could have low emotionality in social settings and be more demoralized by fear of rejection than motivated by the prize of friendship or social dominance. This type of introversion is more than likely linked to depression because it does deprive a person of a necessary source of emotional release: the social support network. On the other hand, being introverted can mean that you keep a small, close circle of friends, which would definitely constitute a social support network, and there is no reason to believe that this is unhealthy or even abnormal. The term introvert implies that one is emotionally satisfied by a mostly internal life. The only really conflictual state seems to be the repressed extrovert, one who would like to forge more social relationships but is too intimidated to do so, but this is most probably not the case with all those classified as introverts.
The terms "extrovert" and "introvert" may be inherently problematic. They are so well established in the vernacular now that they have connotations that were probably never intended, for example that the extrovert is always the "life of the party" or that introverts are social outcasts. This may be the reason Depue and Collins chose to use the term "positive emotionality." Using a new term gave them a fresh opportunity to be absolutely clear about what they meant. Their term, however, still does have relevance in relation to the notion of extroversion because extroversion depends on some degree of positive emotionality.
The evidence for a neurobiological basis for all of these traits is strong. Animal research has supported the idea of a network of brain structures communicating signals in order to process and respond to incentives in the environment. In particular, there is convincing evidence that the production and absorption of the neurotransmitter dopamine affects the perceived intensity of an incentive stimulus, and so how motivated the subject is to pursue it. The changes that occur within the dopamine system and their effect on personality are easily observable in people under the influence of drugs that stimulate the dopamine system, like cocaine or alcohol: their fears and anxieties vanish and they are able to pursue goals (although perhaps not higher-level ones) in an uninhibited fashion. In a subtler way, the same effect is observable in people taking antidepressants: they are no longer dissuaded from pursuing goals by fear of negative consequences; they regain an ability to "look on the bright side" and focus on the positive aspects of achieving goals rather than the negative repercussions of failing to achieve them. It would be difficult to dispute that there is a relationship between positivity, goal-oriented behavior, and the dopamine system, but the reason why the dopamine system has this effect on personality remains unknown, and the precise interactions between the dopamine system and the rest of the brain and body, and the exact effect on behavioral patterns, are yet to be discovered.
1) 1 Up Info: Extroversion and Introversion, Psychology and Psychiatry
2) Depue, Richard and Collins, Paul. "Neurobiology of the Structure of Personality: Dopamine, Facilitation of Incentive Motivation, and Extraversion."
3) The American Heritage Dictionary of the English Language, Third Edition. Houghton Mifflin Company, New York: 1996.
4) Greenberg, Jennifer and Hollander, Eric. Brain Function and Impulsive Disorders. Psychiatric Times. March 1, 2003.
5) Janowsky, David S.; Morter, Shirley; and Liyi Hong. "Relationship of Myers Briggs type indicator personality characteristics to suicidality in affective disorder patients." Journal of Psychiatric Research, Volume 36, Issue 1, January-February 2002, Pages 33-39.
6) Encyclopedia Britannica Online: Development and Life Course: It's All in Your Head.
7) Cornell University: Science News: "Cornell Psychologist finds chemical evidence for a personality trait and happiness".
8) Web Paper: "Personality: a Neurobiological Model of Extraversion" by David Mintzer
Extroversion, Introversion and the Brain Name: Natalya Kr Date: 2003-09-29 23:30:00 Link to this Comment: 6696 |
The terms "extrovert" and "introvert" are often used to describe individuals' interpersonal relations, but what do these terms mean precisely, and is there a neurobiological basis for these personality traits?
The terms originated from psychologist Carl Jung's theory of personality. Jung saw the extrovert as directed toward the outside world and the introvert as directed toward the self (1). He characterized extroverts as being energized by being around other people and drained by being alone and introverts as the opposite (1). He recognized that most people shared characteristics of both introversion and extroversion and fell somewhere along a continuum from extreme extroversion to extreme introversion (1).
Richard Depue and Paul Collins, professors of psychology at Cornell University and University of Oregon, define extroversion as having two central characteristics: interpersonal engagement and impulsivity (2). Interpersonal engagement includes the characteristics of affiliation and agency. Affiliation means enjoying and being receptive to the company of others and agency means seeking social dominance and leadership roles, and being motivated to achieve goals (2). They also closely link extroversion to "positive affect" which includes general positive feelings and motivation (2). Extroverts, they claim are more sensitive to reward than punishment whereas introverts are more sensitive to punishment than reward (2). According to Depue, "When our dopamine system is activated, we are more positive, excited, and eager to go after goals or rewards, such as food, sex, money, education, or professional achievements", that is, when our dopamine system is activated, we are more extroverted, or exhibit more "positive emotionality" (7).
What Depue and Collins refer to as "positive emotionality" is not precisely what Carl Jung referred to as extroversion. Positive emotionality is the willingness to pursue rewards, to be more stimulated by reward than punishment. Extroversion, according to Carl Jung is enjoying the company or others and being oriented toward the external world and energized by interactions with other people. In the vernacular, an extrovert is someone who has many friends, seems to be around people all the time, and is socially dominant. While positive emotionality and extroversion describe two separate attributes, they are fundamentally related. The desire to pursue goals and being more sensitive to rewards than punishments is an integral part to enjoying relationships with other people, building large networks of friends and being socially dominant. Other people, or groups of people can be punishing of those who attempt to befriend them. One of the possible punishments an individuals risks by attempting to form a friendship is social rejection. For someone with low positive emotionality, this punishment would be enough to deter them from even attempting to form new relationships. Similarly, to achieve social dominance, one must risk losing face in front of one's peers for the possibility of appearing confident and original. Again, there is a risk of social rejection, perhaps a greater risk than in the first scenario, but the reward is also greater: the admiration of one's peer group. Someone with high positive emotionality would be willing to take this risk whereas someone with low positive emotionality would not.
In their article, "Neurobiology of the Structure of Personality: Dopamine, Facilitation of Incentive Motivation, and Extraversion", Depue and Collins argue that there is a strong case for a neurobiological basis of extraverted behavior, because it closely resembles a mammalian approach system based on positive incentive motivation which has been studied in animals (2). Animal research has provided evidence to support the theory that a series of neurological interactions are responsible for variable levels of reaction to an incentive stimulus. First, the incentive is recognized in a series of signals between the medial orbital cortex (the eye), the amygdala (the emotional control center) and the hippocampus (memory center) (3) (2). Next the brain evaluates the intensity of the incentive stimuli in a series of interactions between the nucleus accumbens, ventral palladium, and the ventral tegmental area dopamine projection system (2). This creates an incentive motivational state which can be motivate a response by the motor system (2). Differences in individuals incentive processing are thought to be due to differences in the ventral tegmental dopamine projections which are directly responsible for the perceived intensity of the incentive stimulus (2). Genes and past experience are the sources researchers believe most affect a person's dopamine projections and so, the perceived intensity of incentive stimuli and the persons motivation to pursue the incentive: their degree of extroversion.
Drugs like cocaine, alcohol, or prozac, all affect these processes and also an individual's degree of extroversion. They can artificially correct an ineffective dopamine system and make someone feel more sociable or motivated to pursue a goal. Low levels of serotonin, correlated with depression, may make people more responsive to dopamine and more susceptible to dopamine-stimulating drug use such as the use of cocaine, alcohol, amphetamine, opiates, and nicotine (7).
Impulsivity, which Depue and Collins link to extraversion, can in its extreme case cause attention deficit/hyperactivity disorder, pathological gambling, intermittent explosive disorder, kleptomania, pyromania, trichotillomania, self-mutilation, and sexual impulsivity, as well as borderline personality disorder, and antisocial personality disorder (4). Jennifer Greenberg and Eric Hollander, M.D., in their article "Brain Function and Impulsive Disorders" characterize impulsivity as "the failure to resist an impulse, drive or temptation that is harmful to oneself or others" (4). One can see why Depue and Collins see impulsivity as being linked with positive emotionality: this definition of impulsivity is almost the same as their definition of positive emotionality (more sensitivity to reward than punishment). The only addition is the inability to determine when the punishment outweighs the reward. According to Depue, "the extreme extrovert, then, is someone who has high dopamine reactivity and, as a result, easily binds rewarding cues to incentive motivation. That person will appear full of positive emotion and highly active in approaching rewarding stimuli and goals. The low extrovert will find it difficult to be so motivated and will require very strong stimuli to engage in rewarding activities" (6). It is interesting to consider that the same quality that in moderation is looked at as ambition, in excess is considered a failure to resist an impulse, drive or temptation that is harmful to oneself or others. Clearly, this extroversion/impulsivity/incentive motivation is a very influential trait which must be kept in balance to maintain emotional well-being.
The brain structures research has indicated are active in controlling impulsivity are the orbitofrontal cortex, nucleus accumbens, and amygdala regions, many of the same ones that mediate extroversion (4). Damage to these structures often results in impaired decision-making and increased impulsivity (4). In their article in the Journal of Psychiatric Research, David S. Janowsky, Shirley Morter and Liyi Hong relate novelty seeking and impulsivity to increased risk of suicidality and they correlate depression with an elevated degree of introversion (5). Impulsivity is linked to an increased risk of overt suicidality because it allows patients to avoid considering the long-term consequences of their actions (5). Research has indicated that introversion decreases as depression improves and continued introversion is associated with increased risk of relapse into depression (5). Even recovered depressed patients scored lower (more introverted) than never ill relatives or normals on the Maudsley Personality Inventory Extroversion Scale (5). Janowsky et al. infer that the social isolation associated with introversion may compound the depressed patient's need for a social support network (5). Still, the connection between introversion and depression remains ambiguous because other research has shown no correlation between them (5).
An interesting question that arises is to what extent these traits of introversion and extroversion are genetic and to what extent they are learned through interaction with one's environment. Depue claims that genetics likely account for 50-70% of the differences between individuals' personality traits. He says, "The stability of emotional traits suggests that the extent of the interaction between environment and neurobiology is in part determined by the latter. Experience is interpreted through the variable of biology" (6). While experience may modify our response to incentives, our hardwiring, in terms of dopamine production and absorption, remains intact.
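A note on how to read that 50-70% figure: if it refers to heritability in the standard behavioral-genetic sense (an assumption, since the source is not explicit), it is a ratio of variances across a population rather than a fraction of any one person's personality, $h^2 = \mathrm{Var}(G) / (\mathrm{Var}(G) + \mathrm{Var}(E))$, where $\mathrm{Var}(G)$ is the variance in a trait attributable to genetic differences and $\mathrm{Var}(E)$ the variance attributable to environment. Under that reading, a hypothetical $h^2 = 0.6$ would mean that 60% of the spread in extroversion scores among people traces to genetic differences, with the remaining 40% reflecting environment and measurement.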
It seems likely that past experience would play a larger role in incentive motivation than Depue indicates. Experiencing social rejection would seem to discourage risk-taking in the future, but perhaps, for those with high positive emotionality and the effective dopamine production and absorption system that it implies, the reward for succeeding is so great that they will continue the behavior even if they fail repeatedly. Perhaps there is a neurobiological basis for having "thick skin" or "thin skin", for being resilient or oversensitive.
What happens if someone has the neurobiological make-up of a person with high positive emotionality and then suffers a traumatic, punishing experience which damages their self-confidence? Do they become a frustrated extrovert? According to Depue, such a person would be in the 30-50% of the population whose personality is not directly related to genetics and the functioning of their dopamine reuptake receptors. What happens to this person? Do they suffer from cognitive dissonance, wanting to take more risks but unnaturally wary from what experience has taught them? Does their brain chemistry alter to adapt to their behavior, does their behavior eventually adapt to fit their brain chemistry in spite of past experience, or do they suffer internal turmoil and rely on drugs to free themselves of inhibitions and allow themselves to pursue rewards as they would naturally have done?
Neurobiologically, drugs and alcohol add something new to the mix. Do introverts or suffering extroverts self-medicate with them, or get a prescription for Prozac? It is worth noting that all of these scholars have treated introversion as something of a disease to be medicated, which seems strange considering that Jung appeared to have a fairly egalitarian approach to introversion and extroversion. He treats them as different lifestyles rather than as a disease and its absence. He defines introversion as enjoying solitude and the inner life of ideas and imagination; hardly a negative description. Depue and Collins probably came up with the term "positive emotionality" because they wanted to describe the quality which those typically thought of as extroverts tend to possess in a social context, but which those termed introverts may also have and manifest in different ways: the desire and ability to achieve goals. Janowsky, on the other hand, refers to introversion as a trait marker for depression, which decreases as depression improves, not as a personality trait of healthy, well-adjusted individuals.
What is introversion, and is it a bad thing? In the Jungian way of thinking most of us are at least somewhat introverted, and a good thing too, or else we would rarely get any studying done. Low positive emotionality is not really the same thing as introversion, although an introvert could have low positive emotionality in social settings and be more demoralized by fear of rejection than motivated by the prize of friendship or social dominance. This type of introversion is more than likely linked to depression because it does deprive a person of a necessary source of emotional release: the social support network. On the other hand, being introverted can mean that you keep a small, close circle of friends, which would certainly constitute a social support network, and there is no reason to believe that this is unhealthy or even abnormal. The term introvert implies that one is emotionally satisfied by a mostly internal life. The only truly conflicted state seems to be that of the repressed extrovert, one who would like to forge more social relationships but is too intimidated to do so; this is most probably not the case for all those classified as introverts.
The terms "extrovert" and "introvert" may be inherently problematic. They are so well established in the vernacular now that they have connotations that were probably never intended, for example that the extrovert is always the "life of the party" or that introverts are social outcasts. This may be the reason Depue and Collins chose to use the term "positive emotionality." Using a new term gave them a fresh opportunity to be absolutely clear about what they meant. Their term, however, still does have relevance in relation to the notion of extroversion because extroversion depends on some degree of positive emotionality.
The evidence for a neurobiological basis for all of these traits is strong. Animal research has supported the idea of a network of brain structures communicating signals in order to process and respond to incentives in the environment. In particular, there is convincing evidence that the production and absorption of the neurotransmitter dopamine affects the perceived intensity of an incentive stimulus, and so how motivated the subject is to pursue the stimulus. The changes that occur within the dopamine system and their effect on personality are easily observable in people under the influence of drugs that stimulate the dopamine system, like cocaine or alcohol: their fears and anxieties vanish and they are able to pursue goals (although perhaps not higher-level ones) in an uninhibited fashion. In a subtler way, the same effect is observable in people taking antidepressants: they are no longer dissuaded from pursuing goals by fear of negative consequences; they regain an ability to "look on the bright side" and focus on the positive aspects of achieving goals rather than the negative repercussions of failing to achieve them. It would be difficult to dispute that there is a relationship between positivity, goal-oriented behavior, and the dopamine system, but the reason why the dopamine system has this effect on personality remains unknown, and the precise interactions between the dopamine system, the rest of the brain and body, and behavior are yet to be worked out.
1) 1 Up Info: "Extroversion and Introversion," Psychology and Psychiatry.
2) Depue, Richard and Collins, Paul. "Neurobiology of the Structure of Personality: Dopamine, Facilitation of Incentive Motivation, and Extraversion."
3) The American Heritage Dictionary of the English Language, Third Edition. Houghton Mifflin Company, New York: 1996.
4) Greenberg, Jennifer and Hollander, Eric. "Brain Function and Impulsive Disorders." Psychiatric Times, March 1, 2003.
5) Janowsky, David S.; Morter, Shirley; and Hong, Liyi. "Relationship of Myers Briggs Type Indicator personality characteristics to suicidality in affective disorder patients." Journal of Psychiatric Research, Volume 36, Issue 1, January-February 2002, Pages 33-39.
6) Encyclopedia Britannica Online: "Development and Life Course: It's All in Your Head."
7) Cornell University Science News: "Cornell psychologist finds chemical evidence for a personality trait and happiness."
8) Web Paper: "Personality: a Neurobiological Model of Extraversion" by David Mintzer.
EPILEPSY: Name: Anna Katri Date: 2003-10-01 01:45:38 Link to this Comment: 6739 |
There is a myriad of fad diets out there these days: Atkins, the fruit juice diet, the Russian Air Force diet, and the Zone, to name a few. However, the most recent craze is "The Blood Type Diet", based on the book Eat Right 4 Your Type by Doctor Peter D'Adamo. The diet focuses on an individual's genetic makeup (blood type) in determining which foods are best digested. D'Adamo heads up the Institute for Human Individuality (IfHi), which "seeks to foster research in the expanding area of human nutrigenomics. The science of nutrigenomics (naturopathic medicine) seeks to provide a molecular understanding for how common dietary chemicals affect health by altering the expression or structure of an individual's genetic makeup" (1). On the website, the "five basic tenets of nutrigenomics" are listed as:
1. Improper diets are risk factors for disease.
2. Dietary chemicals alter gene expression and/or change genome structure.
3. The degree to which diet influences the balance between healthy and disease states may depend on an individual's genetic makeup.
4. Some diet-regulated genes (and their normal, common variants) are likely to play a role in the onset, incidence, progression, and/or severity of chronic diseases.
5. "Intelligent nutrition" - that is, diets based upon genetics, nutritional requirements and status - prevents and mitigates chronic diseases. (1)
The Blood Type Diet is founded upon the microscopic observation of how the ABO types break down different foods, suggesting that one person's nourishment may be another's poison. The book examines the demographic distributions of the different blood types and proposes that "the variations, strengths and weaknesses of each blood group can be seen as part of humanity's continual process of acclimating to different environmental challenges" (2). D'Adamo asserts that blood groups "evolved as migratory mutations," with type O, the most "ancient" of the ABO group, making up the largest share of the population (40-45%), followed by type A (35-40%), then B (4-11%), with AB the rarest (0-2%). People with type O blood (hunter-gatherers) are encouraged to be carnivores, while type A's can survive solely as vegetarians. Explaining the origin and spread of blood type B, D'Adamo states, "Two basic blood group B population patterns emerged out of the Neolithic revolution in Asia: an agrarian, relatively sedentary population located in the south and east, and the wandering nomadic societies of the north and west" (2). Most Jewish populations have above-average rates of blood type B, and the B group is most frequently found among Asians and eastern Europeans such as Poles, Russians, and Hungarians.
The book stresses that certain blood types are more susceptible to specific diseases than others because of dangerous agglutinating lectins, which attack the bloodstream and lead to disease. Specifically, people of blood type B are more prone to hypoglycemia, stress (type B's show higher than normal cortisol levels in stressful situations), MS, lupus, chronic fatigue syndrome, and autoimmune and nervous disorders. D'Adamo writes that type B's "sophisticated refinement in the evolutionary journey" was "an effort to join together divergent peoples and cultures. Usually type B's can resist the most severe diseases common to modern life" (2), i.e., heart disorders and cancers; however, their systems are more prone to exotic immune system disorders, in this case epilepsy.
About 1% of the world's population is affected by seizures. A person who experiences seizures is not an "epileptic" but rather suffers from the disorder epilepsy. Epilepsy can stem from a chromosomal abnormality or an inherited genetic trait in which "chronic or spontaneous, abnormal and excessive discharge of electrical activity from a collection of neurons arises in the brain as electrical misfirings" (4). The exact cause of epilepsy has yet to be specifically determined, thus characterizing it as an idiopathic disease, or a disease without any real identifiable origin. The electrical misfirings, which arise within the cerebrum, are usually traceable to some form of childhood injury to one or more of the brain's lobes. Via an EEG machine, it has been discovered that seizures seem to originate most often in the temporal lobe, occurring in the gray matter of the brain. The gray matter of the brain is composed of the cell bodies of neurons; the white matter is composed of the axons of neurons, coated with insulation made from fat (hence the white color). The focus is the damaged gray matter, which is abnormally excitable, and when it spontaneously discharges, the result is a seizure.
According to D'Adamo, the B group is prone to magnesium deficiency, which plays a crucial role in this disorder. "Magnesium acts as a catalyst for metabolic machinery in the B's blood type. B's systems are very efficient at assimilating calcium, and thus risk creating an imbalance between their levels of calcium and magnesium" (5). Believe it or not, this seemingly simple imbalance can lead to nervous disorders and many skin conditions (my sister has grand mal seizures and eczema). B's also have severe neurological reactions to vaccinations; because their nervous systems produce an enormous amount of B antigens, when a vaccine is introduced into the system there is a cross-reaction, which, as D'Adamo points out, "causes the body to turn and attack its own tissues. These war-like antibodies think they are protecting their turf. In reality, they destroy their own organs: inciting an inflammatory response" (5).
What exactly happens in the brain when someone has a seizure? The first seizure is directly related to the location of the focus (the damaged gray matter in the brain); with time, the electrical explosion continues to travel rapidly throughout the brain, becoming more pronounced and more dramatic, like a forest fire spreading from tree to tree. This activity spreads along the surface of the brain cells by the sequential opening of tiny pores, which act like channels, permitting small, charged particles of sodium and calcium to enter the nerve cell. This wave of sodium and calcium ions entering nerve cells sequentially along their surfaces leads to electrical excitation. Drugs that block these channels decrease the spread of abnormal electrical activity. Conversely, a lack of calcium and sodium ions, or an imbalance in the system, will cause abnormal electrical activity.
"Balancing the system," is the foundation of "Eat Right For Your Type." Foods such as corn, buckwheat, lentils, peanuts, and sesame seeds affect the efficiency of the metabolic process, resulting in fatigue, fluid retention, and hypoglycemia (severe drop in blood sugar after eating a meal). The gluten found in whole wheat and wheat germ adds to the digestive and distribution problems. One of the "non brain" causes of epilepsy is a disturbed glucose metabolism (often associated with diabetes). Simple sugar used by the brain is an important form of energy. To produce glucose, the body needs insulin. Too much glucose (hyperglycemia) or too little creates the imbalance needed to trigger seizures. One of the key foods B blood types should avoid, D'Adamo says, are beans: lentils, garbanzos, pintos, and black eyed peas. Why? They interfere with the production of insulin.
A second cause of the chronic seizure disorder known as epilepsy is an electrolyte disturbance, which occurs when the levels of salt (i.e., sodium chloride) in the bloodstream fall too low. This can happen when bodily fluids are lost through severe diarrhea or vomiting, or after extended exertion. D'Adamo attributes diarrhea to a nutrient deficiency in essential fatty acids and folic acid (5). To compensate for this, lecithin (a lipid) and choline, serine, and ethanolamine (phospholipid) supplements should be taken, while rye, corn, buckwheat, tomatoes, olives, and adaptogenic herbs (used to increase concentration and memory retention) should be avoided at all costs.
Grand mal seizures, or tonic-clonic seizures, are perhaps the most severe and debilitating over time. To paint a picture of what happens when a person experiences a tonic-clonic seizure, let me take you back to my first day of senior year in high school... Everyone is gathered in the auditorium for an opening day speech by the Headmaster. Mary, my sister, now 16 but 12 at the time, had had a rough morning waking up. She was tired, and my parents forced her to choke down some Farina (warm wheat-meal). It is early morning, and, sitting in the top row, Mary gives a little cry as the air is forced out of her lungs. She slumps in her seat so that her head falls on the boy next to her. Thinking she is playing a trick, he gently pushes her. Mary falls to the ground, unconscious and unresponsive, as her body begins to stiffen - this is referred to as the tonic phase. She begins to jerk - the clonic phase - as the electrical explosion spreads to both sides of her brain. Her breathing slows and stops. She bites her tongue, frothing at the mouth. Her skin turns bluish gray as her air supply is cut off, putting enormous stress on her heart.
This moment can be absolutely terrifying for a family member to watch. Grand mals wreak utter havoc on the body, and often, when the affected person wakes up, she is completely exhausted, feeling as if she has run a marathon. A common misconception about children is that they need excessive amounts of physical exercise. However, D'Adamo points out that stressful situations, fatigue, and unbalanced nutrition have been shown to trigger seizures, and that B blood types should focus more on strengthening and toning exercise than on strenuous physical exertion (substitute yoga for field hockey). Children are most prone to seizures when they wake in the morning, as their bodies desperately need nutrients; what is eaten then is essential. Mary, a B blood type (my brother is also a B and has tonic-clonic seizures), had a bowl of wheat farina, which inhibits the production of insulin. We were in a rush that morning and enormous pressure was on her (she's pokey) to get out the door and off to school. In the car she tried to sleep but was restless, complaining of a headache. Mary also has very low blood pressure and had not had any juice to drink for breakfast; instead, she had a glass of milk, perhaps causing an imbalance of electrolytes, or salt ions, in her bloodstream. Because B's are very efficient at assimilating calcium, they risk creating an imbalance between calcium and magnesium in their systems, magnesium being the chief catalyst for the metabolic machinery in B blood types. The summation of observations here? If there is not enough magnesium in a B's digestive system, it cannot metabolize food properly and thus lacks the nutrients needed to run the body. If an agglutinating food is the first thing eaten (such as Farina), it attacks the bloodstream, interfering with the production of insulin. An excessive amount of calcium in the blood first thing in the morning would create the imbalance between magnesium and calcium. A flux of calcium ions entering the nerve cells, coupled with the inability to produce insulin (hypoglycemia), is the exact recipe for an electrical storm inside the brain.
Thirty-five years ago, in a proprietary formula used for bottle-feeding babies, "the vitamin B6 was inadvertently destroyed during sterilization, causing widespread seizures in infants. The newborns were cured with a B6 supplement, but this situation dramatically shows the impact B vitamins have on the nervous system" (6). By the way, babies who are breast-fed by mothers eating a low-B6 diet can also have seizures. Why isn't there more information about the glaring connection between seizures and nutrition? There is a "seizure diet" on the market, the Ketogenic diet; however, it is only recommended for children aged 1-6 years (7), and even then only in extreme cases. Does this indicate that there is no hope for people who suffer from seizures and that they will be on medication for the rest of their lives? I don't know. Generally speaking, epilepsy is still a mystery to scientists, and in more than half of all cases of people with recurring seizures, scientists have yet to identify a cause. Research is slow, and due to the severe impact a seizure has on the brain, participants are scarce. Nervous disorders seem to occur when our systems are out of whack, or out of balance. D'Adamo's assertions ring of truth, and I believe there's matter to his words, matter worth looking into.
1) D'Adamo, Peter. Eat Right 4 Your Type. New York: Putnam Pub Group, 1996. Learn the whole philosophy/ anthropology behind the Blood Type Diet, including the foods your type best metabolizes, and natural options for treating and preventing disease.
2) IfHi Homepage. A very informative and enlightening web site about the group's dedication to naturopathic medicine and its goal of benefiting mankind through the development of new applications and practices in naturopathy.
6) Article by Dr. Aesoph on B vitamins. The importance of B vitamins to our body's health should never be taken for granted. The site provides detailed information on the essential vitamin B6. Doctor Aesoph offers some compelling reasons to start taking supplements....now.
7) Ketogenic Diet UK Home Page. Provides basic information about the ketogenic diet, which is high in fat and is sometimes used as a last resort in young children to aid in curing seizures.
Cancer in Cosmetics Name: Charlotte Date: 2003-11-09 22:01:21 Link to this Comment: 7165 |
You might want to think twice before you apply your blush and mascara every morning. Putting on make-up is part of a woman's daily routine: before going to work, before going on a date, or just simply to make herself look more attractive. But has anyone considered the idea that applying something unnatural to your body might be harmful? We question taking medication, eating certain foods, and breathing polluted air, but make-up has never become an important issue, even though it too can be harmful to human beings. The truth is, make-up contains hundreds of toxic chemicals that are carcinogens and put humans at a higher risk of developing cancer. Some science researchers have been in constant battle with the U.S. Food and Drug Administration (FDA) to take better control of cosmetics, for many consumers are unaware of what they buy. (1) What consumers, and mainly the FDA, do not realize is that the same toxins that float in the air, pollute our environment, and affect our health can basically be found in a tube of mascara or a jar of skin cream. However, the FDA does not seem to place cosmetics high on its priority list; instead, its attention has gone to tobacco smoking as the number one cause of cancer. The main idea behind all this is that anything unnatural can put one's life at risk: cancer just happens to be one example.
The most harmful cosmetics are the ones that are easily absorbed into the skin. Skin creams, blush, concealer, mascara, pencils, lipstick, and eye shadow all contain oils that allow the chemicals to penetrate the skin easily. These particular products contain chemicals that are associated with cancer. A lot of other products, such as hairspray and perfumes, can be inhaled and can affect the lungs as well. (2) These types of products are used on a regular basis by most women, who are ignorant of the consequences they might have on their health.
Researchers have discovered where the cancer risk in cosmetics originates. Cosmetics contain harmful chemicals known as phthalates, which are commonly used to soften plastics. (6) Over decades, these chemicals can be mistaken for hormones by hormone receptors and alter cell structure, which can then initiate the disease. (2) However, most people are unaware of this because the labels on certain cosmetic products give poor warnings of the risks of using them. Even though it is required that 10 to 20 chemical names be listed on the back of cosmetic products, a label stating the risks of using the product is still lacking. The FDA has been held responsible for neglecting the dangers of these cosmetics by not requiring warning labels for the ingredients used within the product. According to Dr. Epstein, a professor at the University of Illinois at Chicago School of Public Health, the FDA "violates the 1938 Federal Food, Drug and Cosmetic Act which mandates that each ingredient used in a cosmetic product shall be adequately substantiated for safety prior to marketing." (1) The reason for this negligence is that cosmetics are not taken seriously because their role is to beautify women, whereas drugs and food have obvious effects on the human body. But over the years these toxic chemicals have taken, and will continue to take, their toll, contributing to cancer in thousands upon thousands of people (mostly women).
According to the United Nations Environment Programme, there are "70,000 chemicals [that] are in common use across the world with 1,000 new chemicals being introduced every year." Within the cosmetics industry, the National Institute for Occupational Safety and Health has found about 900 of these chemicals to be toxic. Many other organizations have carried out research and arrived at similar figures. (2) The U.S. Public Interest Research Group, for example, counts roughly 100,000 different chemicals in use, about 400 of which are toxic and have been found in human blood and fat tissue. (3) The numbers are high and point to a real danger. Another group, Women's Voices for the Earth, points out that "Labeling requirements are so lax that many containers list the ingredients inside the sealed package in font fit for a flea and many don't list the culprit at all." (1) As a result, women are unknowingly at a higher risk of developing cancer.
These toxic chemicals are not only linked to cancer; they also pose other dangers, including genetic damage, reproductive toxicity, immune system disorders, and infertility. (1) Children are especially vulnerable to these products: they are at a higher risk of birth defects, childhood leukemia, and brain tumors from breathing in toxic chemicals. (3)
Unfortunately, although many petitions have been signed by various health groups and sent to the FDA, the issue of cancer in cosmetics has not yet made the front page of a newspaper. Perhaps regulators are waiting for enough deaths to prove the presence of toxic chemicals in cosmetics before they take action and add warning labels. That is not to say that warning labels would keep consumers from purchasing cosmetics; warning labels on cigarette packs have not kept people from smoking, but they have certainly reduced the number of smokers and of people affected by lung cancer. In this sense there is hope for cosmetic consumers of the future. For now, though, it is important to spread awareness. It is also important to realize that this problem is not limited to cosmetics: there are many products we use on a regular basis without having the slightest idea of what they are made of. We are exposed to toxic chemicals daily and, more often than not, do not even notice until it is too late and cancer has already been diagnosed.
1)The Links Between Cancer And Cosmetics
2)Make-up Holds Hidden Danger of Cancer
3)From Detergents To Cosmetics, Home Is Where The Cancer Is
The Science of our Justice System Name: La Toiya L Date: 2003-11-11 03:23:53 Link to this Comment: 7190 |
Science and the judicial system are two concepts that at face value seem very distinct, each unique in its own nature, but that at their cores share interesting parallels. Each proposes a different way of understanding how we comprehend and organize order and structure within institutions, yet they do so with similar strategies. In this paper I will lay out my understanding of both, the characteristics they share, and how these similarities show them to be inextricably connected to what we call life and to the human experience.
Although science is largely built on observation, experiments, and their results, it is often controversial because perspective and experience play a key role in how data are interpreted. And since perspective and experience inevitably vary from person to person, how can we assign concrete truths to such a varied conceptualization? Scientists fuse logic with philosophy.
Traditional science often fails to provide theories and explanations for phenomena that hold true both in a scientific context and in the context of the human mind. I feel that science often caters only to a "black and white" way of formulating answers, failing to recognize the gray areas. Oftentimes people look for the most common and accepted ways to support their theories, and in doing so they conform to standard, more traditional ways of viewing the world. This leaves less room for creativity and exploration of the mind when trying to formulate "truth". "A body of assertions is true if it forms a coherent whole and works both in the external world and in our minds." Roger Newton (1)
Much like science, the justice system in this country is very much based on experience. Although an understanding of the law rests largely on formal education, logic, and reasoning, there is more to law than these solid, concrete elements. Experience plays a key role because before obtaining any form of judicial authority one must practice and "get a feel" for what the position entails. Through these experiences one acquires the personal, first-hand knowledge that is necessary before venturing out into the field. The judicial system poses a problem similar to that of traditional science. I believe the laws in our justice system are far too clear-cut. There are many gray areas when it comes to crimes, political decision-making, and societal issues. I feel that our constitution, on which our laws are based, is too limited; many of the pressing issues in our society, such as abortion and gun control, lie on the borderline between right and wrong. It is hard to reach a resolution because of the strict, limited language of our laws and because there is more to these problems than laws: they involve emotions, perceptions, culture, and perspectives, none of which are taken into consideration in legislation.
The Pro-Life versus Pro-Choice debate is controversial and complex because there are so many ways to examine the issue, all of which have valid points depending on the light in which you view it. Abortion is a societal issue as well as a political one. It is highly sensitive because of its direct connection to our emotions and personal values. Politics and law also play a major role, because so much legislation has been passed concerning the issue. The government is dealing with abortion on many levels: the courts, federalism, judicial review, and the separation of powers are all involved. In 1973 the Supreme Court declared abortion a constitutional right. (2) Scientists have declared the fetus a living thing, and it is clearly illegal to kill another human being, yet it is perfectly legal to have an abortion. When the issue is examined thoroughly one can see how controversies arise and remain in debate. So this case really depends on how one looks at it, which poses a problem: agreement and a middle ground are almost impossible to reach because people, especially those with strong opinions on the matter, can see credibility only in their own values and position. The choice is thus highly dependent on personal perspective, morals, and experience. Although constitutional law governs abortion, science clearly plays a role of equal importance and authority.
Gun control is deeply rooted in controversy and is the epitome of a gray area between right and wrong. There are two conflicting sides, those in favor of gun regulation and those against it. It is an issue for the nation as a whole, but it stems from the divisions among this country's many cultures. Those who have grown up in a culture where hunting is a family and cultural tradition are strongly against gun control, while people who did not grow up with hunting as a sport do not see the same value in it. The conflict is rooted not only in values but also in politics; the sum of each side's experiences is the reasoning behind its position. Both science and the judicial system produce gray areas when they try to understand and rationalize, and both are inextricably connected to life. Holmes convinced people through his work and writings that the law should develop along with the society it serves. If this is true, then the law should always be changing, because society is constantly changing with time and experience. "The life of the law has not been logic: it has been experience" (Oliver Wendell Holmes). We systematically try to put life in a box to create order; order ensures a kind of comfort, but that comfort often gets in the way of open-mindedness. The human mind by itself is a vast, convoluted universe. We as scholars, scientists, and humankind need to understand that by assigning concrete truths, rights or wrongs, we are limiting the extent of our intellectual capacities.