Serendip is an independent site partnering with faculty at multiple colleges and universities around the world. Happy exploring!

Remote Ready Biology Learning Activities

Remote Ready Biology Learning Activities is a collection of 50 activities that work for either classroom or remote teaching.


Bio 103 Web Paper Forum


Comments are posted in the order in which they are received, with earlier postings appearing first.

Cervical Cancer: The Best Form of Prevention Is To Be Informed and Aware
Name: Melissa Te
Date: 2003-09-25 17:55:29
Link to this Comment: 6632



Biology 103
2003 First Paper
On Serendip

Cervical cancer is the second most common cancer among women and the leading cause of cancer death among women in underdeveloped countries. In fact, about 500,000 cases are diagnosed each year worldwide. This particular cancer is found mainly in middle-aged to older women; it is very rare in women fifteen and younger. The average age of women with cervical cancer is 50-55; however, the cancer begins to appear in women in their twenties (2). It is also more common among women in lower-income areas, who are often unable to see a gynecologist to be screened. African American, Hispanic, and Native American women are more prone to developing cervical cancer as well (1).

The cervix is an organ in the female reproductive system; it is the entrance to the uterus. Cancer of the cervix develops in the lining of the cervix. The normal cells go through abnormal changes and become precancerous cells. These changes are called Cervical Intraepithelial Neoplasia (CIN). CIN is categorized as low grade or high grade. It progresses to one of two conditions: (a) Squamous Intraepithelial Lesion (SIL), which leads to invasive cervical cancer, or (b) Carcinoma in Situ, which is non-invasive, or localized, cervical cancer (1).

The causes of cervical cancer are unknown. However, scientists believe that there is a link between two kinds of Human Papillomavirus (HPV) and the cancer. HPV is a group of 100 different viruses. Some types of HPV cause warts and are considered "low-risk" when discussing causes of cervical cancer. Other types of HPV, however, cause precancerous conditions that can result in different types of genital cancers, and are therefore considered "high-risk" (1). HPV is a sexually transmitted disease and is extremely contagious. Recent research has shown that condoms do not completely prevent HPV from being transmitted. Women with HPV usually do not have symptoms, and at times will never develop the cancer; in the same manner, some women have developed the cancer without ever having had HPV (2).

Also, Human Immunodeficiency Virus (HIV) increases the likelihood of the precancerous cells developing into cancer. This occurs because HIV weakens the immune system, and a woman with HIV is unable to fight off HPV and precancerous abnormalities (2). Scientists have also found that smokers are possibly twice as likely to develop cervical cancer. Cigarettes release many chemicals that cause cancer (1). When a woman smokes a cigarette, these chemicals enter her bloodstream and are carried to all parts of the body. These chemicals are also believed to damage the DNA in cervical cells (2). Scientists are also examining the effects of oral contraceptives. No direct links have been found, but there is some statistical evidence suggesting that women who have taken oral contraceptives for over five years have an increased risk of developing the cancer (2).

Often there are no symptoms while developing the cancer. For this reason, it is extremely important for a woman to have an annual Pap smear test, as this is the only way to detect any kind of abnormalities. The Pap smear evaluates the cells of the cervix under a microscope. It looks for three signs: (a) inflammation of the cells, (b) the amount of estrogen in the cells and (c) the presence of precancerous cells. The test is 90-95% accurate in finding an abnormality (3). Some possible symptoms of the cancer, however, are abnormal vaginal bleeding, abnormal vaginal discharge, low back pain, painful sexual intercourse, and painful urination (1).

To be diagnosed with cervical cancer, a woman would need to have a Pap smear test with abnormal results. The most common abnormality is called dysplasia, the presence of precancerous cells. Dysplasia is caused by CIN or by low- or high-grade intraepithelial lesions. In simpler terms, dysplasia is abnormal cell growth (4).

The best form of prevention of cervical cancer is to be informed and aware. If you are a woman age 18 or older, whether you are sexually active or not, go see a gynecologist. Why is this so important for women our age? Because at the age of 19, I was diagnosed with cancerous abnormalities of the cervix. Here is what happened.

My mother never thought it necessary for me to see a gynecologist. But when I came to Bryn Mawr, I took it upon myself to see one. My freshman year, everything in that department was perfectly normal. At the beginning of my sophomore year, I saw the gynecologist again for my annual Pap smear. Two weeks later, I received a message from the doctor asking me to make an appointment to see her immediately. It turned out that the results of my Pap were extremely abnormal. I had high-grade dysplasia and lesions on my cervix. So what do I do next? I thought. I asked if I could retake the Pap, as there are many false negatives and false positives; maybe this was just a false positive. But the gynecologist here in Bryn Mawr, as well as the other two doctors that I got second and third opinions from, all felt that I was at too much of a risk to waste time with another Pap. They told me that on a scale from zero to four, zero being perfectly normal and four being invasive cervical cancer, I was at a three.

We went ahead with a test called a colposcopy. (These next procedures cannot be done in the Bryn Mawr College Health Center, so I went back to my hometown for the rest of my treatments.) The colposcopy was done in my gynecologist's office, and, in retrospect, it was not terrible at all. My doctor applied a vinegar solution to the surface of my cervix, which highlighted the infected areas. She then used a microscope to look into my birth canal and have a closer look. The image of the infected area was transmitted to a monitor that we were both able to look at. During the same office visit, I also had a biopsy. For this procedure, my doctor removed a small piece of the surface of my cervix to be sent to a lab. Thankfully, the biopsy did not hurt at all, as there are very few nerve endings in the cervix.

At this point, I thought to myself that these tests would come up negative, because after all, I felt fine. I felt completely healthy. I had absolutely no indications or symptoms of anything whatsoever. But I was wrong. My gynecologist called me about two weeks later to tell me that my cervix was, in fact, infected with cancerous cells. The next step was surgery. The surgery I underwent is called a cone biopsy. It was done under general anesthesia, so I was not awake for any of it. While I was sleeping, my doctor removed a large piece of my cervix, starting with the entire surface and cutting into the back of the cervix in a cone shape. The point of this surgery was to obtain a large enough sample of the infected area to see how deep the cancer was.

Another two weeks went by and I received a call from my gynecologist. It turned out that I had non-invasive cervical cancer, and they had successfully removed the entire affected area during the surgery. Where did we go from there? Every three months I had to have another Pap smear. This past summer I had my third Pap since the surgery, and all of them have had normal results. My next Pap will be in six months, and if that test also has normal results, I will be back to my annual Pap, just like everyone else.

Here is the unsettling part: Had I skipped that Pap smear during my sophomore year, I would have had invasive cervical cancer within a year and would have had to undergo chemotherapy. So once again, and I cannot stress this enough, the best form of prevention of cervical cancer is to be informed and aware. If you are a woman age 18 or older, whether you are sexually active or not, go see a gynecologist.

References

1) Oncology Channel
2) American Cancer Society: Do We Know What Causes Cervical Cancer?
3) BestDoctor.com: Pap Smears
4) Cervical Dysplasia Causes


Migraine: The Unbearable Headache
Name: Diana E. M
Date: 2003-09-26 21:05:51
Link to this Comment: 6646

Migraine: The Unbearable Headache

I often remember my grandmother lying down on the couch with an agonizing look on her face. At times like these, she'd frequently ask us to turn down the volume of anything seemingly too loud, or to dim the lights. Grandma was going through her common, yet terrible, incidents of migraine headaches. As a child, I never really understood why aspirin wouldn't help her pain. After all, that's what we all took when we had a headache, and soon enough we were back to feeling fine. Little did I know of her "condition" until, as an adolescent, I experienced for the first time what my sisters and I jokingly called "the grandma episodes." The pain was so terrible I could barely eat, drink, move, talk, or see things the way they normally looked. Flashing lights overtook my vision and a nauseating feeling kept me hidden in my totally dark bedroom, attempting to make the overall misery go away. I went from prescription pills that would only relieve the other symptoms, to inhalers that would knock me out after a couple of minutes. I also tried green apples, stopped drinking caffeinated beverages, kept a journal, tried breathing exercises, and nothing really helped. So, where was science? Why was it not coming to my aid? Controversies over the origin of migraines, and TV specials regarding what to do about them, would always leave me empty-handed. In time, I came to accept the fact that no one had real answers and that I had to live with my condition the best way possible.

But what exactly do scientists say a migraine headache is, and what does science have to contribute? A migraine headache is considered a vascular condition associated with changes in the size of the arteries within and outside of the brain, causing them to throb and spasm. The National Headache Foundation estimates that 28 million Americans suffer from migraines, and these occur about three times more frequently in women than in men. A quarter of all women with migraines suffer four or more attacks a month; 35% experience one to four severe attacks a month, and 40% experience one or less than one severe attack a month. Each migraine can last from four hours to three days, occasionally longer. Studies have shown that per 100 people, about 5.5 days of activity are restricted per year due to migraines. In addition, 8% of men and 14% of women miss all or part of work or school in a 4-week observation period. In the US, annual lost productivity due to migraines is estimated at over $1 billion (1). So, shouldn't we be more eager to find a solution, as opposed to getting the usual "this is the best we can do" when we go to the doctor?

Migraines are typically characterized by intense, pulsating pain on one side of the head, frequently with pain behind one eye, nausea, vomiting, and sensitivity to light and noise. Some, yet not all, migraines are preceded by an aura -- visual disturbances that happen up to an hour before the actual headache begins (which I often experience). These auras are described as bright shimmering lights around objects or at the edges of the field of vision (called scintillating scotomas)(2) or zigzag lines, wavy images, or hallucinations - even temporary vision loss. On the other hand, nonvisual auras include motor weakness, speech or language abnormalities, dizziness, vertigo, and tingling or numbness of the face, tongue, or extremities.(2)

In recent years, scientific studies have shown that there is certainly a strong genetic component in migraine with or without aura. Researchers have located a single genetic mutation responsible for the very rare familial migraine (thanks, grandma), but a number of genes are likely to be involved in the great majority of migraine cases. A number of chemicals, structures, nerve pathways, and other players involved in the process are under investigation. According to a study published in the American Journal of Human Genetics (3), a group of researchers from the University of Massachusetts Medical School identified an "MO-susceptibility (migraine without aura) locus on chromosome 14q21.2-14q22.3. Yet, further studies are required to identify the causative MO gene in the studied family and to delineate the role of this locus in other families affected with MO."(3) As of now, no clear results have been given on how this possible genetic condition can be remedied, but the search is still on. Unfortunately for me, it seems to be taking just way too long. On the other hand, it is also true that migraines are likewise triggered by non-genetic factors such as stress, sleep disorders, fatigue, hormonal changes (especially during the menstrual cycle), dietary issues, weather changes, smoking, caffeine withdrawal, alcohol, glare from a light source, and anxiety, among other variables (4). However, triggers for headaches vary depending on the type of headache and on the individual. A sound or smell that can trigger a migraine in one person, for example, may have no effect at all on another. Moreover, triggers do not "cause" migraine. Instead, they are thought to activate processes that cause migraine in people who are prone to the condition. A certain trigger will not induce a migraine in every person, and, in a single migraine sufferer, a trigger may not cause a migraine every time. Being able to identify the triggers that set off the headaches can at times help sufferers avoid them or learn to cope with them more effectively.

As a victim of this neurological monster, I have ceaselessly looked to science for a solution to my dilemma. One thinks that as technology progresses at the speed of a mouse click, science is equally able to provide answers to issues troubling so many Americans. However, the answers are not always there. Many times, as in my case, one just has to find relief within, by bringing one's self into specific states of mind that may ease some of the tension caused by the illness. Even if that means spending hours curled up in a ball in your dark room, praying for the pharmacist to give you better news the next time you show up. I've come to see that with science you just have to wait and see what happens next. And even then, there is usually some more waiting. Nonetheless, this research has allowed me to understand that science does take daily steps toward understanding the source of our dilemma, and that even if they are seemingly slow steps, each attempt comes closer to making previous research less wrong. But then, will science ever get it more right? Because every time I hear on the news that new discoveries have been made about migraines, I get excited. Yet my doctor is still telling me there is very little he can do for me. So how do we reconcile?


References:

1) National Headache Foundation: Educational Resources, www.headaches.org

2) The Neurological Channel: Signs and Symptoms, www.neurologicalchannel.org

3) American Journal of Human Genetics: A Locus for Migraine without Aura Maps on Chromosome 14q21.2-q22.3, electronic edition, Chicago: University of Chicago Press, 2002

4) Discovery Health Online: Migraine Madness, www.health.discovery.com

5) National Center for Biotechnology Information: www.ncbi.nlm.nih.gov


Diabetes, Minority Status, and the African American Community
Name: Paula Arbo
Date: 2003-09-28 12:40:30
Link to this Comment: 6650



Biology 103
2003 First Paper
On Serendip


In March of 2003, a bill known as the "Minority Population Diabetes Prevention and Control Act of 2003" was introduced to Congress, and then referred to the Committee on Energy and Commerce. According to this bill's findings, "minority populations, including African Americans, Hispanics, Native Americans, and Asians, have the highest incidence of diabetes and the highest complications of the disease" (1). The alarming rate at which the incidence of diabetes is affecting African American and Hispanic American communities has led the government, health care professionals, clinics, and other organizations to begin to question the process by which information and treatment is being accessed by members of these communities.

Diabetes mellitus is defined as "a group of diseases characterized by high levels of blood glucose, which result from defects in insulin secretion, insulin action or both" (2). There are two types of diabetes: one that "occurs when the body produces little or no insulin, and that typically affects children and young adults," and another that "typically develops in adults, and occurs when the body does not use insulin effectively," type II diabetes being the more common (3). According to the CDC and the National Center for Health Statistics, "the number of Americans with diabetes in the year 2000 was 17 million or 6.2 percent of the population, as compared to 15.7 million (5.9 percent) in 1998" (4). On average, however, Hispanic Americans and African Americans are almost twice as likely to have diabetes as white Americans. In addition, African Americans and Hispanic Americans show a higher incidence of diabetes-related complications, including but not limited to eye and kidney disease, amputations, heart disease, and stroke (5).

Various factors are said to increase the chances of developing type II diabetes. These factors fall under two categories, genetics and medical/lifestyle risk factors, which include impaired glucose tolerance, gestational diabetes, hyperinsulinemia and insulin resistance, obesity, and physical inactivity (6). Although studies have shied away from making direct correlations between obesity or physical activity and susceptibility to type II diabetes, researchers suspect that a lack of exercise and obesity, as well as other unidentified factors, may be contributing to the high diabetes rates in African American and Hispanic American communities. The NHANES III survey indicated that "50 percent of African American men/65 percent of Mexican American men, and 67 percent of African American women/74 percent of Mexican American women participated in little or no exercise" (7). In addition, both African Americans and Hispanic Americans experience higher rates of obesity than white Americans, and these rates continue to rise.

With this information in mind, it is necessary to examine the prevalence of this disease within these groups while also examining the disparities in access to care, treatment, information, and health insurance relative to Caucasian patients, both diabetic and non-diabetic, and how these disparities further complicate the ability of these communities to combat diabetes and other health-related problems. An associate professor at the Johns Hopkins Bloomberg School of Public Health stated that "simply expanding insurance coverage to previously uninsured minority patients, although helpful, may not overcome disparities in the qualitative experience of primary care among racial and ethnic groups. It is particularly crucial to identify disparities in the experience of primary care across racial and ethnic groups, since the minority population will almost equal the size of the non-Hispanic white population by the middle of the next century" (8). At the same time, however, the issue is not only about preferential treatment and/or differences in health care options among minority and Caucasian groups; it is also about getting minorities to access any type of health care, regardless of whether they have health insurance. The reality is that minorities are the most likely to have no health insurance and therefore no access to health care. Hence, "lack of health insurance is linked to less access to care and more negative care experiences for all Americans. Hispanics and African Americans are most at risk of being uninsured. Nearly one-half of working-age Hispanics (46%) lacked health insurance for all or part of the year prior to the survey, as did one-third of African Americans. In comparison one-fifth of whites and Asian Americans ages 18-64 lacked coverage for all or part of the year" (9).

Lack of access to proper health care, low rates of health insurance, and a growing diabetes epidemic in African American and Hispanic American communities make for a complicated and alarming set of circumstances. However, this kind of alienation from medical care, and its implications for these communities, is a small part of a greater dialogue and debate about the status of minorities in the United States and their ability to access adequate cultural, social, and economic capital. I would argue that diabetes could be replaced in this discussion by many other diseases and health-related problems: HIV/AIDS, obesity, strokes, high blood pressure, and so on. The question that remains is whether minority groups can gain access to health care and other types of capital, from which many are deprived, without changing or challenging the existing structures.

1) Minority Populations Diabetes Prevention and Control Act of 2003

2) Diabetes in Hispanic Americans

3) Diabetes in Hispanic Americans

4) Diabetes Among Racial and Ethnic Minorities in Nebraska 1992-2001

5) Diabetes in African Americans

6) Diabetes in Hispanic Americans

7) Diabetes in African Americans

8) Minorities' Primary Health Care Substandard Compared to Whites

9) Minority Americans Lag Behind Whites on Nearly Every Measure of Health Care Quality


Emergency Contraception
Name: Megan Will
Date: 2003-09-28 21:55:14
Link to this Comment: 6654



Biology 103
2003 First Paper
On Serendip

Emergency Contraception

There are many myths surrounding the use of emergency contraception. The question of what it is and when to use it is just a fraction of the controversy surrounding this arguably new practice. Emergency contraception is a method of preventing pregnancy after the act of unprotected sexual intercourse. It does not protect against sexually transmitted diseases. Moreover, emergency contraception cannot be obtained without a prescription. Why does the US government not trust women with the choice of making sure they do not get pregnant after having unprotected sex? If abortion is a choice and abortion terminates a life, why can women not have the choice to make sure they do not need an abortion? What is wrong with preventing an unwanted pregnancy?

There are two types of emergency contraception: pills (ECPs) and the copper T intrauterine device (IUD). There are two distinct pill types, the brand name "Preven" and the brand name "Plan B". Preven contains the same hormones as regular birth control, estrogen and progestin (1). It causes more instances of nausea and vomiting than Plan B, and decreases the chances of pregnancy by 75%. However, Preven can be used as an ongoing form of birth control. Plan B contains only the hormone progestin. It is more effective, decreasing chances of pregnancy by 89%, and has less of a chance of side effects (1). These pills can be taken immediately after sex, or up to 72 hours later (2).
The other form of emergency contraception, the IUD, can be inserted up to five days after the unprotected sex and is more effective than the pills (a 99% decreased chance of pregnancy) (1). An IUD can be left in for up to 10 years as a form of birth control, but in some cases it can lead to pelvic infection, which in turn could lead to infertility (1).

Emergency contraception works in three ways: it slows down ovulation, it stops the fertilization of the egg, and it stops the attachment of the fertilized egg to the uterine wall (2). It is not an "abortion pill" or RU-486 (2).
It does not kill a baby, as no baby is ever formed. Emergency contraception can be used in instances of a broken condom, sexual assault, or any other case of unprotected sex. This is part of the issue surrounding its use. Many physicians do not think that it should be used in any situation except a true emergency. This is part of the reason that emergency contraception is not an over-the-counter drug. It is approved by the FDA, but all but three US states require a woman to see a physician before she can get a prescription for it. About one half of unwanted pregnancies are due to the failure of a contraceptive. Similarly, about one half of unwanted pregnancies end in abortion (3).
What would be worse: killing an unborn child, or making sure that the child is never biologically formed?

However, many pharmacies refuse to stock ECPs; one example is Wal-Mart. Many groups, such as the AMWA, see this as a "denial of emergent care" (3). In fact, the American College of Obstetricians and Gynecologists "estimate that emergency contraception could prevent 800,000 abortions and 1.7 million unintended pregnancies in the United States each year" (4).

ECPs are available through Planned Parenthood. They are priced on a sliding scale, with an average cost of $20-$25 for the pills and $30-$35 for a visit (5).

Many groups are adamant about making emergency contraception an over-the-counter purchase. Women in other countries, such as France and Britain, already have the option of such a purchase. The American government still denies requests to make these pills readily available to American women. Why is this so? In a country where abortion is such a moral issue, you would think that an alternative to having to end a life would be widely welcomed by all sides of the issue.

WWW Sources
1) emergency contraception at princeton
2) teen forum on ec, myth/fact based site
3) publication paper, stand on emergency contraception
4) ec connection, valuable resource
5) plannedparenthood, good price list


The Amazing Cheesy Adventures of Professor Sanderson's Paleobiology Class
Name: Brittany P
Date: 2003-09-28 21:56:07
Link to this Comment: 6655



Biology 103
2003 First Paper
On Serendip

***

I hope you guys have as much fun reading this as I did writing it. ^_^

***


The Amazing Cheesy Adventures of Professor Sanderson's Paleobiology Class!

Investigation 1: Where did mammals come from? Or: Therapsids!

**

Professor Sanderson's class was popular. Partly this was because he was a well-meaning psyinstructor; the images he crafted were neat, cohesive, and usually entertaining. Mainly, though, it was because he was a young male teacher at an all-girls' college, who had the fortune to resemble Jai from "Queer Eye for the Straight Guy." These two factors led to an unnaturally high enrollment in Paleobiology 101. No fewer than fifty-two girls sat chittering in the classroom before he appeared each day, punctually, at 10 a.m. Most were more intrigued by *his* anatomy than that of the long-dead tetrapods to which he devoted his lectures.

Today's attendance was especially high. It was a Field Trip day. The term wasn't literal. There was no trip involved---simply the students closing their eyes and falling into the trance-like state from which the professor led their excursions. There, on the collective canvas of their psyches, he would build that day's lesson, sculpting visceral images from his expansive knowledge of biology and his even more expansive creativity.

Today's lesson was mammalian origins.

"Where do we come from?" he had asked, by way of preamble. "We all know the basic answer, or think we do. Apes, right? And apes from primates, and primates from mammals, all well and good. But where do mammals come from? I mean, what did mammals evolve from, and what were the major evolutionary steps they took to get there? Doesn't that sound fascinating? "

The class eyed him warily. A few actually listened. The rest swooned.

"Today we're going to try and explore that question. If you'll all take out your textbooks, flip to page 137, lean back, orient your touchpads, and close your eyes..." he waited while the actions were performed. "We'll be going to the Permian. That's the time right before the Triassic period, which started the age of the dinosaurs. It's approximately 300 million years in the past." (1)

He briefly surveyed the class, then looked thoughtful for a moment. "Wait. I guess I should give you a little background first," he relented. "The main thing we're going to see today is a group of animals called the therapsids. They were precursors to the mammals. Both they and reptiles were tetrapods, a category created for the earliest four-legged land animals. The therapsids lived in the Permian era, mainly, and were a hugely diverse group of animals. We'll see just how diverse in a moment." (2) If the class had been watching, they would have seen him shut his eyes and tap his temple, once, gently. "Hm, guess that's about it for now. Here we go."

**Click.**

The jungle was humid but sparse. No flowers, little underbrush; only thick conifers and ferns, and rock-hugging carpets of moss. Rivulets of water veined the damp soil. Professor Sanderson was standing casually next to a sprawling, eleven-foot-long lizard-like beast whose ridged sail-back opened like a fan into the heavy sunlight.

"This," he said, gesturing vaguely, "Is modern-day Texas. Currently it's squashed in the middle of the supercontinent Pangea, which only recently formed. We're actually around the equator. This---" he pointed out the creature at his knees--- "is your great-great-great-great... well, it keeps going. He's a pelycosaur, more specifically a Dimetrodon, and he's our collective ancestor." (2)

The class, arrayed in a circular fashion before him, looked skeptical.

"Looks like a lizard to me," one girl muttered.

The professor caught it. "I know he looks like one," he replied. He indicated the animal's parabolic dorsal ridge. "See this? It's like a solar conductor. He uses this thing to soak up sun radiation, and also to provide him with surface area for temperature control. But he's not a lizard."

The thing grunted and scratched itself. The girl raised an eyebrow. "Why?"

Sanderson snapped his fingers and the class started with disgust. The skin, muscle, and guts of the pelycosaur had disappeared, leaving only a brown-white skeleton which continued placidly scratching itself. Unruffled, he pointed to the skeleton's skull. "Behind the eye here, he's only got one hole for his jaw muscles to go through. Reptiles have two---he's really primitive. (3) Also, he's more mobile than reptiles. If he hoists himself up a little from a squat, his back legs can run straight-legged. Makes sense, considering he's endothermic like his mammal descendants. They were all less climate-dependent, and so they could afford to move more."

A different student raised a hand. "Uh, but if he's endothermic, why does he have that sail?"

Sanderson looked uncomfortable. "Don't rightly know," he admitted. "Some pelycosaurs didn't even have one."

"Was that group more advanced?"

He shrugged. "No, not from all we know so far. The group of pelycosaurs that turned into mammals, the sphenacodonts, included Dimetrodon. (3) But some sphenacodonts didn't have sails. We figure maybe they lived in cooler environments, like thicker forests, where they'd need less temperature regulation. Some even had smaller sails."

The student sniffed skeptically. "So they were like what, half warm blooded? A quarter less warm blooded? Why are we descended from the most cold-blooded one?"

Sanderson held up his hands. "Again, sorry, I don't know. Maybe we're not---the fossil record is sketchy here. We may well be descended from one of the more forest-dwelling types. Or maybe the sails weren't used for temperature regulation at all, but for something else, like display." Wryly, he snapped his fingers again and the pelycosaur regained flesh, this time of a strikingly iridescent color. "But don't say that around any paleontologists."

The pelycosaur yawned. Sanderson glanced at it, then back at his class. "In any case, this is still as far from a mammal as any synapsid---that's the largest grouping we'll see here, a vertebrate group of which mammals are the only surviving members---has a right to be. (3) We won't learn much about mammals from studying him. Let's get a little closer."

He raised a hand.

**Click.**

The landscape was much the same: level, grass-less, braided with rivers, shaded by conifers. However, the air had a different quality. It was cooler and drier, as if the altitude had increased. The students discovered that they were now wearing windbreakers.

Again, Sanderson stood about ten feet in front of them.

"These," he explained, reviewing the landscape with a smile, "are the central plains of Russia during the middle-to-late Permian, around 255 million years ago. This is where it really gets good."

The students blinked, glanced around. Apart from a few disturbingly-large insects, there was nothing in sight.

"Um, sir?" one student raised her hand tentatively. "What are we here to see?"

"Something very cool!" he chirped, looking for all the world like a kid in a candy shop. "Our first therapsids. They're the descendants of the pelycosaur you just saw, and the most significant jump towards mammals we'll see today." (2) He turned and began striding through the brush, beckoning one hand loosely behind him. "Come on, follow me."

They wove after him between the conifers, which were thicker and more complex than the ones in "Texas." After about a minute the ground began to slope upwards, and soon they emerged on a ridge whose treeless banks tumbled down into a muddy river valley.

"I thought it would be better if we watched them from a distance," the professor said.

Below, in the bowl-like crevasse the river had chewed into the floodplain, a bizarre and eclectic group of animals was busily going about its business.

"Those are therapsids," began Sanderson. "More specifically, a group of gigantic, somewhat primitive therapsids called dinocephalians. Name means 'terrible head.' (2) Accurate, eh?"

It was. The closest animal was a lumbering, hippo-like herbivore with a fat, stiff tail. Its head resembled that of a noseless pit-bull with an overbite. Despite its bowling-ball rotundity, it walked upright on a columnar pair of hind legs---clearly more mobile than the pelycosaur. Sanderson, for some inscrutable reason, had decided to color the entire species pink.

"Those pink ones are Ulemosaurs," he said, pointing. "Notice how much more they look like mammals! There's a definitive movement towards true endothermic metabolism here. See the short neck and tail, and how fat they all are? That's to reduce surface area, to conserve heat. They're beginning to regulate their own internal temperatures, since now the climate's a lot cooler." (2)

He paused for a second, watching the Ulemosaur wade heavily towards a group of water-ferns. "Unfortunately, all of that gut makes them sort of slow..."

The girl who had questioned endothermism raised an eyebrow. "Sounds like a premonition to me."

"Um," said Sanderson, one hand behind his back.

Not a moment later, something large and purple and striped wriggled from behind a clump of ferns and galloped at the Ulemosaurus. It attacked in comic slow-motion. The beast's gallop was actually more of a trot: although it bounced confidently along on four upright legs, its body was too thick-limbed for litheness.

Nevertheless, the Ulemosaurus was far outmatched. It tried pivoting its elephantine body, but only succeeded in turning broadside in the mud, widening the attacker's target. The striped animal drove into the Ulemosaurus's side, snapping, and bore it down beneath the brown water.

"Um," Sanderson repeated, glancing sheepishly at his students' shocked expressions, "Maybe planning that wasn't really a good idea. Uh...." The Ulemosaurus was making obese thrashings beneath the water. Sanderson's discomfort increased, and, decisively, he clicked his fingers. The Ulemosaurus disappeared, leaving its attacker nuzzling confusedly around the riverbed. "See, that carnivore there is called Inostrancevia," (4) he explained quickly. "It's the most advanced type of therapsid we've yet seen, from a group called the theriodonts that appeared right at the end of the dinocephalian reign. Theriodont means 'beast-toothed....'" (4) The class was still gaping at the water. He continued loudly, "Can, er, anyone tell me what they noticed about Inostrancevia's looks that was different from dinocephalian features?"

After a few moments, one of the more strong-stomached girls raised a tentative hand. "The way it ran," she said. "It was upright. It could trot. And the shape of its head was like, I dunno, a really ugly hairless dog."

Sanderson nodded palely. "Both right. By this point in their evolution, the therapsids were moving mainly like modern mammals---with legs beneath the body. This made them faster. They were the first animals that could trot. (4) And the head's very important as well. Theriodonts had shorter heads and better jaw muscles, which meant they could chew more." He patted the underside of his own jaw. "They had a bigger gap for muscles to go in, under here.

"Theriodonts also had more differentiated teeth---specialized for different jobs. (5), (6) Dinocephalians, on the other hand, couldn't chew much." He was calming down some, regaining his joviality. "It's why they had such big guts, to digest all that rough plant material." Sanderson mimed a sagging belly. "Theriodonts, I'm extrapolating here, could afford to be sleeker because they chewed more. And maybe it's that sleekness that allowed them the more erect form, which in turn gave them the speed. All because of chewing."

"But---" the girl furrowed her brows---"it couldn't have chewed that much. You agreed that it looked like a dog, and dogs don't really chew."

"Point taken," admitted Sanderson. "I should have said comparatively. This is just my own theory, understand. Theriodonts chewed more than dinocephalians, not more in any modern sense. Modern mammals, especially those with molars, are still tops in the chewing department." He rubbed the back of his neck, looking a little overwhelmed. "Speaking of modern mammals, you saw how comparatively fast the thing was?" The class nodded vaguely. "That's from internal temperature regulation again. The theriodonts were another step in the warm-blooded direction." (7)

In the basin, Inostrancevia finally realized that its prey had eluded it. Honking a snort of defeat, it begrudgingly extracted itself from the mud and trotted off. The class let it disappear into the foliage before raising more questions.

A student who hadn't spoken before raised a hand. "Is that, I mean, the temperature regulation, why they eventually got fur?"

The professor nodded. "I would guess so. Although we can't know for sure when or where or even exactly how. For example, some paleontologists depict Inostrancevia with fur. I don't think it appeared that early on, so I didn't."

"Yeah. You made them purple," she noted dryly.

"I like purple," he defended.

The class had no further questions. Most were too preoccupied with examining the underbrush for concealed predators.

"Right then," Sanderson declared. "Since I guess we're done here, it's time for us to move to our final stop on this field trip... the closest therapsid-mammal ancestor we'll be seeing today. This guy is pretty famous, so try to remember him."

He looked markedly relieved as he raised his hand.

**Click.**

The forest was dense, coniferous, and dark. Clumps of waxy ferns splayed upwards between mounds of rotting logs. The air, though cool, had a spicy humidity. The class was standing tightly-packed in the pit of a deep gully. Sanderson perched nearby on the arm of a fallen tree. He was gazing intently at a spot about ten feet away.

Beneath the shadow of a gutted tree-trunk lay a squiggling mass of stiff fur. The animal, which lay serenely on its side, was house-cat sized, with a mole's pinpoint head and a dog's protruding claws. Four or five smaller tufts of fur were tugging at its middle: pups, nursing.

"This little family," introduced the professor quietly, "belongs to the species Dvinia prima. They're one of the most advanced types of therapsid, from a group called the cynodonts. They're classified as theriodonts as well, although they're obviously a lot more advanced than Inostrancevia. Cynodont means 'dog-tooth,' by the way," he added, catching the pensive look on the face of one of the Latin majors. "They're our most direct Permian ancestors." (6)

"So this is still the Permian?" a student whispered.

"Probably," Sanderson answered. "Dvinia lived in both the late Permian and early Triassic. This could be either, although I'm inclined towards Permian, since there's still a lot that looks alive around here."

The student crinkled her nose in preemptive distaste. "What do you mean, *alive*?"

"The Permian extinction. Wiped out 70% of land species around 251 million years ago. (8) I probably would've simulated a lot less diversity if this were the early Triassic, even plant-wise." He tilted his head to gaze upwards into the forest canopy. "But I sort of like trees, so I stuck with Permian. Dvinia lived in this sort of cool forest, anyway."

Her nose remained crinkled. "They look like rats," she said.

Sanderson nodded acquiescingly. "They do. But they're not mammals yet. Very, very close, but no cigar. Later cynodonts like Dvinia here have most of the main traits we call mammalian. Fur, most obviously. Diversified teeth, for chewing. Almost exclusive endothermism. And a bigger brain, although they're still pretty dull compared to modern mammals." He paused to smile fondly at the little family. "They also nurse their young." (7)

The "rat"-comment girl was deadpan. "You think they're cute, don't you?"

"I do."

"You *are* Jai."

He blinked, puzzled, but opted to ignore the comment. "Most of the differences that separate Dvinia from 'true' mammals are trivial," he continued. "Things like a fused brain-case, the arrangement of the middle ear, sweat glands, etcetera." He ticked the traits off on his fingers. "It gets pretty hard to tell the difference from here on out. Externally, it's near impossible." (7)

One of the nursing furballs raised its snout in a minuscule yawn. "Aww," murmured the class, save the girl who had termed them rats, who scowled.

The student who had questioned endothermism waved her hand for the third time. "So if the differences between these guys and mammals are so trivial, how are we to say what's a mammal and what's not?" she asked. "I mean, I read in my college seminar that some scientist in the 1700's coined the term 'mammal.' How do we know it's not just an arbitrary grouping?" She waved a hand at the Dvinia family. "How are we to say that those aren't mammals?"

Sanderson shrugged, smiling. "I don't know. They certainly elicit the same response a group of baby mammals would. I mean, they're so cute! Our emotions identify them as mammals. Science says they're not. Who's to say who's right?" He turned to the bundle of tiny furballs, still grinning. The mother Dvinia, blissfully unaware of his presence, flicked one fleshy ear and wiggled to better accommodate her children.

The student mimicked the professor, shrugging. "Does it matter?" she asked.

The class seemed to mull this for some time. "I don't think so," another student finally answered, quietly. It was the girl who had called the theriodont an "ugly hairless dog." "I mean, we're going to say 'aww' at both these guys and puppies, and 'ugh' at those Inostrancevias, no matter what science tells us. I think scientists only make those distinctions because they're interested in how we got where we are. They just need some method of labeling the steps."

Sanderson shifted on his log, then nodded. "Works for me," he asserted. "I mean, that's basically what I meant to show you all today: label, and see, some of the steps on the way towards what we call mammals." He craned his head back towards the class. "Did I do ok? How did you all like it?"

Most of the class gave perfunctory nods. A few were still swooning. The remainder were still cooing over the Dvinia family.

The professor's thin face relaxed into a smile. "That's great!" he enthused. "Because tomorrow, how cool is this, we're doing Lepidosaurs!" (5)

The response this time was slightly less enthusiastic. Sanderson didn't seem to notice.

"Well, that's it for today then," he concluded sunnily. "I'll see you all tomorrow at 10 a.m. Class dismissed!"

He raised his hand.

**Click.**



References

1) Palaeos, a great all-around paleontology site with a bunch of great graphics

2) The Therapsids!, an illustrated tour-de-force of the therapsid group

3) Pelycosaurs!, all about pelycosaurs

4) The big bad Inostrancevia, what it sounds like...

5) The Permian Slide Show!, a cute, concise, illustrated slide show of predominant Permian/Triassic life

6) BBC, the BBC's take on the Permian world

7) Theriodonts in Detail, a huge bunch of technical jargon concerning theriodonts with a few layman's terms thrown in for good measure

8) The Permian Extinction, an overview of the Permian extinction


Human Intelligence/ IQ Controversy
Name: Ramatu Kal
Date: 2003-09-28 22:21:19
Link to this Comment: 6656

Ramatu Kallon
9/29/03
Biology 103
Professor Grobstein

Human Intelligence/IQ Controversy

"Human intelligence is an eel-like subject: slippery, difficult to grasp, and almost impossible to get straight" 3) IQ and Human Intelligence, investigations on intelligence, by N.J. Mackintosh. Many scientists and psychologists have made numerous attempts to explain the development of human intelligence. For many years, there has been much controversy over what intelligence is and whether it is hereditary or nurtured by the environment. Webster's dictionary defines intelligence as "the ability to acquire and apply knowledge; which includes sensing an environment and reaching conclusions about the state of that environment" 7) Electronic Dictionary, an electronic dictionary. In this paper I am going to examine the factors which make up one's intelligence. I will be investigating whether intelligence is fostered by genetic inheritance or nurtured by one's environment.

"There can be of course no serious doubt that differences in environmental experiences do contribute to variation in IQ" 5) Flynn, J.R. "Trends over time: Intelligence, race, and meritocracy." Princeton University Press, 2000.
The environment is made up of the circumstances, objects, and conditions by which a human, animal, plant, or object is surrounded. It has been argued that the environment in a child's developing years could in fact be a factor that determines his or her IQ. In a study of adoptive and biologically related families, psychologists Scarr and Weinberg recognized that with children between 16 and 22 years of age, environment was more powerful in influencing IQ level in the young child than in the young adult. Scarr and Weinberg reasoned that "environment exerts a greater influence on children, who have little choice; as they age, diversity and availability of choices expands, and if these choices are at least partially determined by genetic factors, the influence of environment is thereby diminished."

Heritability is a term from population genetics. It refers to "the capability of being passed from one generation to the next" 3) Cognitive Psychology and its Implications, interesting ways of learning. Intelligence has for centuries been considered a fixed trait. A number of investigators have taken "an approach that intelligence is highly heritable, transmitted through genes" 3) Cognitive Psychology and its Implications, interesting ways of learning. Kinship studies have shown that the heritability of IQ is significantly less than 1.0, and recent attempts to model kinship correlations, especially in children, have agreed that IQ is influenced both by the child's parents and by the environment. Other factors such as parental affection, birth order, gender differences, experiences outside the family, accidents, and illnesses may also account for IQ.
The writer of Hereditary Genius, Francis Galton, developed a theory known as the "genius" theory. He thought human intelligence was hereditary under "limitations that required to be investigated" 2) Hereditability and Intelligence, a rich resource on intelligence and heritability. He tried to distinguish between these factors in several ways, culminating with his study of the life histories of twins, from which he concluded that the effect of nurture was very weak compared with that of nature. Continuing his investigation, Galton "measured resemblance between relatives; understanding of the genetic relatedness of monozygotic and dizygotic twins; and the accumulation of data on cross-fostered children and on twins raised apart" 2) Hereditability and Intelligence, a rich resource on intelligence and heritability. This allows the effect of heredity to be distinguished from that of shared family environment, and it allowed Galton to conclude, "Education and the environment produce only a small effect on the mind of any one, and that most of our qualities are innate" 2) Hereditability and Intelligence, a rich resource on intelligence and heritability.

The heritability of intelligence has been an extremely controversial topic. In fact, it has been so contentious that it is difficult to pinpoint how intelligence is actually formed. There is no direct answer to whether intelligence is affected by one's environment or one's inherited traits. As shown in this paper, many scientists and psychologists hold different views on the formation of intelligence. Some believe that intelligence is shaped by the environment, while others strongly believe that intelligence is hereditary. With further research, I hope to continue to explore the diversity of views surrounding the issue of intelligence. I also hope to come to a conclusion on whether intelligence is fostered by genetic inheritance or nurtured by the environment.

WWW Sources


1) http://citeseer.nj.nec.com/context/173701/0

2) http://www.indiana.edu/~intell/

3) http://www.oup.co.uk/isbn/0-19-852367-X

4) http://www.nature.com/nsu/021104/021104-7.html

5) Flynn, J.R. "Trends over time: Intelligence, race, and meritocracy." Princeton University Press, 2000.

6) Galton, Francis. Hereditary Genius. London: Macmillan and Company, 1892.

7) http://dictionary.reference.com/search?q=intelligence


Information for Plate Tectonics
Name: Vanessa He
Date: 2003-09-28 23:18:57
Link to this Comment: 6659


<mytitle>

Biology 103
2003 First Paper
On Serendip


"Viewed from the distance of the moon, the astonishing thing about the Earth, catching the breath, is that it is alive. Photographs show the dry, pounded surface of the moon in the foreground, dead as an old bone. Aloft, floating free beneath the moist, gleaming membrane of bright blue sky, is the rising earth, the only exuberant thing in this part of the cosmos. If you could look long enough, you would see the swirling of the great drifts of white cloud, covering and uncovering the half-hidden masses of land. And if you had been looking for a very long, geologic time, you would have seen the continents themselves in motion, drifting apart on their crustal plates, held afloat by the fire beneath." (1) These were the words spoken by Lewis Thomas, the U.S. Physician and author.

The story of Plate Tectonics is a fascinating story of continents drifting majestically from place to place breaking apart, colliding, and grinding against each other; of terrestrial mountain ranges rising up like rumples in rugs being pushed together; of oceans opening and closing and undersea mountain chains girdling the planet like seams on a baseball; of violent earthquakes and fiery volcanoes. Plate Tectonics describes the intricate design of a complex, living planet in a state of dynamic flux. (1)

Many forces cause the shape of the Earth to change over long time. However, the largest force that changes our planet's surface is the movement of Earth's outer layer through the process of plate tectonics. This process causes mountains to push higher and oceans to grow wider. The rigid outer layer of the Earth, the lithosphere, is made up of plates that fit together like a jigsaw puzzle. These solid but lightweight plates seem to "float" on top of a more dense, fluid layer underneath. (2)

Motions deep within the Earth carry heat from the hot interior to the cooler surface. These motions of material under the Earth's surface cause the plates to move very slowly across the surface of the Earth, at a rate of about two inches per year. (2) When two plates move apart, rising material from the mantle pushes the lithosphere aside. Two types of features can form when this happens. At mid ocean ridges, the bottom of the sea comes apart to make way for new ocean crust formed from molten rock, or magma, rising from the mantle. Continental rifts form when a continent begins to split apart (the East African Rift is an example). If a continental rift continues to split a continent apart it can eventually form an ocean basin. When two plates move towards each other, several features can form. Often, one of the plates is forced to go down into the hot asthenosphere at a subduction zone. Volcanoes may form when a subducted plate melts and the molten rock comes to the surface. If neither plate is subducted, the two crash into each other and can form huge mountains like the Himalayas. (3)

There are several different hypotheses to explain exactly how these motions allow plates to move. Powered by heat escaping from Earth's interior, these tectonic plates move ponderously about at varying speeds and in different directions atop a layer of much hotter, softer, more malleable rock called the asthenosphere. Because of the high temperatures and immense pressures found here, the uppermost part of the asthenosphere is deformed and flows almost plastically just beneath the Earth's surface. This characteristic of the asthenosphere allows the plates to inch along on their endless journeys around the surface of the earth, moving no faster than human fingernails grow. (1)

One idea that might explain the ability of the asthenosphere to flow is the idea of convection currents. When mantle rocks near the hot core are heated, they become less dense than the cooler, upper mantle rocks. These warmer rocks rise while the cooler rocks sink, creating slow, vertical currents within the mantle (these convection currents move mantle rocks only a few centimeters a year). This movement of warmer and cooler mantle rocks, in turn, creates pockets of circulation within the mantle called convection cells. The circulation of these convection cells could very well be the driving force behind the movement of tectonic plates over the asthenosphere. (1)

During Earth's 4.6 billion year history, the surface of our planet has undergone numerous transformations. These transformations have had a profound impact on the evolution of life on Earth. When plates move they carry living organisms along with them like passengers on a slow-moving ice floe. As a plate's relative position to the equator changes over time, organisms well adapted to a polar environment, for example, must either evolve through adaptations or perish as the plate migrates into a tropical environment. (1)

Did you ever wonder why elephants are only found in Africa and Asia? With plate tectonics as a guiding principle, the answer becomes moderately clear. As India broke away from Africa 20 million years ago it very likely ferried some unsuspecting elephants (along with many other organisms) northward to Asia. The Asian and African elephants have slight physical variations, but they are clearly cut from the same genetic mold. (1)

Another very interesting theory to emerge recently concerns, perhaps, the greatest of all mysteries - the origins of life on earth. The predominant theory held that life had its origins in warm ponds or similar small bodies of water protected from the harsh environment of the early earth and far from the escaping heat of the deep sea-floors. But now scientists have discovered organisms that thrive in these hellish conditions and appear to have been around long before the earliest organisms previously known. Could the hot vents at mid-ocean ridges have been the incubators of life on this planet? (1)

(1) www.platetectonics.com
(2) www.windows.ucar.edu/tour/link=earth/interior/plate_tectonics.html
(3) www.windows.ucar.edu/tour/link=earth/interior/lithospheric_motion.html&edu=high
(4) www.windows.ucar.edu/tour/links=/earth/interior/how_plates_move.html&edu=high


Symmetry? Could This be the Answer to the Age Old
Name: Patricia P
Date: 2003-09-28 23:38:22
Link to this Comment: 6660


<mytitle>

Biology 103
2003 First Paper
On Serendip

What attracts one person to another? The question is crucial as we consider the values of our society, the emphasis we put on physical beauty and beauty products, the new resurgence of weight loss wonder drugs and popular fad diets, not to mention a new reality TV show devoted to placing a new person under the knife for plastic surgery every week. All of these carry the same message: beauty is nearly synonymous with happiness. So then is the nature of "beauty" a philosophical conundrum, a biological issue, a psychological mind set, or a cultural problem? What are we attracted to, why are we attracted to it, and is there a ratio or specific definition of this beauty we are looking to attain?

Variations of this question are timeless, and without ever defining beauty, we are constantly attempting to achieve it. Hundreds of years ago the essence of beauty was a philosophical question. Plato was one of the first to conjecture that beauty may be due to what he called the "golden proportions." Plato went on to describe that the "width of an ideal face would be two-thirds its length, while a nose would be no longer than the distance between the eyes." (3) Although all of Plato's ideas were not entirely defendable, it was the first recognition that symmetry might play a part in what humans deem attractive.

Today we have taken on the task of beauty quite seriously. From a biological and psychological standpoint, we do believe that there are certain determinant factors in a person's attractiveness. Studies focusing on the effects of beauty are growing in number and recognition. For example, human infants prefer images of symmetrical patterns rather than nonsymmetrical ones. (5) Furthermore, babies also prefer looking at pictures of symmetrical people over pictures of those who were measured to be asymmetrical. (4) As people, we may not grow out of this preference for symmetry. When several faces are arranged to create a composite that is more symmetrical than the individual faces on their own, people find the composite to be more attractive than the individuals' pictures. (4) Studies such as these led to the production of a program known as FacePrints, "which shows viewers facial images of variable attractiveness. The viewers then rate the beauty pictures on a scale from one to nine. In what is akin to digital Darwinism, the pictures with the best ratings are merged together, while the less attractive photos are weeded out. Each trial ends when the viewer deems the composite a 10 - yes, beyond the normal scale." This program found that all photos voted Perfect 10's were super-symmetric. As Nancy Etcoff, author of Survival of the Prettiest: The Science of Beauty, explains, our sensitivity to beauty is hard-wired and shaped by natural selection: "We love to look at smooth skin, shiny hair, curved waists and symmetrical bodies because, over the course of evolution, people who noticed these signals and desired their possessors had more reproductive success." (4) This is not only a principle that is true for humans. Animals are more attracted to the most symmetrical of their species.
For example, scientists discovered that by clipping the symmetrical tail feathers of male swallows (making them asymmetrical), they were able to reduce the males' attractiveness to female swallows (reduce their sex life). (2) The consensus seems to be that symmetric individuals have a "higher mate-value." (4) One study concerning symmetrical and asymmetrical men found that women who made love to the most symmetrical men had orgasms 75% of the time during intercourse, while women who made love to the least symmetrical men had orgasms only 30% of the time. Furthermore, the most symmetrical men were more likely to ejaculate at the same time that their female partner was orgasming! In line with this study, symmetry may also indicate a higher chance of pregnancy during "symmetrical" sex. (2)

What began as the philosophical "golden proportions" has today been investigated to such a wild extent that Stephen Marquardt, a retired California plastic surgeon, has moved away from the medical aspects of beauty in order to study the mathematical. He feels that there is a common ratio that can be found among things that are commonly considered "beautiful" or attractive in nature (flowers, pine cones, seashells) and in human works (the Parthenon, Mozart's music, da Vinci's paintings). His research brought him to the mathematical finding of the "golden ratio," which is 1:1.618! (1) From this, Marquardt created a mask that applies his golden ratio to the face! The ratio between the width of the mouth and the width of the nose fits his ratio, and the mask can be used to allow plastic surgery to come as close to these proportions as possible. (1) Marquardt explains that, "A lot of this is Biology. It's necessary for us to recognize our species. Humans are visually oriented, and the mask screams, 'Human!'" (1) So then the question now is: can your looks be measured by a mathematical ratio? Could we all be our most attractive if our features fit this mask? Other aspects of science say NO.
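As a side note on the arithmetic (a standard derivation, not taken from Marquardt or the cited sources): the 1:1.618 figure is the classical golden ratio, which follows from its defining proportion.

```latex
% A length is divided in the golden ratio when the whole relates to the
% larger part as the larger part relates to the smaller:
% (a+b)/a = a/b = phi, which gives phi = 1 + 1/phi.
\[
  \varphi = 1 + \frac{1}{\varphi}
  \quad\Longrightarrow\quad
  \varphi^2 - \varphi - 1 = 0
  \quad\Longrightarrow\quad
  \varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618
\]
```

This is why the ratio is written 1:1.618 rather than as a simple fraction; it is irrational.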

Other arenas of science, as well as cross-cultural studies, caution us not to overgeneralize. John Manning of the University of Liverpool explains that "Darwin thought that there were few universals of physical beauty because there was much variance in appearance and preference across human groups." (4) For instance, we must consider that the rules of symmetry can be outweighed by many unique cultural preferences: Chinese men prefer women with disproportionately smaller feet, while many African tribal cultures prefer women with large discs inserted into their lips. Similarly, studies related to the "symmetry = beauty" theory argue that men are predisposed to desire a low waist-to-hip ratio (WHR). From a Darwinian perspective, this may be because women with high WHRs are more likely to suffer from "health maladies, including infertility and diabetes." However, people who have little contact with the Western world, in southeast Peru, actually have a preference for high WHRs. (4) Other quantitative studies also show that symmetry may not be the most important factor in what others view as beautiful. In one study, 70% of college students deemed an instructor physically attractive when he acted in a friendly manner, while only 30% found him physically attractive when he was cold and distant. (4) It seems as if a person's personality, and the frequency with which they smile, has a lot to do with how physically attractive they appear to others.

It is understandable that physical symmetry is subconsciously, as well as consciously, perceived as a sign of better health and even better strength and fertility. From a Darwinian approach, one must consider that a woman may be in search of a protector in great health to support the survival of her offspring, while a man may be in search of the healthiest woman to carry and support his offspring. Symmetry may also meet our innate desire to find order. We find order in nature, order in our own man-made works, and order in symbols. We may be looking to mate with what we innately see as the closest image of order, since our own existence is built on such a delicate principle of order and balance. However, it cannot be ignored, even in the midst of science, that many studies also show the need for human compassion and personality in order for a person to determine that they have found an appropriate mate. This also holds true from a Darwinian standpoint, as humans, along with many other creatures, conceive offspring in pairs and then raise those offspring together. Qualities such as personality, kindness, generosity and emotional stability are not just afterthoughts in the quest for a mate, but they may need to fight for their place next to the enormous power that physical beauty and symmetry may have on our choice of a mate.


References

1) USA Weekend.Com, The Beauty of Symmetry

2) Great Moments in Science, Beauty - Part One

3) Symbol of Beauty, An article on symmetry in nature and our relationship with it.

4) Looking Good: The Psychology and Biology of Beauty, An article on different approaches to beauty.

5) Beauty: Form and Symmetry


Synesthesia and the Implications of sensory fusion
Name: Shafiqah B
Date: 2003-09-29 01:41:05
Link to this Comment: 6663


<mytitle>

Biology 103
2003 First Paper
On Serendip


Synaesthesia and the Implications of Sensory Fusion
Synesthesia is defined as the sensation produced at a point other than or remote from the point of stimulation, as of a color from hearing a certain sound.[1]
(From the Greek, syn=together + aisthesis=perception.)
In common language, synesthesia is an involuntary blending of the senses experienced by some people, which allows them to see colors when looking at numbers, for instance.
This is a topic that was introduced over a century ago, but it was not taken seriously until recently, with the development of tests capable of determining whether or not the condition was real. Previously, scientists thought that it was a figment of the imagination, a product of drug abuse, or, in its most concrete form, a trick of memory: as if seeing a number paired with a color in early childhood, say, were the reason that a person paired them later on in life. There was also the theory that these people were simply very creative, and that when they said they could taste a shape, it was only an unconventional metaphor.
However, thanks to in-depth pursuit of this topic by scientists, especially Ramachandran and Hubbard, the validity of such statements has been demonstrated. One test they developed, to probe people's claims of pairing colors with the sight of ordinary numbers, involved printing up sheets with similar-looking numbers, like 2 and 5. Many people claimed to see a certain color when presented with the number 2 and a different color when shown 5.
The 2's and 5's were arranged in such a way that one number formed a distinct shape in the midst of the jumble of the other number. A non-synesthete would be incapable of distinguishing any pattern due to the close resemblance of the numbers. But in 90% of the cases where people claimed to see colors, they were easily able to discern the shape, because it stood out for them as a completely different color.
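
The layout of such a test can be sketched roughly as follows (an assumed arrangement for illustration, not the actual stimuli Ramachandran and Hubbard used): a triangle drawn in 2s is hidden in a field of 5s. A grapheme-color synesthete reportedly sees the 2s in a different color, so the triangle pops out immediately.

```python
# Build a 10x10 field of 5s with a triangle outline drawn in 2s.
# The triangle coordinates are made up for this sketch.
SIZE = 10
grid = [["5"] * SIZE for _ in range(SIZE)]

triangle = [(2, 4), (3, 3), (3, 5), (4, 2), (4, 6), (5, 1), (5, 7),
            (6, 1), (6, 2), (6, 3), (6, 4), (6, 5), (6, 6), (6, 7)]
for r, c in triangle:
    grid[r][c] = "2"

for row in grid:
    print(" ".join(row))
```

Printed in plain black text, the embedded triangle is hard to spot; if each digit carried its own color, it would be obvious at a glance.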
One wonders what takes place in the brain to cause such phenomenal differences in perception. The cause is not known for certain; like many things in the realm of science, it has not been researched nearly enough, but there are some indications.
The merging of certain senses points to a crossing of signals in the brain. Although the theory is an old one, it has come to the forefront of researchers' minds with increased focus on the topic. Perception of the five senses has a lot to do with the brain's lobes. The color receptors send their signals to the occipital lobe, which controls sight; sound goes through the temporal lobe and touch through the parietal. Although these lobes are distinct, they are located very close together, and there is even a juncture (the TPO) where signals from different lobes meet. Therefore, it would not be a leap of faith to assume that signals for color can get mixed up within the TPO, where numerical computation also takes place. This mix-up would be considered "higher" synesthesia, as opposed to "lower" synesthesia, which takes place in the occipital lobe, where both color and the visual appearance of numbers reside together. [2]
Many of the initial observations by scientists suggested that cross-wiring was a physical phenomenon in the brain, but upon deeper probing they have decided that it could be a chemical one as well. Some chemicals inhibit others and therefore cause a chain reaction, in which a region in the vicinity of an inhibiting region would also be affected. This could happen between distant brain regions as well, though nothing has been proven yet. If distant regions can affect one another, it might explain the intermingling of senses and perceptions whose centers are physically distant from one another in the brain.
What does all this technical stuff mean for someone like you and me? Well, I think that this subject will have a ripple effect on other currently unexplored topics that concern the brain. The author of one of my articles said that there could be a link between creativity and the mixing of senses, and that further research might reveal whether seemingly random perceptual commonalities in people with synesthesia actually have a logical connection. I became fascinated by the effect this topic could have on the arts. I am an English major and a poet.
What if one day the exact relationship between the senses and their mingling were discovered? Could I produce poetry that was edible? Could a person taste a rainbow, like they say in all those Skittles commercials? Would a greater understanding of other people be fostered if we could somehow touch their pain?
The last one is a big jump from the current evidence that is out there, but we can't think of science on a yearly, right-now immediate scale; we have to think in terms of decades or centuries. In the thirties, who would have thought that dishes could wash themselves?
Back to synesthesia: I found a fascinating poem by Arthur Rimbaud, in which he discusses the taste of purple and the smell of black, as reflected by blood and flies respectively. And he wrote in the 19th century. But the most interesting discovery I made came from a Hebrew medical faculty member by the name of Zvi Rosenstein. He traced synesthesia back 3313 years, to the moment when the Hebrews received the Torah on Mount Sinai. The Old Testament (Exodus 20:18) says that when the Ten Commandments were given, all of the people saw sounds and heard images.
How is that for everything in science being traced back to one?

Bibliography
http://www.doctorhugo.org/synaesthesia/SynZvi.html
http://www.doctorhugo.org/synaesthesia/dickinson.html
http://www.doctorhugo.org/synaesthesia/rimbaud.html
http://myweb.lmu.edu/mmilicevic/pers/exp-film.html
http://psyche.cs.monash.edu.au/v3/psyche-3-06-vancampen.html
http://www.thecure.com/robertpages/synaesthesiapiece.html
http://www.macalester.edu/~psych/whathap/UBWRP/synesthesia/history.html


Alzheimer's Disease: a hopeless battle?
Name: Charlotte
Date: 2003-09-29 02:03:52
Link to this Comment: 6664

<mytitle>

Biology 103
2003 First Paper
On Serendip

We all occasionally forget to take out the garbage, to brush our teeth, or the name of a person we have just met; eventually we realize, either instantaneously or slightly later, that we have forgotten, and we make sure to go back and take the garbage out, brush our teeth extra clean the following day, and ask for that person's name again. This thought process occurs in our brain, a very complex organ that allows us to think and remember these routine habits. As we get older, our brain changes along with the rest of our body; we start to forget on a more frequent basis and have more trouble remembering. This is a normal phenomenon in an aging person. However, many elderly people forget on a much larger scale and eventually lose the ability to care for themselves. Although these symptoms affect as much as half of the elderly population, the condition is considered abnormal; its leading cause, and the leading cause of dementia generally, is Alzheimer's disease.

Alzheimer's was named after the German physician Dr. Alois Alzheimer, who discovered the disease in 1906 in a middle-aged woman with dementia. After her autopsy, two abnormal brain structures were found that today are fundamental in understanding the progression of Alzheimer's disease. The disease basically consists of a gradual reduction in brain size as its nerve cells die one by one. This gradual process causes a person to suffer from dementia, which "is a brain disorder that seriously affects a person's ability to carry out daily activities" (9). Therefore, a person affected with Alzheimer's disease will gradually lose the ability to function by him or herself. The cause of the disease is not yet understood, but the actual process inside the brain has been identified and demonstrated. Basically, the onset of the disease is provoked by abnormal structures inside the cerebral cortex, which is the region of the brain responsible for our thought processes, memory, emotions and movement.
The first abnormality occurs when enzymes (an enzyme is "a substance that causes chemical reactions" (4)) cut a brain protein (amyloid precursor protein) into smaller fragments made up of another protein called beta-amyloid. These beta-amyloid fragments then stick together and form Alzheimer plaques. In addition to the plaques, a protein called tau, "which helps support [nerve] cell structure" (10), tangles and leads to the death of the cell. These are the two main abnormalities that have recurred in many patients suffering from Alzheimer's, but researchers have yet to find what initiates the process.

Alzheimer's progresses through different stages. The first stage affects the memory, but is too close to normal aging to determine whether or not it is the early stage of Alzheimer's. At the second stage, there is an increase in memory loss, as well as disorientation, changes in behavior and difficulty in handling daily chores. By the third stage, the patient is affected not only mentally but also physically: the ability to recognize, speak, and learn is impaired, and there is difficulty in getting up and controlling impulses. Anxiety is also a common trait at this stage. The last stage is the most severe, where the patient is both mentally and physically incapacitated. The patient loses weight, cannot swallow properly, no longer formulates sentences, moans, and spends most of his or her time sleeping. "At the end, patients may be in bed much or all of the time. Most people with Alzheimer's die from other illnesses, frequently aspiration pneumonia. This type of pneumonia happens when a person is not able to swallow properly and breathes food or liquids into the lungs" (8). Other times, the patient will die of another illness because the body becomes susceptible to getting sick very easily. Alzheimer's patients may live up to 20 years after being diagnosed, but the average is 8 to 10 years of life after the initial diagnosis.
There has been, and still is, extensive research into treatments for patients with Alzheimer's disease. Four different kinds of medication are administered to patients today. All four of them slow down the progression of the disease, reestablish parts of the memory, stabilize behavior shifts and bring back self-confidence. Early in the progression of the disease, the patient does realize that he or she is getting ill and is aware of the symptoms that occur on and off. This can be hard to accept and affects the morale of the patient. "Effective treatment of symptoms of Alzheimer's disease preserves patients' dignity and increases their comfort and independence" (7). Even though these treatments do not keep the disease from developing, they provide temporary relief. The families and the caregivers feel less overwhelmed and can have a more human rapport with the ill one. The patient using a treatment must also take it consistently, or else the disease will progress at an even faster rate. There are also other ways of treating patients with Alzheimer's. Vitamin E is commonly used because it prevents memory loss if taken in large doses: the usual dosage is 30 to 60 units, while Alzheimer's patients take one thousand units of vitamin E. There can be side effects, such as "bleeding and upset stomach" (7). There is also a Chinese herbal treatment, Ginkgo Biloba, which "is an over-the-counter herbal treatment alleged to improve memory, attention and other thinking functions" (7). This treatment has not been proven successful in preventing the disease, and researchers are still in the process of looking into its effects.

To the present day, a cure for Alzheimer's disease still has not been found. Over four million Americans are affected by this disease. Ten percent are age 60 and over and forty percent are age 85 and over. Alzheimer's disease is not only present in the United States, but also worldwide.
"It is estimated that by 2020, 30 million people will be affected by this devastating disorder worldwide and by 2050, the number could increase to 45 million" (11). The statistics show the rapid development of this disease, despite all the money and the research that has been put into towards finding a cure. Alzheimer's disease is a traumatic experience for both the patient and his or her close ones who are in direct contact and see the progressive changes, both physical and mental. There is an intense amount of care needed for the patient and can be very draining for the members of the family and the caregiver. It is one of those tragic situations where you are constantly battling for survival knowing that the ultimate outcome is the inevitable fatality of the disease.

References

WWW Sources

1)About the Human Brain

2)History of Alzheimer's Disease

3)What is Alzheimer's Disease?

4)The Causes

5)Symptoms

6)Diagnosis

7)Treatments

8)The Different Stages of Alzheimer's Disease

9)General Information

10)More Causes

11)Statistics


Why Stress Affects Everybody Differently
Name: Sarah Kim
Date: 2003-09-29 02:33:26
Link to this Comment: 6666


<Why Stress Affects Everybody Differently>

Biology 103
2003 First Paper
On Serendip

The word "stress" technically refers only to how our body reacts to stressors, different
external inputs. Many stressors are not inherently stressful. There are conscious and
unconscious things that occur in our inner world that determine whether a stressor in the external world will trigger our stress response, called mediating responses and
moderating factors. (1) Some stress is good for us and motivates us. But signs that stress has gone too far include emotional distress, sleep disturbances and difficulty concentrating. Scientific studies suggest that up to 85% of all health problems are related to stress. (2)

Stressors have 3 general categories: catastrophes, major life changes and daily hassles. Catastrophes are sudden, often life-threatening calamities or disasters that push people to the outer limits of their coping capability. These include natural disasters such as floods and earthquakes. Major life changes include death of a loved one, divorce, imprisonment, job loss and major disability. Daily hassles include everyday annoyances due to jobs, personal relationships and everyday living circumstances. (3)

Mediating processes and moderating factors determine how we react to an external stressor. One mediating process is appraisal. Stressors can be interpreted in different ways: as harm or loss, as threats, or as challenges. When appraising the situation, aspects such as how predictable and controllable a stressor is, whether it is stable or unstable, global or specific, and internal or external, affect how the individual will react to it. (5) If the event is judged to be uncontrollable, it will be more stressful; if it is more stable and global, people will react in a helpless manner; and if it is more internal, people will feel worse about themselves.

Another mediating process is coping. There are two main strategies of coping: problem-focused coping and emotion-focused coping. Problem-focused coping tries to manage and alter stressors and is more useful in situations in which a constructive solution can be found. Problem-focused coping strategies include confronting (changing a stressful situation assertively), planful problem solving (solving through deliberate, problem-focused strategies) and, most importantly, seeking social support. (5)

Emotion-focused coping tries to regulate the emotional responses to stressors and is more useful in situations in which the problem must be accepted. Some of these coping strategies include self-controlling, distancing, positively reappraising (finding positive meaning in a stressful experience by focusing on personal growth), accepting responsibility, and escaping/avoiding (often by drinking, overeating, using drugs, etc.). (5) The idea behind this mediating process of coping reappears later in this paper, when coping ability is treated as a personality trait in a study from the University of Utah.

Moderating factors, as well as mediating processes, influence the strength of individuals' stress responses induced by stressors. The main moderating factor is the personality traits of the individuals. Hardiness is a trait associated with stress resistance, which consists of control (belief in people that they can influence their internal states and behavior, influence their environment and bring about desired outcomes; the most important factor in hardiness), commitment, and challenge (the willingness in people to try new activities and change). (5)

Certain personality traits relate to how each individual reacts to stressors. Kristina DeNeve from the University of Utah reported a study on the relation of happiness to 137 individual personality traits. Two traits, out of 8 important traits, stood out as highly relevant to stress: tension (the tendency to experience negative emotions in response to stressors) and coping ability (hardiness, or the tendency to cope positively with stressors). More happiness is related to having a personality type which copes positively with stressors and lacks feelings of tension in response to them. (4)

The remaining 6 traits which were important in relation to stress and happiness include trust, emotional stability, desire for control, extraversion, locus of control-chance, and repressive-defensiveness. Locus of control-chance refers to the tendency to think that events happen by chance alone, while repressive-defensiveness is the tendency to avoid threatening information. (4) The trait most highly associated with happiness is repressive-defensiveness, which makes a good case for the old adage "ignorance is bliss."

There are several other personality traits described which are included in the moderating factors' influence on the strength of the stress response. An individual's affect is very relevant to their reaction to external stressors. A positive affectivity (extroversion) is associated with more enthusiasm and energy, leading to eustress. A negative affectivity (neuroticism) is associated with anxiety and depression, leading to distress. Also, optimism is associated with stress resistance and a lack of stress responses, such as depression. (5) In addition, an individual's self-esteem and power motivation help determine how much stress the stressor will cause.

People with certain personality traits seem to be physiologically overresponsive to stress, and therefore more vulnerable to heart disease. People traditionally called "Type A" have some poisonous traits, such as frequent reactions of hostility and anger, which negatively affect their ability to deal with stress. (3)

Another essential moderating factor is demographic variables. With age, individuals show less positive and negative affectivity. Ethnicity can also come into play, as well as socioeconomic and occupational status. The higher the status, the more self-esteem an individual tends to have, leading to better resistance to depression. In addition, gender has a big impact on reactions to stressors. For example, more women experience negative affectivity than men. Also, women use different coping strategies than men: women tend to use emotion-focused coping strategies, self-blame, seeking of social support, and wishful thinking, whereas men tend to use problem-focused coping strategies, planned and rational actions, personal growth, and humor. (5)

Other moderating factors which affect how strongly a person reacts to an external stressor include health habits, genetics and early family experiences, material resources, pre-existing stressors, and the ability to use coping skills. A healthy lifestyle, including a healthy diet, physical fitness, and adequate rest and relaxation, leads to resistance to stress. Features such as positive or negative affectivity, optimism, using active coping strategies, and relying on social support networks are partially inherited, while a sense of personal control, using denial as a coping strategy, and responding with anger and hostility are partially due to childhood familial experiences. (5) In general, with more money, there are more coping options available to an individual.

References

1)MSNBC Health article titled, "Stress: It's All In Your Head."

2) Kenny, Janet W. "Women's Stressors, Personality Traits, and Inner-Balance Strategies," April 2002.

3)Microsoft Encarta Online Encyclopedia 2003 article titled, "Stress (psychology)."

4)Current Directions in Psychological Science article titled, "Happy as an extraverted clam? The role of personality for subjective well-being."

5)"The Connection Between Stressors and Stress Responses."


Turning Back Time
Name: Enor Wagne
Date: 2003-09-29 04:19:44
Link to this Comment: 6667


<mytitle>

Biology 103
2003 First Paper
On Serendip

Progeria, an extremely rare disease caused by a slight genetic defect, afflicts roughly 1 in 4 million children (3). At the moment, there are twelve cases of Progeria in the US, and no more than one hundred have been reported around the world. While a child suffering from Progeria will appear to have no symptoms at birth, the telltale signs of the fatal disease begin to surface within a few months (1). The common first symptom is that the ends of the child's shoulder bones are re-absorbed into the body. Soon, he or she will be underweight and undersized for his or her age. Hair loss and dental decay follow, and the disease slowly eliminates body fat. Eventually the Progerian becomes afflicted by arthritis and takes on the appearance of a person five to ten times his or her age (6). On average, a Progerian will live to be thirteen; death is usually due to a cardiovascular event such as heart attack or stroke.

Over the past four years, a lot of progress has been made studying Progeria. Researchers have concluded that the cause of the disease is most likely a "single letter misspelling in the genetic code on a single chromosome, which is a coiled strand of DNA within the cell". Of twenty Progerians examined, eighteen were found to have the same genetic abnormality; the 19th case had a similar 'misspelling' in a nearby gene, and the 20th did not have "classic Progeria" (2). The gene found to be abnormal in eighteen of the cases is responsible for making a protein called lamin A. If this protein is defective, premature cell death occurs. The protein structures the inner layer of the membrane surrounding the nucleus. Each Progerian examined had misshapen nuclear membranes in fifty percent of their cells, whereas persons examined without the disease have misshapen nuclear membranes in approximately one percent of their cells.

Since the likely cause of Progeria has been traced to a glitch in the genetic code, doctors and research scientists anticipate a cure. "It's not inconceivable that a basic treatment for Progeria could happen in the next two to three years," speculates Dr. W. Ted Brown, an expert in the study of Progeria (2). Over $800,000 has been contributed to the Progeria Foundation. Because the disease is rare, most of the population was unaware that it even existed until articles were published in People Magazine, John Tackett (the oldest recorded survivor with Progeria, at age fifteen) (5) visited Maury Povich's daytime talk show, and CNN discussed its tragic repercussions (3). The supporters of the Progeria Foundation have been adamantly trying to make the public aware of its existence, in hopes of gaining support and donations towards a cure. However, if a cure is on its way, will the children be merely survivors of a terribly unfortunate illness, or will they become milestones in a new theory about evolution?

A genetic mishap's ability to fast-forward the process of aging is a scientific anomaly. Since kindergarten, we have been taught life's cycle: you are born, you become a toddler, child, teen, adult, have a midlife crisis, get old and wrinkly, and then you die. However, Progeria, with its one-gene abnormality, compresses all eight of those stages into thirteen years. What does this phenomenon say about the way we have regarded a natural human sequence?

People who lived in Roman times had a lifespan of twenty-five to thirty years. The same lifespan was approximated for those living in London in the 18th century (4). Over the 1900s, Americans added thirty years onto their lives, largely through the introduction of widespread pesticide use in agriculture and advances made in the medical field. Thanks to basic improvements in health and sanitation, it is estimated that a WWI-era baby in good physical shape can expect to live to be eighty years old. Does this increase in lifespan suggest that further progress will be made in the future to double or triple our current life expectancies? In the year 2300, will people live to be in their 200's?
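
The essay's own figures can be run through a naive linear extrapolation to see where the "year 2300" question comes from. The anchor years below are assumptions added for illustration, and real life-expectancy growth is not linear, so this is illustrative arithmetic only:

```python
# Two anchor points loosely based on the essay's figures: about 30 years
# in the 18th century (pinned here to 1800) and about 80 years for a
# WWI-era baby (pinned here to 1915). Both anchor years are assumptions.
def extrapolate(year, y0=1800, e0=30.0, y1=1915, e1=80.0):
    slope = (e1 - e0) / (y1 - y0)  # years of life gained per calendar year
    return e0 + slope * (year - y0)

print(round(extrapolate(2300)))  # 247 -- "in their 200's", as the essay asks
```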

Undeniably, societal adjustments would be made if the lifespan of the average person were dramatically shorter or longer. Our society practices laws that seem rational in proportion to the amount of time we live on this earth. But if we still lived to be only twenty-five, would alcohol still be legal only past the age of twenty-one? You would get four years to drink beer in your 'twilight years' and then you would die. The laws we practice and live by daily seem to imply an arbitrary moral code.

If the "misspelling" in genetic code associated with Progeria can be fixed by a doctor, allowing these children to slow down their process of aging, does that mean that a doctor could just as easily find a gene to tweek that would slow down the natural process of aging? Or is aging even natural at all?

Aging may very well be a puzzle in the midst of being solved. The evolution of man over time has, for the most part, provided a slightly changing pattern that runs well with its intention. However, diseases like Progeria seem to throw a monkey wrench into our understanding of human beings and time. While scientists may argue that the disease does not literally speed up the process of aging, and instead just replicates its signs - the visual evidence and anticipated cure debatably holds more weight. The negation of this rapid aging by simply correcting a typo in Progerians' genetic code would inadvertently mark a moment when technology caught up with evolution. This fix would allow human beings to manipulate evolution. Soon other alterations may be made to assess different problems which we had come to believe were 'natural'.

Is the evolution of man completely dependent on the evolution of knowledge and technology? By restraining ourselves to the confines of the theory of evolution, we may be neglecting to realize that there are exceptions to what we have come to know and believe is natural, even with evidence to the contrary before our eyes. Perhaps, if scientists became less skeptical of the seemingly irrational, progress would be made in areas of life which were never even questioned. Maybe the expression, "it's not the way you look, it's the way you feel," should be given a second thought. Little did we know that a burning contradiction to the theory of evolution could be found on the cover of a Hallmark birthday card.

References


1)Medlineplus, General information about health problems and diseases

2)Progeria Research Foundation, One of the few websites dedicated to the study of Progeria


3)Progeria Project, Provides articles and information about Progeria

4) Link from Berkeley University Website, Interesting facts about lifespan

5) USA Today, Article about Progeria

6) CNN Link from Homepage, Detail the health issues involved with Progeria


Sleep too much?
Name: Lindsay Up
Date: 2003-09-29 08:57:16
Link to this Comment: 6668


<mytitle>

Biology 103
2003 First Paper
On Serendip

As college students, we often complain that we have not gotten enough sleep on any given night. We drink copious amounts of caffeine in order to stay awake and finish that paper. Many times, we compensate for a lack of sleep at night by taking naps after (and sometimes during) our classes. This behavior might be recognized as "normal" by many teenagers and young people. However, many college-aged people suffer from sleep disorders. The most commonly recognized among these is insomnia, or the inability to obtain an adequate amount of sleep. But often overlooked and potentially harmful is hypersomnia. Although we rarely identify it as a negative condition, many of us actually get too much sleep.

Hypersomnia is defined as excessive daytime sleepiness and/or nighttime sleep. Humans sleep for an average of eight hours a night. Those with hypersomnia may find themselves sleeping for over ten hours at a time. (2) The most common symptoms are napping at inappropriate times, difficulty waking up, anxiety, irritability, restlessness and fatigue. Some more serious symptoms may include hallucination, loss of appetite, memory loss, or the inability to hear, see, taste, or smell things accurately. The disorder can have a profound effect on one's ability to cope in social situations. (1) There is a range of possible causes for the condition, but the primary cause is described as abnormalities that occur during sleep or abnormalities of specific sleep functions. (2)

Those with hypersomnia are generally diagnosed in one of four categories by a polysomnogram, which monitors a patient during one night of rest. (2)


Post-traumatic Hypersomnia is caused by trauma to the central nervous system, such as a head injury or a traumatic accident. This kind of hypersomnia may last for a span of a few days or an entire lifetime following such an incident.

Recurrent Hypersomnia consists of episodic periods of extended sleep followed by periods of normal sleep. The length of these episodes varies. Recurrent hypersomnia is caused by dysfunction of the hypothalamus.

Idiopathic Hypersomnia has no known cause and is the diagnosis most closely associated with the sleep disorder narcolepsy.

Normal Hypersomnia is seen in people who are commonly referred to as "long sleepers," those who require more than ten hours of sleep per night as a result of genetic predisposition. (2)


Hypersomnia shares some common symptoms with other sleep disorders: narcolepsy and sleep apnea. Narcolepsy consists of episodic "sleep attacks" during the daytime regardless of one's nighttime sleep. It resembles hypersomnia in the respect that many experience onset during teenage and young adult years. Sleep apnea is a condition which causes intermittent shortness of breath during sleep. It affects people of all ages but bears a resemblance to hypersomnia in that it is caused by an abnormality of respiratory function during sleep. Like normal hypersomnia, it also tends to run in families. (1)

A variety of lifestyle conditions may add to the habit of sleeping excessively. Hypersomnia may be a side effect of certain medications or of withdrawal from them. Alcohol and drug abuse, including caffeine, may play a part in extending sleep. (1) It is estimated that approximately five percent of the population can be diagnosed with some form of hypersomnia. However, the condition is greatly underreported because so many who have it do not realize that their excessive sleeping or napping behavior is abnormal. What is more, many do not realize what detrimental implications getting too much sleep can have for one's life. Primarily, hypersomnia seriously interferes with a normal schedule. One might miss large amounts of work, school, or other important activities. Secondly, many of the side effects of hypersomnia such as decreased concentration, anxiety, and memory loss all contribute to a diminished work or academic performance. (2) Treatments for those diagnosed with hypersomnia may include the prescription of stimulant or antidepressant medications. Patients are also advised to maintain a regular bedtime and waking time and to avoid alcohol and caffeine.

Doctors say that college students have about twice as many sleeping disorders as the total adult population. (4) It is not difficult to see how teenagers and young adults could become easy targets of hypersomnia. Students often find that their academic and social schedules make it impossible to maintain a regular sleeping schedule. The large quantities of caffeine, alcohol, tobacco, and other drugs that many students regard as "normal" can seriously interfere with sleeping patterns. In addition, the pressure many students face at college can contribute to anxiety and depression, both of which have profound impacts on sleep.

It is interesting that in the discourse of hypersomnia and other sleeping disorders we often discuss what "normal" and "abnormal" sleeping behaviors are. When do we regard snoring to be a sign of a sleeping disorder? Where can we draw the line and say that someone is getting too much sleep, when everyone's body is different and requires a different amount? We can point out specific things that people do in their sleep that might be considered warning signs: screaming, acting out dreams, convulsions and jerks. (3) However, most people experience all of these symptoms from time to time, so it is difficult to say where normal sleeping behavior ends and a disorder begins. It seems that perhaps we could say that one has a disorder when his or her sleep begins to interfere with everyday waking life.

The importance of getting enough sleep is often emphasized. However, details on how to maintain a healthy sleeping routine are rarely imparted to teenagers and young adults. Colleges should make sure to advise students to not only get enough sleep but to try and maintain a regular sleep schedule. Sleeping through class might seem like something everyone does from time to time. It is when sleeping too much becomes a habit that such behavior can become a lifelong concern.

World Wide Web Sources

1) National Institute of Neurological Disorders and Stroke Homepage, information about sleep disorders as related to neurology.

2) Talk About Sleep: Idiopathic Hypersomnia, an informational website about sleeping disorders, including a forum.

3) Bringing Secrets of the Night to the Light of Day, written by a doctor as a means of helping to identify abnormal sleeping behavior.

4) The Johns Hopkins Newsletter, science page, an article from Johns Hopkins University about college students and sleep disorders.


The Myth of the Five Senses
Name: Laura Wolf
Date: 2003-09-29 11:06:11
Link to this Comment: 6672

<mytitle>

Biology 103
2003 First Paper
On Serendip

We see with our eyes and taste with our tongues. Ears are for hearing, skin is for feeling and noses are for smelling. Would anyone claim that ears can smell, or that tongues can see? As a matter of fact, yes. Paul Bach-y-Rita, a neuroscientist at the University of Wisconsin at Madison, believes that the senses are interchangeable; for instance, a tongue can be used for seeing. This "revolutionary" study actually stems from a relatively popular concept among scientists: that the brain is an accommodating organ. It will attempt to carry out the same function, even when part of it is damaged, by redirecting the function to another area of the brain. In contrast to the earlier mainstream understanding that the brain is strictly compartmentalized, it is now more widely accepted that the individual "parts" of the brain could be somewhat interchangeable (1).

For the purpose of scientific exploration, are the sensory organs interchangeable as well? Could a nose function as an ear, for example? If the brain is what actually sees and the eyes serve only as information receptors, and if one could say the same about taste, smell, hearing and touch, then does it matter which external organ the sensory information is received by? Our external organs all act as receptors of the information (5), so can one type of receptor be replaced by another and still produce the same experience?

Bach-y-Rita's experiments suggest that "we experience the five senses, but where the data comes from may not be so important" (1). In the article "Can You See With Your Tongue?" the journalist was blindfolded with a small video camera strapped to his forehead, connected to a long plastic strip which was inserted into his mouth. A laptop computer would convert the video's image into a fewer number of pixels, and those pixels would travel through the plastic strip as electric current, reaching the grid of electrodes that was placed inside the man's mouth. The scientist told the man that she would soon be rolling a ball towards his right side, left side, or center, and he would have to catch it. And as the journalist stated, "my eyes and ears have no way to tell where it's going. That leaves my tongue... [which] has more tactile nerve endings than any part of the body other than the lips" (1). The scientist rolled the ball and a "tingling" passed over the man's tongue, and he reached out with his left hand and caught the ball.
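The conversion described above – a camera image reduced to a small number of "pixels" that drive a grid of electrodes on the tongue – amounts to downsampling by block averaging. The following sketch illustrates that idea only; the grid size, the 0–255 intensity scale, and the function name are assumptions for illustration, not details of the actual device used in the experiment.

```python
def image_to_electrode_grid(img, grid_h=12, grid_w=12):
    """Downsample a grayscale image (a 2D list of 0-255 values) to a
    coarse electrode grid by averaging each block of source pixels.
    The 12x12 grid size is an assumed, illustrative value."""
    h, w = len(img), len(img[0])
    bh, bw = h // grid_h, w // grid_w  # source pixels per electrode
    grid = []
    for gy in range(grid_h):
        row = []
        for gx in range(grid_w):
            # Average the block of pixels covered by this electrode.
            block = [img[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid
```

Each grid value would then set the stimulation strength of one electrode, so a bright ball moving across the camera's view becomes a patch of "tingling" moving across the tongue.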

If the brain can see a ball through a camera and a wet tongue, many new questions arise. What does this concept imply in terms of blindness and deafness? Rather than attempting to reverse these sensory disabilities through surgeries and hearing aids, should we be trying to circumvent them by using different receptors? Can we still trust in the idea of the five senses, or was it wrong to categorize our perception of the outside world so strictly?

In fact, the "five senses" may well be another story that should be discarded in lieu of new observation. Aside from the emerging possibility of interchanging a tongue and an eye, there is the highly accepted possibility that our original list of senses is incomplete. Many scientists would add at least these two senses to the list: the kinesthetic sense and the vestibular sense (2). The first is a sense of self, mostly in terms of limbs and their placement. For instance, I know where my right foot is without looking or feeling for it. It is something that my brain "knows". This is said to be because of information sent to the brain by the muscles, implying that muscles should be added to the list of sensory organs (although there have been cases where a human loses a limb, yet the sense of placement for it is still present in the brain). If more observations were to be collected on this subject, a more accommodating explanation could potentially be reached. Secondly, the vestibular sense is what most would consider a sense of balance.

Why were these two senses not included in our limited list? I tend to think it is the result of a lack of external symbolism. A nose or an eye is an obvious curiosity because of the question it generates: "What does this thing do?" But we have no limb or facial organ dedicated to balance or to kinesthetic awareness. They are examples of the more mysterious ways in which we experience life – and who is to say we will not find more examples to add to the list? In all probability we will uncover new senses that have been hiding just beyond our conscious realization.

On the other hand, if the vestibular sense and the kinesthetic sense occur solely in the brain, are they truly senses? Should experiences be labeled as senses without representation by an external organ? If one believes that the brain is the true sensory organ and the rest are simply interchangeable receptors, then yes, we should remain open to labeling many new "experiences" as "senses". But, is there perhaps an overlying truth that directly relates the five senses to the human experience of life? In other words, is there a reason that these external sensory organs seem "obvious" to us in our attempt to explain our perception of the world?

One way of gaining new insight is to explore the animal world of senses. Migrating animals, for example, are said to have a "sixth sense", a term which alludes to all unexplainable phenomena – ESP, psychic ability, yin eyes and other weirdly heightened forms of consciousness. In reality, what we call the sixth sense includes any number of unrelated senses that everyday humans do not possess and therefore know little about. Perhaps there is a sense of placement on the earth, similar to the kinesthetic sense of bodily placement, which helps animals return home. Perhaps it is simply a "sense of direction" that is more developed or more substantial than what humans possess. Scientists have even conjectured that traces of magnetite, found in pigeons and monarch butterflies, could be used as a compass, enabling the animal to sense the magnetic fields of the earth (3). Those who use the term "mysterious sixth sense" rarely give details about which of these strange abilities they are referring to – the term, meaning little more than "past our understanding", is used in such a sweeping, general way that there is no one solid, falsifiable hypothesis. This term does not bring us closer to our understanding of the senses.

In addition to internal mysteries, many animals also possess external sensory organs which we do not. Fish, for instance, have an organ that runs along the sides of their bodies called the lateral-line system. It is made of tiny hair-like sensors that receive information about movements in the water. There is even the ability to distinguish between ordinary, background movement and strange movement that could signify a predator or another creature. This sense also helps the fish to "orient themselves within the current and the stream flow" (4). Interestingly, "land vertebrates... lost their lateral-line systems somewhere along the evolutionary path, all vertebrates started out with them..." (4). Of course, we no longer consider this sense to be a human perception of life because we no longer possess the organ. But has the sense remained? Perhaps the feeling of being watched, of being followed on a dark sidewalk, is a dull shadow of the sense we used to possess. It is particularly noteworthy that this "feeling" of being followed is often referred to as "intuition". How is intuition related to senses? In the same sense, how are emotions and senses the same?

New stories that could expand our categorical concepts of the senses are emerging constantly, but we seem to prefer holding onto the old concept of five senses. I would urge expanding that category numerically and conceptually. There is much to be explored in terms of the relation of sense and emotion, the utilizations and disabilities of the senses, and a vertebrate's need for senses compared to other types of animals, in terms of participating in life. The interconnectedness of our senses within the brain and among the external organs is a concept worthy of more attention and exploration, and it will be explored more easily when the old, rather arbitrary myth of the five senses is discarded.

References

1) Discover Magazine Online, go to the article "Can You See With Your Tongue?"

2) An article that clearly states there are seven senses.

3) An article about migration and possible explanations.

4) Discover Magazine Online, go to the article "A Fish's Sixth Sense"

5) Sensory Receptors, a very informative site about sensory organs as receptors, and other scientific explanation of the senses.


ESP: An Effort to Quantify the Magical
Name: Nomi Kaim
Date: 2003-09-29 13:41:39
Link to this Comment: 6673


<mytitle>

Biology 103
2003 First Paper
On Serendip

A self-conscious girl has a feeling of being watched in class and spins in her chair; indeed, from the back of the room, a curious admirer is following her every move. A woman randomly contemplates an old friend with whom she long ago lost contact; that evening, the friend calls with important news. A man wakes up with a sinking feeling about his day and decides to skip work; later he hears of the disastrous crash of the train he rides each morning. A retarded boy who cannot count correctly states the number of cards dropped on a laboratory floor. (1) A handful of people, perhaps more (and I among them), dream of crashing airplanes and crumpling buildings in the days before the twin towers of Manhattan collapse. (2)

What is going on here?

Extrasensory perception. The term has acquired a reputation, among many Westerners, for deception, perhaps in part due to the hordes of pseudo-"psychics" and "fortune tellers" who claim to see into what they cannot. Even the term used is under debate: intuition, clairvoyance, telepathy, telekinesis, extrasensory perception (ESP), and the layman's "sixth sense" all describe uncanny, seemingly-coincidental human insights, happenings we cannot attribute to what we know of ordinary science and hence refer to as "paranormal" (next to normal). (3) Some would call such events supernormal, even occult. And is it any wonder?

The phenomenon of ESP transcends our knowledge of the human senses. In fact, its definition is, essentially, the ability to perceive accurately something the five senses cannot detect. (4) We do not, after all, have eyes in the backs of our heads. We cannot see through solid piled-up cards to count them (especially if we can't count). We do not see, or hear, others' thoughts. Our eyes cannot see events that happen across great distances. We cannot see, or hear, or touch the future.

Yet these things happen. Clinical tests show that certain people have the ability to describe figures on a card being held by a person in another room. Such tests repeatedly yield results whose probabilities of being "lucky guesses" are one against ten-to-the-umpteenth power (i.e., 1:1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000). (5) Hardly attributable to chance! Indigenous peoples, particularly shamans (tribal healers), have claimed for years to know how to enter trance states in which they perceive animals or people who are far away – or dead. (4) And, while clairvoyance generally involves interactions between two or more living entities, some have been known to use such "superpowers" to locate objects – such as water with a stick. (3) Though theories range from the scientific to the fantastical, we can say really very little to explain these curious phenomena. (6)
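The astronomical odds quoted above can be made concrete with a short calculation. Assuming a standard card-guessing setup of the kind used in such tests – 25 trials with a 1-in-5 chance per guess, parameters not given in the source and assumed here for illustration – the probability of scoring at least k correct by luck alone is a binomial tail sum:

```python
from math import comb

def p_at_least(k, n=25, p=0.2):
    """Probability of at least k correct guesses out of n trials,
    when each guess succeeds by chance with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Chance expectation is n*p = 5 correct per run of 25; scores far
# above that quickly become astronomically unlikely by luck alone.
```

Guessing all 25 correctly by chance, `p_at_least(25)`, is 0.2**25, on the order of 10^-18; repeated high scores across many runs multiply such probabilities, which is how figures like the one quoted above arise.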

The investigation of the "paranormal" is plagued by an unfortunate, though inevitable, facet of scientific and human inquiry: the "I wouldn't have seen it if I hadn't believed it" phenomenon. In this case, it's more "I don't believe it, so I can't see it." To a great extent, we see what we believe can logically be there and overlook, or justify away, the rest. So, many pragmatic modern thinkers either deny the reality of instances of ESP, or attribute them to chance alone. (2) But it's there, even if you're not looking, and repeated results that would happen by chance only one time in trillions must, rationally (if these thinkers are so rational!) be due to something else. And resisting the evidence for clairvoyance amounts to the same thing as ignoring the fossil evidence for evolution in blind favor of creationism: it stops science in its tracks, lodges society in traditional views that are swiftly losing their foundations. Clearly, in order to investigate the workings of something, we must first suspend our judgments and assume that the something exists!

As of yet, even the most radical scientific thinkers have been unable to "prove" (to the satisfaction of the public) the existence of ESP. (2), (7) We modern Westerners like to see hard-core data before we will believe something. We want a what, a how and a why that will hold together; a clear, sequential plan we can easily follow and conceptualize. Observing that something does, in fact, exist does not satisfy us if we cannot understand it; it has to make sense for us scientifically as well as socially. ESP isn't there yet. And the definitions for terms describing paranormal phenomena seem disconcertingly vague – written in terms of what ESP is not (explainable using the normal senses) rather than what it is – because our understanding of these phenomena remains imprecise.

The "why" of ESP seems simple enough. Evolutionarily speaking, people who could perceive or foresee bad events like predator attacks or earthquakes (and thus avoid them), or sense lucky breaks like clean food or water (and thus seek them out) would be more likely to survive and pass their genes on to the next generation than those who could not. Having access to information beyond the limits of the basic human senses must have been a real survival advantage.

The "what" and the "how" do not come so easily. Though we recognize (or ought to) the occurrence and recurrence of "uncanny coincidences," what is really going on in these cases? What do we know about the scientific mechanisms of ESP? Nothing is for certain, but we have some clues. Scientists have identified two previously unknown pits, called the vomeronasal organ (or VNO) after their location – in the nose ("nasal"), behind the thin vomer bone which separates the nostrils ("vomero"). (7) The VNO appears to contain nerves that can detect pheromones. Pheromones are chemicals that trigger hormonal changes and instinctive (non-cognitive) behaviors – and we humans used to associate them with non-human animals only. (7), (8), (9) Then, gradually, people began to find that pheromones played a role in biologically-engrained behaviors like sexual attraction and menstrual synchronization. (7), (8) Still, we do not know if, or to what extent, humans really use those pheromone-detecting nerves in their VNOs. The entire vomeronasal organ could be vestigial, like the appendix. It might or might not be implicated in accounts of ESP; even if it is, we do not know how. (7) Some scientists suggest that the VNO is wired to the brain's pineal gland, which lies in the amygdala, a very primitive part of the brain known for perpetuating biological instincts. (4),(9) If an organ is linked to the brain, it is more likely to be active, for the brain is active. It sounds good, but... Nevertheless, the research is still in its early stages, the data inconclusive. Can humans detect pheromones? Consciously, or only unconsciously? Through the VNO, or by means of some other pathway? Where, exactly, do we produce pheromones? Do they even play a role in extrasensory perception? What role, exactly, and how does it work? We have a lot to learn about the biology (within a person's body) as well as about the physics (between people's bodies) of ESP.

Most likely, ESP falls among the instinctual, nonscientific, not-quite-cognitive behaviors rooted in human beings' past. After all, ESP makes perfect survival sense in the long-ago reaches of our evolution, even if our modern world of paved cities, protective walls, super-human technologies and few predators has fewer needs for life-saving extrasensory perception. The alternative medical practices, such as shamanism, of indigenous healers tap into the paranormal more than do the pills and behavioral therapies of our modern world. (4) And the tribal people have been here longer. Another key to our evolutionary history, human fetal development, supports the theory of decreased dependence on ESP over time. In the early stages of its development, a fetus has a large, defined vomeronasal organ; the VNO, along with the gill slits and tail, shrinks with time but, unlike the gill slits and tail, does not disappear entirely. (7) This observation begs the question of whether or not we really use our VNOs in ESP. If the organ still exists, is it necessarily functional? Well, is the appendix functional?

In their search to qualify and quantify the elusive quality of extrasensory perception, scientists have tended to assume that ESP involves one sense, a sense separate from the original five and far more difficult to pin down. (If this is the case – if ESP is just another sense – then the term "extrasensory perception" is inaccurate. We might have to modify our vocabulary as our observations increase.) Yet I wonder if the "ESP sense" really exists separately from the other senses. We know full well that our senses help each other out. For instance, we rely on vision to hear better (reading lips and gestures) and even to feel things better (knowing what we are touching guides the sensations we use to describe it). We cannot taste well at all without our sense of smell. We can't separate these senses just because they appear to involve different body parts! Perhaps, then, the ESP sense is a combination of various other senses, or of other senses and an unidentified sixth sense (which may or may not involve the vomeronasal organ). It seems that the interconnectedness of senses is such that a blind or deaf person might not experience ESP in the same way, and a person deprived of her ESP sense might not see or hear in the same way. In effect, ESP might strengthen to compensate for another sense that is missing, in the way that blind people develop especially acute hearing. To this effect, I know of retarded children who, lacking in cognitive and linguistic skills, possess noticeably heightened emotional awareness and intuition. The body and mind will, I believe, try to compensate for their losses by making gains in other areas as they strive toward a unified, functional whole.

I have just one more question, but it is a looming one: what about precognition? Can some people "see" or "feel" their way into the future, into events that have not yet happened? Countless happenings – including the pre-September-11th dreams – suggest that they can. While some pop-cultural precognitions, like those of most fortune tellers or of Nostradamus, are vague and general enough to be read in any way and so should probably be discounted, some are inarguably accurate. A horrifying dream: a crashing airplane, a tall burning building, people screaming and falling. Just coincidence? No. There is something here that is real.

Perhaps precognition really involves "reading" the minds of people across space, "seeing into" the current thoughts or plans of an old friend in England or an evil ruler in Afghanistan, plans that are later manifested as actions. In this case, precognition would not involve seeing through time so much as seeing through space. But what about those episodes of ESP that occur between a human being and a non-living object? If a person predicts the falling of a meteor, can he be described as "reading" the "plans" of a non-planning, non-conscious object? If people really can see into the future, this uproots our typical linear conception of time; it suggests that what has not yet happened here is already happening somewhere, enabling the clairvoyant among us to "see" it from a distance. These are not only biological but also physical questions: how do our minds interact with other minds and objects across space and time? Science is only just beginning to address the inexplicable forces that religions have embraced and explained for centuries.


References


1) Sacks, Oliver. The Man Who Mistook His Wife for a Hat and Other Clinical Tales. Touchstone Books, 1998.

2) Think someone's staring at you? 'Sixth sense' may be biological, review of a recent book, Sixth Sense, by Rupert Sheldrake

3) What Lies Behind Clairvoyance?, in-depth investigation of clairvoyance across the ages, by an early-twentieth-century Natural History professor

4) Clairvoyance, a thorough and intriguing, if somewhat unsupported, description of paranormal phenomena, to be taken with a few grains of salt

5) Home Page for Uri Geller, Modern Psychic, a spirited introduction to the fantastic claims of a man famous for bending spoons with his mind

6) Students at UC-Berkeley Search for a 'Sixth Sense', highly readable, but not very in-depth, description of ESP experiments conducted by students

7) Scientists Find Evidence for a Sixth Sense in Humans, thoroughly describes recent scientific findings that support the theory of ESP

8) Sixth sense detects pheromones, U. of C. researchers show, discussion of the roles pheromones might play in human psychology

9) Jacobson's Organ and the Sixth Sense: Human Extrasensory Perception?, very useful tool describing human ESP in the context of other animals' senses, and including links to definitions and descriptions of ESP-related terms

Not Cited) Sixth Sense: the Vomeronasal Organ, web paper on the VNS by an alumnus of our very own Serendip web page


ESP: An Effort to Quantify the Magical
Name: Nomi Kaim
Date: 2003-09-29 13:41:47
Link to this Comment: 6674


<mytitle>

Biology 103
2003 First Paper
On Serendip

A self-conscious girl has a feeling of being watched in class and spins in her chair; indeed, from the back of the room, a curious admirer is following her every move. A woman randomly contemplates an old friend with whom she long ago lost contact; that evening, the friend calls with important news. A man wakes up with a sinking feeling about his day and decides to skip work; later he hears of the disastrous crash of the train he rides each morning. A retarded boy who cannot count correctly states the number of cards dropped on a laboratory floor. (1) A handful of people, perhaps more (and I among them), dream of crashing airplanes and crumpling buildings in the days before the twin towers of Manhattan collapse. (2)

What is going on here?

Extrasensory perception. The term has acquired a reputation, among many Westerners, for deception, perhaps in part due to the hoards of pseudo-"psychics" and "fortune tellers" who claim to see into what they cannot. Even the term used is under debate: intuition, clairvoyance, telepathy, telekinesis, extrasensory perception (ESP), and the layman's "sixth sense" all describe uncanny, seemingly-coincidental human insights, happenings we cannot attribute to what we know of ordinary science and hence refer to as "paranormal" (next to normal). (3) Some would call such events supernormal, even occult. And is it any wonder?

The phenomenon of ESP transcends our knowledge of the human senses. In fact, its definition is, essentially, the ability to perceive accurately something the five senses cannot detect. (4) We do not, after all, have eyes in the backs of our heads. We cannot see through solid piled-up cards to count them (especially if we can't count). We do not see, or hear, others' thoughts. Our eyes cannot see events that happen across great distances. We cannot see, or hear, or touch the future.

Yet these things happen. Clinical tests show that certain people have the ability to describe figures on a card being held by a person in another room. Such tests repeatedly yield results whose probabilities of being "lucky guesses" are one against ten-to-the-umpteenth power (i.e.,1:1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000). (5) Hardly attributable to chance! Indigenous peoples, particularly shamans (tribal healers), have claimed for years to know how to enter trance states in which they perceive animals or people who are far away –- or dead. (4) And, while clairvoyance generally involves interactions between two or more living entities, some have been known to use such "superpowers" to locate objects – such as water with a stick. (3) Though theories range from the scientific to the fantastical, we can say really very little to explain these curious phenomena. (6)

The investigation of the "paranormal" is plagued by an unfortunate, though inevitable, facet of scientific and human inquiry: the "I wouldn't have seen it if I hadn't believed it" phenomenon. In this case, it's more "I don't believe it, so I can't see it." To a great extent, we see what we believe can logically be there and overlook, or justify away, the rest. So, many pragmatic modern thinkers either deny the reality of instances of ESP, or attribute them to chance alone. (2) But it's there, even if you're not looking, and repeated results that would happen by chance only one time in trillions must, rationally (if these thinkers are so rational!) be due to something else. And resisting the evidence for clairvoyance amounts to the same thing as ignoring the fossil evidence for evolution in blind favor of creationism: it stops science in its tracks, lodges society in traditional views that are swiftly losing their foundations. Clearly, in order to investigate the workings of something, we must first suspend our judgments and assume that the something exists!

As of yet, even the most radical scientific thinkers have been unable to "prove" (to the satisfaction of the public) the existence of ESP. (2), (7) We modern Westerners like to see hard-core data before we will believe something. We want a what, a how and a why that will hold together; a clear, sequential plan we can easily follow and conceptualize. Observing that something does, in fact, exist does not satisfy us if we cannot understand it; it has to make sense for us scientifically as well as socially. ESP isn't there yet. And the definitions for terms describing paranormal phenomena seem disconcertingly vague – written in terms of what ESP is not (explainable using the normal senses) rather than what it is – because our understanding of these phenomena remains imprecise.

The "why" of ESP seems simple enough. Evolutionarily speaking, people who could perceive or foresee bad events like predator attacks or earthquakes (and thus avoid them), or sense lucky breaks like clean food or water (and thus seek them out) would be more likely to survive and pass their genes on to the next generation than those who could not. Having access to information beyond the limits of the basic human senses must have been a real survival advantage.

The "what" and the "how" do not come so easily. Though we recognize (or ought to) the occurrence and recurrence of "uncanny coincidences," what is really going on in these cases? What do we know about the scientific mechanisms of ESP? Nothing is certain, but we have some clues. Scientists have identified two previously unknown pits, called the vomeronasal organ (or VNO) after their location: in the nose ("nasal"), behind the thin vomer bone which separates the nostrils ("vomero"). (7) The VNO appears to contain nerves that can detect pheromones. Pheromones are chemicals that trigger hormonal changes and instinctive (non-cognitive) behaviors, and we humans used to associate them with non-human animals only. (7), (8), (9) Then, gradually, people began to find that pheromones played a role in biologically-ingrained behaviors like sexual attraction and menstrual synchronization. (7), (8) Still, we do not know if, or to what extent, humans really use those pheromone-detecting nerves in their VNOs. The entire vomeronasal organ could be vestigial, like the appendix. It might or might not be implicated in accounts of ESP; even if it is, we do not know how. (7) Some scientists suggest that the VNO is wired to the brain's pineal gland, near the amygdala, a very primitive part of the brain known for perpetuating biological instincts. (4), (9) If an organ is linked to the brain, it is more likely to be active, for the brain is active. It sounds plausible, but the research is still in its early stages and the data inconclusive. Can humans detect pheromones? Consciously, or only unconsciously? Through the VNO, or by means of some other pathway? Where, exactly, do we produce pheromones? Do they even play a role in extrasensory perception? What role, exactly, and how does it work? We have a lot to learn about the biology (within a person's body) as well as about the physics (between people's bodies) of ESP.

Most likely, ESP falls among the instinctual, not-quite-cognitive behaviors rooted in human beings' evolutionary past. After all, ESP makes perfect survival sense in the long-ago reaches of our evolution, even if our modern world of paved cities, protective walls, super-human technologies and few predators has less need for life-saving extrasensory perception. The alternative medical practices, such as shamanism, of indigenous healers tap into the paranormal more than do the pills and behavioral therapies of our modern world. (4) And these tribal peoples have been here longer. Another key to our evolutionary history, human fetal development, supports the theory of decreased dependence on ESP over time. In the early stages of its development, a fetus has a large, defined vomeronasal organ; the VNO, along with the gill slits and tail, shrinks with time but, unlike the gill slits and tail, does not disappear entirely. (7) This observation raises the question of whether or not we really use our VNOs in ESP. If the organ still exists, is it necessarily functional? Well, is the appendix functional?

In their search to qualify and quantify the elusive nature of extrasensory perception, scientists have tended to assume that ESP involves one sense, a sense separate from the original five and far more difficult to pin down. (If this is the case, if ESP is just another sense, then the term "extrasensory perception" is inaccurate. We might have to modify our vocabulary as our observations increase.) Yet I wonder if the "ESP sense" really exists separately from the other senses. We know full well that our senses help each other out. For instance, we rely on vision to hear better (reading lips and gestures) and even to feel things better (knowing what we are touching guides the sensations we use to describe it). We cannot taste well at all without our sense of smell. We cannot separate these senses just because they appear to involve different body parts! Perhaps, then, the ESP sense is a combination of various other senses, or of other senses and an unidentified sixth sense (which may or may not involve the vomeronasal organ). The interconnectedness of the senses is such that a blind or deaf person might not experience ESP in the same way, and a person deprived of her ESP sense might not see or hear in the same way. In effect, ESP might strengthen to compensate for another sense that is missing, in the way that blind people develop especially acute hearing. To this effect, I know of children with cognitive disabilities who, lacking linguistic skills, possess noticeably heightened emotional awareness and intuition. The body and mind will, I believe, try to compensate for their losses by making gains in other areas as they strive toward a unified, functional whole.

I have just one more question, but it is a looming one: what about precognition? Can some people "see" or "feel" their way into the future, into events that have not yet happened? Countless happenings, including the dreams reported before September 11th, suggest that they can. While some pop-cultural precognitions, like those of most fortune tellers or of Nostradamus, are vague and general enough to be read in any way and so should probably be discounted, some are inarguably accurate. A horrifying dream: a crashing airplane, a tall burning building, people screaming and falling. Just coincidence? No. There is something here that is real.

Perhaps precognition really involves "reading" the minds of people across space, "seeing into" the current thoughts or plans of an old friend in England or an evil ruler in Afghanistan, plans that are later manifested as actions. In this case, precognition would not involve seeing through time so much as seeing through space. But what about those episodes of ESP that occur between a human being and a non-living object? If a person predicts the falling of a meteor, can he be described as "reading" the "plans" of a non-planning, non-conscious object? If people really can see into the future, this uproots our typical linear conception of time; it suggests that what has not yet happened here is already happening somewhere, enabling the clairvoyant among us to "see" it from a distance. These are not only biological but also physical questions: how do our minds interact with other minds and objects across space and time? Science is only just beginning to address the inexplicable forces that religions have embraced and explained for centuries.


References


1) Sacks, Oliver. The Man Who Mistook His Wife for a Hat and Other Clinical Tales. Touchstone Books, 1998.

2) Think someone's staring at you? 'Sixth sense' may be biological, review of a recent book, Sixth Sense, by Rupert Sheldrake

3) What Lies Behind Clairvoyance?, in-depth investigation of clairvoyance across the ages, by an early-twentieth-century Natural History professor

4) Clairvoyance, a thorough and intriguing, if somewhat unsupported, description of paranormal phenomena, to be taken with a few grains of salt

5) Home Page for Uri Geller, Modern Psychic, a spirited introduction to the fantastic claims of a man famous for bending spoons with his mind

6) Students at UC-Berkeley Search for a 'Sixth Sense', highly readable, but not very in-depth, description of ESP experiments conducted by students

7) Scientists Find Evidence for a Sixth Sense in Humans, thoroughly describes recent scientific findings that support the theory of ESP

8) Sixth sense detects pheromones, U. of C. researchers show, discussion of the roles pheromones might play in human psychology

9) Jacobson's Organ and the Sixth Sense: Human Extrasensory Perception?, very useful tool describing human ESP in the context of other animals' senses, and including links to definitions and descriptions of ESP-related terms

Not Cited) Sixth Sense: the Vomeronasal Organ, web paper on the VNS by an alumnus of our very own Serendip web page


The Truth about Schizophrenia
Name: Alice Gold
Date: 2003-09-29 14:11:43
Link to this Comment: 6675



Biology 103
2003 First Paper
On Serendip

Schizophrenia is a mental disorder that affects one in every one hundred people worldwide (2). It is defined as a psychotic disorder usually characterized by withdrawal from reality, illogical patterns of thinking, delusions, and hallucinations, and accompanied by various degrees of emotional, behavioral, and intellectual disturbances. There are numerous myths associated with schizophrenia concerning what it is, its causes, and the behavior of those suffering from it. However, it is time to put these ideas to rest.

There tend to be huge misconceptions concerning the causes of schizophrenia and the actions of those suffering from this disease. It should be known that this disease is not a form of demonic possession, nor is it caused by evil spirits or witchcraft. Although schizophrenia translates from Greek as "split mind," it has been established that those diagnosed with the disease do not have split personalities (3). It is also a myth that people with schizophrenia are more likely to be violent. In general, people suffering from schizophrenia, as with any other mental illness, are no more dangerous than healthy individuals (1). Though people with schizophrenia show a slightly elevated rate of violent crime, those involved are usually not receiving proper treatment. Schizophrenia usually strikes people in their prime: generally, men are affected between the ages of sixteen and twenty, whereas women are affected between the ages of twenty and thirty (1).

Schizophrenia is not only an inherited disease but is also considered genetically complex. Scientists say that an environmental "trigger" must be present as well to bring on the disease. Possible triggers include complications during the mother's pregnancy or delivery, in addition to prenatal exposure to viruses, specifically in the fifth month, when much brain development occurs (1). It is believed that complications during pregnancy or delivery increase the threat of the disease, most likely through damage to the developing brain.

There are other factors at hand when determining the causes of a disease such as schizophrenia. In terms of biochemistry, sufferers of the disease appear to have what is referred to as a neurochemical imbalance; current medications for schizophrenia accordingly target three different neurotransmitter systems: dopamine, serotonin, and norepinephrine (2). Another factor is the pattern of blood flow in the brain. Schizophrenics tend to have difficulty "coordinating" activity between various areas of the brain. For example, when thinking or speaking, most people demonstrate increased activity in the frontal lobes and a lessening of activity in the area of the brain used for listening. People with schizophrenia show the same increase in frontal-lobe activity, but no corresponding decrease of activity in other areas (2). Thirdly, molecular biologists have found that schizophrenics have an irregular pattern of certain brain cells (2). Given that these cells develop before birth, this discovery further supports the idea that schizophrenia originates during the prenatal period.

Over the years, as families try to cope with the reality of having a loved one suffer from schizophrenia, they have identified various early warning signs that correspond with the illness. These signs include unexpected hostility, depression, and a flat, reptile-like gaze (2). Others have noted bizarre behavior, noticeable social withdrawal, and drug and alcohol abuse. Additional warning signs include the inability to express joy or to cry, among several others (2).

Though the severity of cases of schizophrenia varies, it has been shown that medication, taken properly, makes a huge difference. Schizophrenics should not be labeled as crazy, hostile, or unpredictable. In most cases, they tend to be more harmful to themselves than to other people, and do not need to be feared. With proper treatment, they can live as normally as anybody else. Misconceptions can be extremely dangerous, so in cases of uncertainty, it is never hard to get the facts.

References


1)What Causes Schizophrenia?

2)Schizophrenia: Get the Facts, a very helpful site

3)Schizophrenia


How the Pendulum Swings: The nature-nurture debate
Name: Su-Lyn Poo
Date: 2003-09-29 14:15:22
Link to this Comment: 6676



Biology 103
2003 First Paper
On Serendip



     One of the most intriguing science-and-culture debates of the twentieth century is that of the origin of behavior. The issue, which has its roots in biology and psychology, is popularly framed as the "nature versus nurture" debate. At different points in time, consensus has swung from one to the other as the supposed cause of our actions. These changes are not only the result of an internal dynamic but were subject (as they are today) to external influences, most notably politics and developments in other academic disciplines. The oversimplified polarities in this case study illustrate an important characteristic of the larger scientific process: such polarities are the necessary stepping stones in the search for a more refined theory, in the attempt to get it 'less wrong'.



      Historical developments of a political nature have had a significant impact on the way the nature-nurture debate developed. Social Darwinism is a doctrine based on genetic determinism and natural selection, advocating a laissez-faire capitalist economy and promoting eugenics, racism and the inherent inequality of such a society. Extending Darwin's theory of evolution to social thought and political philosophy, this biologically-deterministic view culminated in the extremism of Nazi Germany. After the horrors of World War II, the debate swung in favor of "nurture", with American psychologists taking up a rhetoric of environmental influences on behavior, emphasizing the learning process. In turn, the European school of ethology arose in opposition to the environmentalists, focusing on innate behaviors (that is, their genetic origins). While this divergence was eventually resolved, according to Barlow (1991)1 the subsequent development of sociobiology was subject to disagreements that were far more political in nature. Its proponent, biologist Edward O Wilson (1975)2, "speculated incautiously on the genetic basis of human social behavior, and often with regard to highly complex, situation-sensitive behavior" (Barlow, 1991)1, which drove the groups involved in the nature-nurture debate back into their opposite corners. Wilson was accused of being "politically motivated, even if he were himself unaware of it" while his own critics, as Barlow (1991)1 points out, were openly political in their approach to science. The Sociobiology Study Group, for example, applied Marxist philosophy to their practice of science, emphasizing environmental influences above biological ones. They were shown, along with the rest of the world, that environmentalist determinism in the form of social engineering in the Soviet Union is as dangerous as genetic determinism.

      Opponents in the debate also positioned themselves along traditional academic lines. The development of behavioral psychology, a crucial component in the history of the nature-nurture debate, was itself highly influenced by the biological sciences and all that they espoused. As Rem B. Edwards (1999)3 notes:

Behaviorism arose in psychology out of frustration with older introspective approaches to mind and consciousness that appeal to direct awareness of mental states and processes, and out of the desire to turn psychology into a proper natural science with an empirical methodology and subject matter, one that makes claims that are publicly verifiable or falsifiable in repeatable sensory experience.

John Watson4 emphasized that the subject matter of psychology must be observable behavior, an approach that was developed further by B F Skinner5, who incorporated numerous laboratory experiments into his studies. For both psychologists, mind and consciousness were held to be non-existent or at least irrelevant to psychology as they envisioned it. Ethologists, in turn, set themselves apart from the psychologists by focusing on the naturally occurring behavior of animals, which they viewed as a more 'objective' pursuit than the artificiality of laboratory experiments, a fact that no doubt failed to impress many behaviorists, who were of the mind that their research was the more objective of the two (Barlow, 1991)1. It seems ironic that shortly after the behaviorists began to focus on external causes of behavior in their bid for scientific status, biologists James Watson and Francis Crick discovered the double-helix structure of DNA, opening the way for new understandings of the internal causes of behavior (Bettelheim, 1998)6. Laboratory advances in DNA research followed, together with increasing interest in the genetic causes of behavior. Once again, the balance shifted in favor of "nature". This development had eclipsed psychology's quest to 'become' a science (Richelle, 1993)7, but it was clear that the debate within 'science proper' was profoundly influenced by work that had been done outside its jurisdiction. As psychology stood straddling the division between the natural sciences and the social sciences, E O Wilson's attempts to encourage a synthesis of approach to behavior between the social sciences and the biological sciences, in the form of sociobiology (Wilson, 1975)2, met with great resistance from the social sciences, which valued the traditional divide (Barlow, 1991)1.



      Biological and environmental determinism, particularly as they pertain to human nature, have both had their time and their supporters, but both are fundamentally problematic. Attempts to find innate and unchanging (thus inescapable) traits across species to be applied to human behavior are flawed. As Jaggar and Struhl (1995)3 write:
Biological determinist theories of human nature are not just empirically unconfirmed; they also fail to acknowledge what is most distinctive of our species. The human genetic constitution determines highly developed learning and cognitive capacities that allow humans to respond flexibly rather than instinctively to environmental problems, as well as to develop a range of distinctively human cultural characteristics.

Behaviorism, on the other hand, as a good example of environmental determinism, is unable to account for consciousness, for creativity and for human agency. Whitney (1995)3 notes: "'environmental cause' does not mean 'easily changed,' and 'genetic cause' does not mean 'unchangeable'." To the extent that both schools of thought strive for a water-tight theory that is both universal and immutable, they represent oversimplified polarities.

      Given the way in which the nature-nurture debate has progressed, it is easy to perceive science as fickle at best and reactionary at worst. However, a closer look at the trajectory that the debate has taken reveals that the scientific process has not been completely haphazard. A far less polarized view of the debate is expressed in a Newsweek article: "Biology, in short, doesn't determine exactly what we'll do in life. It determines how different environments affect us" (Cowley, 1995)8. The Independent similarly reported (Morrish, 2003)9, in a book review of Matt Ridley's 'Nature Via Nurture', that genes are modified by their environment:
The gene combines with environmental factors to make a statistical tendency. ... If you place 30,000 genes in an infinite number of different environmental settings you get an unknowable range of interactions and outcomes. Ridley concludes that such a genome makes free will possible.
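Ridley's point, that a gene sets only a statistical tendency whose realized outcome depends on the environment it meets, can be illustrated with a toy simulation. The numbers and the additive model below are entirely invented for illustration; they make no empirical claim about any real gene.

```python
import random

# Toy illustration: the same genotype produces different average outcomes
# in different environments. All numbers here are invented.
random.seed(0)

def realized_trait(genetic_tendency, environment_effect):
    """One individual's outcome: gene + environment + random noise."""
    return genetic_tendency + environment_effect + random.gauss(0, 1.0)

genotype = 5.0  # the same "gene" in every simulated individual
environments = {"deprived": -3.0, "average": 0.0, "enriched": +3.0}

for name, effect in environments.items():
    outcomes = [realized_trait(genotype, effect) for _ in range(10_000)]
    mean = sum(outcomes) / len(outcomes)
    print(f"{name:>8}: mean outcome {mean:.2f}")
```

One genotype, three environments, three different average outcomes: a statistical tendency rather than a fixed destiny, which is the sense in which neither pure "nature" nor pure "nurture" determinism holds.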

Simplified polarities are the necessary stepping stones in the process of discovery to more refined theories of greater complexity.

      A likely result of this is that different academic disciplines will find their goals converging, as biology and psychology did and as biology and cultural studies might in the future. What is required for such a synthesis is recognition of the respective roles of the different fields in the quest to understand behavior: recognition of which field is able to pursue what kinds of questions, in terms of knowledge and methodology. For example, as Frans de Waal predicts (1999)10, the development of the neural sciences in understanding how the brain works will make an important contribution to the study of behavior, and the adoption of the evolutionary paradigm by the social sciences will enable them to engage in an on-going conversation on behavior within the scientific community. A subfield has already developed in the latter, in the form of memetics (Laland & Brown, 2002)11, though it has yet to be accepted by social scientists.

      Yet, synthesis is only the end-product; convergence is the process by which we will arrive at synthesis. The process is a messy one, requiring the perpetual redefinition of academic boundaries as new fields emerge and existing ones fade into the background, no doubt entailing much squabbling over who belongs where. This raises an important question about the way in which science is defined and the way in which it has evolved and continues to evolve. In the foreseeable future, the contention lies between the natural sciences and the social sciences: are these misnomers, or convenient labels that are not meant to reflect an external reality? The nature-nurture debate alone has demonstrated that these contentions are mediated by external influences such as politics and developments within each field, and that these can have great impact on science as it is understood and practiced.





References



  1. Barlow, George W. (1991). Nature-nurture and the debates surrounding ethology and sociobiology. (Animal Behavior: Past, Present, and Future). American Zoologist, 31(2), 286-296.

  2. Wilson, Edward O. (1975). Sociobiology : the new synthesis. Cambridge, Mass. : Belknap Press of Harvard University Press.
  3. Reich, Warren Thomas (Ed.). (1995). Encyclopedia of Bioethics. New York: Macmillan Pub. Co.: Simon & Schuster Macmillan; London: Prentice Hall International.

  4. Wikipedia: online encyclopedia entry on John B Watson

    Watson, John B. (1913). Psychology as the Behaviorist Views it. Psychological Review, 20, 158-177. Retrieved from http://psychclassics.yorku.ca/Watson/views.htm

  5. Wikipedia: online encyclopedia entry on B F Skinner

    Skinner, B F. (1947). 'Superstition' in the Pigeon. Journal of Experimental Psychology, 38, 168-172. Retrieved from http://psychclassics.yorku.ca/Skinner/Pigeon/

  6. Bettelheim, Adriel. (1998). Biology and behavior. CQ Researcher, 8(13), 291-308. Retrieved from Lexis-Nexis.

  7. Richelle, Marc N. (1993). B F Skinner: A Reappraisal. Hove, UK: Lawrence Erlbaum Associates.

  8. Cowley, G. (1995, March 27). It's time to rethink nature and nurture. Newsweek, 125(13), 52-53. Retrieved from Lexis-Nexis.

  9. Morrish, John. (2003, April 27). Books: Don't keep your baby in a soundproof box, Mr Scientist; Nature Via Nurture by Matt Ridley. Independent on Sunday, Sunday features, 19. Retrieved from Lexis-Nexis.

  10. De Waal, Frans B.M. (1999). The end of nature versus nurture. Scientific American, 281(6), 94-99. Retrieved from Expanded Academic.

  11. Laland, Kevin; Brown, Gillian. (2002, August 3). The Golden Meme: Memes offer a way to analyse human culture with scientific rigour. Why are social scientists so averse to the idea? New Scientist, 175(2354), 40-43. Retrieved from Expanded Academic.



The Truth About SARS
Name: Elizabeth
Date: 2003-09-29 15:25:28
Link to this Comment: 6677



Biology 103
2003 First Paper
On Serendip

The Truth About SARS

People in general are both fascinated by and paranoid about the onset of new infectious diseases. While films such as "Outbreak" are smash hits at the box office, when an actual disease becomes apparent people often react with a kind of mass hysteria. Last year, a new illness reared its evil head. While the name "SARS" has become fairly well known, the actual facts behind the illness are not as widely talked about.
SARS is an acronym for Severe Acute Respiratory Syndrome. The illness usually first becomes evident with a temperature above 100.4 degrees Fahrenheit, general malaise, and body aches. This adds to the difficulty of identifying SARS, because these general signs are so similar to those of more common ailments such as influenza and pneumonia. After a period of two to seven days, SARS patients usually develop a dry cough that eventually escalates to the point where insufficient oxygen is reaching the blood stream. In roughly ten to twenty percent of cases, infected persons will require mechanical ventilation. This two-to-seven-day period is generally considered the incubation period (1).
The treatment of SARS remains a gray area. Currently all treatments are fairly similar to those given to patients suffering from serious community-acquired atypical pneumonia. Health care professionals are experimenting with new medications to see if other methods are more effective; however, a definitive treatment is still unknown. The antiviral medications oseltamivir and ribavirin have been used, often in conjunction with steroids. However, there have been no controlled clinical trials using these medications, so their rates of success remain virtually unknown (1).
While an absolute and final answer to the question of "Where does SARS come from?" has yet to be found, scientists have begun making important observations about its roots. Scientists have detected a previously unrecognized coronavirus in SARS patients, and this has become the leading hypothesis for the cause of SARS. Coronaviruses are a group of viruses that, when viewed under a powerful microscope, have a halo or crown-like (hence "corona") appearance. Viruses in this group are known to commonly cause mild to moderate upper respiratory illness in humans. In animals, coronaviruses are associated with respiratory, gastrointestinal, liver, and neurological diseases. New ways to detect SARS have also been developed. Serologic testing for SARS can be performed using indirect fluorescent antibody or enzyme-linked immunosorbent assays that are specific for antibodies produced after infection. A reverse transcriptase polymerase chain reaction (RT-PCR) test can also detect SARS in specimens such as serum, stool, and nasal secretions. Viral culture and isolation have been used to detect SARS as well (1).
While many conclusions have been drawn about SARS, such as its incubation period and symptoms, the unknowns concerning the syndrome remain unsettling. The fear that people have concerning the syndrome is not unwarranted. According to the World Health Organization, as of June 5, 2003, an estimated 8,403 cases had been reported, with a total of 775 deaths attributed to the syndrome (2). The countries with the highest reported numbers of infected persons are as follows: Canada (218 cases, 31 deaths), China (5,329 cases, 336 deaths), Hong Kong (1,748 cases, 284 deaths), Singapore (206 cases, 31 deaths), Taiwan (677 cases, 81 deaths), United States (69 cases, no deaths), and Vietnam (63 cases, 5 deaths). Other countries that have reported cases of SARS include Australia, Brazil, Colombia, Finland, France, Germany, India, Indonesia, Italy, Kuwait, Malaysia, Mongolia, New Zealand, Philippines, Ireland, Korea, Romania, South Africa, Spain, Sweden, Switzerland, Thailand, and the United Kingdom; however, each of these has reported fewer than ten cases.
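To put these figures in perspective, the case-fatality rate (deaths as a share of reported cases) can be computed directly from the WHO numbers cited above. The short sketch below does so; it uses only the figures quoted in this paper, as of June 5, 2003.

```python
# Case-fatality rates from the WHO figures cited above
# (cumulative cases and deaths as of June 5, 2003).
reported = {
    "Canada": (218, 31),
    "China": (5329, 336),
    "Hong Kong": (1748, 284),
    "Singapore": (206, 31),
    "Taiwan": (677, 81),
    "United States": (69, 0),
    "Vietnam": (63, 5),
}

for country, (cases, deaths) in reported.items():
    rate = 100.0 * deaths / cases
    print(f"{country}: {rate:.1f}% of reported cases were fatal")

total_cases, total_deaths = 8403, 775
print(f"Worldwide: {100.0 * total_deaths / total_cases:.1f}%")
```

By this measure, Hong Kong's outbreak was markedly deadlier per reported case than China's, and the worldwide rate works out to just over nine percent.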
It wasn't until taking Biology 103 that I began to question science. I'm a self-proclaimed "English and history person," who always understood the ambiguity of literature and the need to question received accounts of historical events. I always considered the field of science to be quite the opposite. Science, unlike other academic pursuits, was reliable. However, I'm now beginning to realize how unreliable science really is. To finally comprehend that science is, at bottom, a collection of summaries of observations makes it a lot less intimidating to me, but also somewhat unsettling.
The same is true of medicine. As someone previously fairly unfamiliar with medical terms and what lies behind diseases, I always believed that, no matter what, doctors, like scientists, had all the answers. This turns out not to be so. Concerning SARS, scientists and medical professionals have merely made many summaries of observations. As in science generally, there is no certainty. Yes, it has been found that SARS is linked to coronaviruses; however, coronaviruses are linked to many other illnesses. Coronaviruses were already known to cause mild to moderate upper respiratory illness in humans, so grouping SARS with such illnesses may be accurate, but it fails to tie SARS to any one cause, and thus proves nothing specific to the syndrome. This is highly unsettling to someone who previously put all her faith in doctors and scientists. Never before had I questioned such esteemed professionals, but I now feel that by not questioning them I'll never be aware of the actual truth.
Perhaps someday the root causes of SARS will be known. As with all ailments, there was always a time when certain things that seem obvious now were unknowns. However, until that day comes, humanity will continue to rely on the observations of scientists and doctors to diagnose and treat SARS.


References

1)yahoo.com, comprehensive SARS info site

2) World Health Organization (http://www.who.int/csr/don/2003_09_24/en/), the World Health Organization official site


The Giant Panda Paradigm
Name: Abigail Fr
Date: 2003-09-29 15:32:49
Link to this Comment: 6678



Biology 103
2003 First Paper
On Serendip


The Giant Panda is a creature of mystery. Adults and children alike appreciate it for its cute, fuzzy, lovable qualities, but it is an animal that is in desperate need of immediate attention. Scientists know the basics: how and what they eat, where and how they live, and how they reproduce. The fact remains, however, that this universally loved national symbol of China is facing the threat of extinction. What accounts for this fact and what can be or is being done to protect the panda from such a fate? This paper will discuss the characteristics and lifestyle of the panda as well as issues and questions that arise as a result of the threat of their extinction.

Pandas have made their homes in China for centuries, but because of increased development and forest clearing in the lowlands, they have been forced further and further into the mountain ranges over the years (1). They inhabit damp forests in those mountains that border on land that farmers increasingly wish to use (6). These forests have a dense understory of bamboo and are characterized by heavy rains and dense mist (1).

One problem facing pandas is their seeming difficulty in mating successfully. Females ovulate infrequently (once a year, in the spring), and the males demonstrate an infamous apathy toward females in heat (2). When mating is successful, the female giant panda will give birth, after 96 to 160 days, to a cub one nine-hundredth her size (1). Cubs open their eyes after six to eight weeks, nurse for about nine months, and stay with their mothers for up to three years before venturing out on their own (1).

Giant Pandas lead a simple lifestyle of sleeping, looking for food, and eating. This monotonous routine seems to be largely due to the amount of food they must consume: twenty to forty pounds of bamboo daily (6). Pandas have inefficient digestive systems, and must therefore spend more than ten hours a day eating to take in the necessary nutrients (1). While their dental structures have adapted to the bamboo diet, their digestive systems remain closer to those of carnivores (6). As a result, pandas digest only a small percentage of the food they actually ingest (6).

The Giant Panda is currently threatened in a number of ways. The first threats are to its food supply. The bamboo rat, a minor but real problem, feeds on bamboo roots, killing plants on an individual level (6). Bamboo also undergoes phases of growing and then dying as part of its renewal cycle (7). This process is not a problem in itself, except that while pandas might once have moved to a different location to feed, they are running out of places to move because of expanding farmland and increased forest clearing (7).

The greatest threat of all to the Giant Panda is man. The aforementioned clearing of land for farms and for residential and commercial areas, coupled with prowling poachers, constitutes the two most serious threats to the panda and its habitat (3). Efforts to set up reserves for the pandas have sparked conflicts with locals. When a reserve is established, people are often not compensated for the loss of land they have used for years, and they are tempted to continue to use the land illegally (3). Their continued use of the land defeats the whole purpose of having a reserve.

China must somehow find a balance between conservation and development. The threat of the Giant Panda's extinction is a serious reality. An estimated one thousand remain in the wild; another one hundred forty live in zoos and breeding centers in China and a few other countries (1). Breeding pandas in captivity has proven to be a difficult task (7). Although being able to observe these reclusive animals may help us better understand how to help them in the wild, the most pressing concern is that their wild habitats be saved and sustained.

The idea of cloning has arisen as one of China's desperate attempts to save the Giant Panda (5). Cloning raises serious questions and concerns on my part and on the part of many experts. A small group of experts feels that the process of artificial insemination and raising pandas in zoos for future release into the wild is not working well enough or fast enough (4). In 2002, China predicted that it would have its first cloned panda within two years (8). Scientists successfully placed a panda embryo in the uterus of a cat, but an article from 2003 stated that the cat died soon after (5, 8). Scientists have been forced to turn to surrogate mothers of different species because pandas have difficulty carrying young to term (8). Cloning is a touchy ethical issue, and I would hope that such experimentation would be kept to a minimum as we take the giving and taking of life into our own hands. I do not feel that enough is understood about cloning at this stage to attempt to save an entire species with this method.

There is a group of scientists who are very optimistic about the possibilities that may accompany cloning, but there are many, including myself, who believe the effort to save the Giant Panda should be directed elsewhere. The most serious threat to the species, the destruction of its habitat, must be addressed. Experts opposed to cloning insist that the only way to guarantee the species' survival is not only to solve the problem of the diminishing panda population, but also, and more importantly, to guarantee the preservation of the pandas' environment by directing more attention to saving forests (5).

We must also remember the problems that arise within a species when it is in danger of becoming extinct. One finds a lack of diversity within the species, a lack that cloning simply cannot restore. Animals rely on hereditary diversity in order to continue (9). This lack of diversity cannot be improved by the technology of cloning, even hetero-cloning, because cloning can only copy an already existing animal (9).

Another factor to consider is the question of how much we should involve ourselves in continuing any species. How much of their current endangered status is due to human interference and how much is due to the natural order of things? Many, many species have come and gone in the biological history of our world. It is true that humans are becoming more and more of an interference, but we have seen species run their courses throughout the centuries. I am not saying that I would like to see the panda population disappear by any stretch of the imagination, but I would be interested to hear what others have to say about our place in or perhaps interference with something like a "natural order."

The bottom line of all this talk of conservation is that we should concern ourselves with the preservation of the habitat of the Giant Panda. The results of other attempts at continuing the species, such as cloning, can only be temporary. Yes, we would maintain at least a small number of the species that could be seen on display in captivity, but we would not be addressing the real issue. Cloning cannot solve any of the problems that pandas face in the wild (7). By working to conserve their habitat, we can give the panda the best opportunity to continue on its own, apart from our direct intervention with the species itself.


References

1) Smithsonian National Zoological Park, about Giant Pandas

2) Giant Panda News and Events, Giant Panda in the News

3) WWF Endangered Species, Panda conservation

4) WWF, Future Outlook of Giant Panda

5) CBS news, Chinese to clone pandas?

6) Everything you need to know about the Giant Panda, It's all here

7) China.org, Conservation programs for pandas

8) Space daily, Discusses warnings about replication

9) China through a lens, Experts who worry about cloning say why


Science and the Judicial System
Name: La Toiya L
Date: 2003-09-29 16:02:54
Link to this Comment: 6679


<mytitle>

Biology 103
2003 First Paper
On Serendip

Science and the Judicial System are two concepts that at face value seem very distinct and unique in their own natures, but at their cores they share interesting similarities and connections. Each proposes a different way of understanding how we comprehend and impose order. In this paper I'll address my understanding of both concepts, analyze their theories, backbones, and failures, and then bring them together through connections, hopefully supporting my idea that both are inextricably connected to what we call life and its relationship to the human mind.

Science is a controversial subject, very much like the Judicial System. Although Science is largely composed of observation, experiments, and their results, it raises controversy because imagination and perspective play a key role in interpreting those results. Since imagination and perspective vary with each person's education, background, and experience, how can we assign a concrete truth to such a varied conceptualization? Thus, we cannot formulate any concrete truth. In this sense I see scientists more as philosophers. Another issue I find when dealing with traditional scientific theories is that Science often fails to provide theories and explanations for phenomena that hold truth and validity in both a scientific context and the context of the human mind. I feel that Science often caters to a "black and white" way of formulating answers; it fails to recognize the gray areas. Often people seek out the most common and accepted ways to support their theories, and in doing so they adapt to the standard and more traditional ways of viewing the world. This leaves less room for creativity and exploration of the mind when trying to formulate "truth". "A body of assertions is true if it forms a coherent whole and works both in the external world and in our minds." Roger Newton (1)

The Judicial System poses a problem similar to that of traditional science. I believe the laws in our justice system are far too clear-cut. There are a lot of gray areas when it comes to crimes committed, political decision making, and societal issues. I feel our Constitution, on which our laws are based, is too limited, and that poses a problem because many of the pressing issues in our society, such as abortion and gun control, lie on the borderline between right and wrong. It is hard to come to a resolution because of the strict and limited language of our laws, and also because there is more to these problems than laws; they involve emotions, perceptions, culture, and perspectives, none of which are taken into consideration in legislation. The Pro-Life versus Pro-Choice debate is controversial and complex because there are so many ways to examine the issue, all of which have valid points depending on the light under which you view it. Abortion is both a societal issue and a political issue. It involves high sensitivity because of its direct connection to our emotions and personal values. Politics and laws also play a major role in this debate because so many laws have been passed concerning the issue. The Government on many levels is dealing with the issue of abortion: the courts, federalism, judicial review, and the separation of powers are all involved. In 1973 the Supreme Court declared abortion a constitutional right. (2) At the same time it is illegal by law to kill someone, and a fetus is alive if we biologically consider a cell to be alive. So this case really depends on how one looks at it. This poses a problem because an agreement and a middle ground are almost impossible to reach, since people, specifically those with strong opinions, can only see the credibility of their own values and positions. So, in this case, right or wrong depends highly on personal perspective and values.

Gun control is deeply rooted in controversy. There are two conflicting sides: those in favor of gun regulation and those against it. It is an issue for our nation as a whole, but it stems from the division of this country's mixed cultures. Those who have grown up in a culture where hunting is a family and cultural tradition are strongly against gun control, while people who didn't grow up with hunting as a sport don't see the same value in it. This conflict is rooted not only in values but also in politics.

Both science and the judicial system produce gray areas when trying to understand and rationalize. Science and the judicial system are inextricably connected to life. We systematically try to put life in a box to create order; order ensures comfort, and that comfort often gets in the way of open-mindedness. The human mind by itself is a convoluted, vast universe. We as scholars, scientists, and humankind need to understand that by assigning truths, rights, or wrongs, we are limiting the extent of our intellectual capacities.


References


1) The Truth of Science, Physical Theories and Reality, An article from Harvard University Press

2) An Overview of American Abortion Laws, A thorough explanation of the laws concerning abortions


How does Memory Work? Can We Improve Our Memories?
Name: Flicka Mic
Date: 2003-09-29 16:26:53
Link to this Comment: 6680


<mytitle>

Biology 103
2003 First Paper
On Serendip

How does memory work? Is it possible to improve your memory? In order to answer these questions, one must look at the different types of memory and how memory is stored in a person's brain. Memory is the mental process of retaining and recalling information or experiences. (1) It is the process of taking events or facts and storing them in the brain for later use. There are three types of memory: sensory memory, short-term memory, and long-term memory.

Sensory memories are momentary recordings of information in our sensory systems. They are memories evoked through a person's five senses: sight, smell, sound, taste, and touch. Although sensory memory is very brief, different sensory memories last for different amounts of time. Iconic memory is visual sensory memory and it lasts for less than a second. Echoic memory is auditory sensory memory and it lasts for less than 4 seconds. For example, if a person smells a certain smell, the olfactory tract in their nose sends signals to certain parts of the brain called the limbic system. (2) This system helps store the memory of the smell in the brain so that when the person smells the smell again, he or she will remember it.

Short-term memory (also called working memory) is the recording of information that is currently being used. However, short-term memory lasts only about twenty seconds. George Miller, who measured the human memory span, found that it can hold about seven chunks (a chunk being any letter, word, digit, or number) of information at any time. (2) When the brain receives signals of information, the information can be repeated over and over until it is stored, creating a "phonological loop". (4) However, unless the information is rehearsed in this way, it will be lost.

Long-term memory is the capacity to store information over a long period of time. Its capacity is essentially unlimited, and information stored one minute ago or one year ago can still be retrieved at any time. Some scientists believe that parts of long-term memory are permanent, while others will eventually weaken over time. (3) Long-term memory can be divided into three sections: procedural memory, declarative memory, and episodic memory. Procedural memory includes motor skills such as learning how to ride a bike or how to drive a car. "Such memories are slow to acquire but more resistant to change or loss." (4) Declarative memory is used to remember facts, such as names, dates, and places. It is easy to learn but also easy to lose. Finally there is episodic memory, the record of events that a person stores throughout his or her experience. Recent studies show that these events, as soon as they occur, are sent to a temporary part of the brain called the hippocampus, and that over time they are moved to the neocortex for permanent storage. (5)

When speaking about memory, one needs to look at the parts of the brain that are involved in memory storage. The hippocampus is a place in the brain that is used to "transfer memories from short-term to long term memory". (1) It also helps store spatial memories with the thalamus. The thalamus is a "collection of nuclei that relays sensory information from the lower centers to the cerebral cortex". (7) In addition to spatial memories, the thalamus helps store emotional memories with the amygdala. The amygdala is a nuclear formation that helps store both conscious and unconscious emotions, and it is part of the limbic system. The prefrontal cortex is the area of the brain that stores motor skills as well as knowledge of social behavior and the demonstration of personality. The cerebellum is the main part of the brain concerned with motor coordination, posture, and balance. The hippocampus, the amygdala, and the cerebellum are all part of the brain's limbic system. (7)

Now that we know how memory works, we can address how a person can improve his or her memory. There is a learning technique called mnemonics, designed to help people "remember information that is otherwise quite difficult to recall." (5) There are many ways to use mnemonics to aid one's memory. Some scientists believe that speaking while you read helps you absorb the material. Others believe that writing information down helps the brain recognize it and store it more easily. Another effective way to remember a fact is to associate it with other facts or objects that can be linked back to the original piece of information. So there are, in fact, many ways one can improve one's memory.

In conclusion, memory is the storing of information in the brain over a certain period of time. The three kinds of memory (sensory, short-term, and long-term) use different parts of the brain to store information, and therefore can store it for different lengths of time. A person can improve his or her memory through techniques called mnemonics, which use patterns and associations to help the brain store information more accurately. Memory is also relevant to many biological issues today. New studies are constantly being done to connect memory to dreams and to the deterioration of the brain in Alzheimer's disease. But what we know about memories is that they are directly connected to the brain and the signals the brain receives when one learns new information. One day, hopefully, new technology might allow us to look inside a person's brain and see their memories, or even retrieve memories that they have pushed away.


References

1) Memory Basics

2) U of A Cog Sci Dictionary

3) Brain Power

4) Memory and the Brain

5) UTCS Neural Nets Group Research

6) Mind Tools-Introduction to Memory Techniques

7) Memory in the Brain


Safe, By a Hair
Name: Talia Libe
Date: 2003-09-29 16:36:56
Link to this Comment: 6681


<mytitle>

Biology 103
2003 First Paper
On Serendip

Talia Liben
Biology 103
September 20, 2003
Safe, By a Hair

"A Hair perhaps divides the False and True; and upon what, prithee, may life depend?"
Omar Khayyam

Ron Williamson came within five days of being executed for a crime he didn't commit. He lost twelve years of his life to death row. He is not alone. Chillingly, sixty-eight percent of all death penalty cases are reversed on appeal (Liebman). In the few years since its application in the field of criminal forensics, DNA testing has proven to be the most effective tool ever devised to protect the innocent and convict the guilty.
In 1869, Friedrich Miescher identified the substance known as DNA (deoxyribonucleic acid). Its components were later discerned: four nitrogenous bases. Two of the bases, adenine (A) and guanine (G), are purines; the other two, cytosine (C) and thymine (T), are pyrimidines. In the 1940's, Oswald Avery established that DNA transmits hereditary traits. A decade later, James Watson and Francis Crick famously united to solve the mystery of DNA's shape. They found that A pairs only with T, that C pairs only with G, and that together the paired strands coil into a spiral staircase called a double helix. The building blocks of life itself had been discovered, but the information remained for many years without any practical application.
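The pairing rule described above is simple enough to express in a few lines of code. This is an illustrative sketch only (the function and variable names are my own, not from any source cited here): given one strand's base sequence, it derives the sequence of the strand that would pair with it.

```python
# Watson-Crick base pairing: A pairs only with T, C pairs only with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the base sequence that would pair with the given strand."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATCG"))  # -> TAGC
```

Note that complementing a strand twice returns the original sequence, which is exactly why each half of the double helix carries enough information to rebuild the other.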
Twenty-one year old Debra Sue Carter was brutally raped and murdered in 1982. Jailhouse snitches fingered Ron Williamson, a mentally unstable neighbor of Carter's. A state analyst claimed to link four hairs from the murder scene to Williamson. Williamson's inexperienced court appointed lawyer had never tried a capital case. Despite witnesses who placed Williamson elsewhere, he was found guilty and sentenced to death by lethal injection (Scheck 130-157).
About the time of Carter's murder, geneticist Alec Jeffreys was attempting to identify the segments of genetic material that vary the most from one person to the next, and to discover a method that could make those areas visible. Jeffreys developed a revolutionary technique which we know as "DNA fingerprinting," a laboratory procedure accomplished in six steps: 1) The DNA is isolated from the nucleus by using a detergent to wash away superfluous material. 2) The DNA is cut into pieces of variable lengths with chemicals called restriction enzymes; the differing fragment lengths, due to varying numbers of nucleotides, are called RFLPs (restriction fragment length polymorphisms). 3) An agarose gel is charged with an electric field, and the DNA pieces, having a negative charge, drift toward the positive end and separate along the way. 4) The fragments of the DNA spiral staircase are torn apart, separating each purine from its matching pyrimidine. 5) Radioactive markers search out complementary DNA sequences and attach themselves, marking the spot. 6) The marked DNA is placed next to x-ray film, giving a "signature" unique to that strand of DNA. If two different samples show bands of DNA at the same spots, the geneticist can confirm that the DNA in both lanes is from the same person (Roberts).
Williamson's appellate attorneys sought DNA fingerprinting for the DNA from the murder scene. The DNA from the semen of Debra Carter's rape did not match the DNA from Ron Williamson's blood sample. Not one of the hairs, the most damaging evidence against Williamson, was linked to him by DNA. The test proved, conclusively, that Ron Williamson was not the assailant. Science prevailed over an alleged confession, false witnesses, and ineffective defense counsel. DNA brought him justice.
Fortunately for Williamson, there was sufficient DNA available from the crime scene to conduct the DNA fingerprinting. One significant disadvantage of the RFLP method is that it requires a relatively large amount of DNA. In the messy world of real crime scenes, DNA can be scarce. What was needed was a way to amplify a small sample of DNA. Biochemist Kary Mullis had a breakthrough inspiration that solved this problem. Mullis took a large strand of DNA from which he wanted to isolate a small sequence and placed it in a test tube along with two small DNA markers. These primers attached themselves to the DNA, delineating the portion of the strand Mullis wanted to study; the DNA was copied only between the two marker positions. Mullis then allowed this segment of the DNA molecule to copy itself repeatedly, until he had millions of copies of the sequence (NCBE). Mullis had solved the problem of amplification. As a result, DNA testing can now be performed on specimens that are old, degraded, or of limited quantity.
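The power of Mullis's amplification comes from repeated doubling: ideally, each cycle copies every strand between the primers, so the number of copies grows exponentially with the number of cycles. A rough back-of-the-envelope sketch (the function name and the assumption of perfect doubling are mine, not from the sources cited):

```python
def pcr_copies(templates: int, cycles: int) -> int:
    # Each cycle ideally doubles every template strand between the primers,
    # so n cycles multiply the starting count by 2**n.
    return templates * 2 ** cycles

print(pcr_copies(1, 20))  # a single template yields 1,048,576 copies
```

Real reactions fall short of perfect doubling, but even so, twenty to thirty cycles turn a trace sample into the "millions of copies" the text describes.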
Interest in DNA testing had been spurred primarily by the desire to find a cure for inherited diseases. Applying the methods to forensics is an innovation that could revolutionize our criminal justice system. Already, national databases are being created which house genetic profiles of convicted felons (Niezgoda). It is possible that newborn babies will all have their genetic profiles sequenced and stored so that for certain crimes, no suspects would be needed in order to obtain DNA samples. Investigators could simply enter the "fingerprint" into the databank and search for a match. The criminal justice system could allow DNA as evidence showing that someone, genetically, can't help committing certain illegal acts. However, the application of these daunting scientific advances clearly has the potential to infringe upon our individual privacy rights. It is unknown whether society will choose to implement many of the forensic possibilities that these scientific breakthroughs allow. If it does, some day we may be able to execute or exonerate based solely on DNA evidence, or, perhaps, even solely on predisposition. As Omar Khayyam so presciently wrote almost 1000 years ago, life may depend on a single droplet of blood, or even just one hair.


Works Consulted


• Connors, Edward, Thomas Lundregan, Neal Miller and Tom McEwen. "Convicted by Juries, Exonerated by Science: Case Studies in the Use of DNA Evidence to Establish Innocence After Trial." U.S. Department of Justice, Office of Justice Programs (June 1996). http://www.ncjrs.org/txtfiles/dnaevid.txt
• Lander, Eric S. "DNA on the Witness Stand." Access Excellence @ The National Health Museum (1999). http://www.accessexcellence.org/AB/WYW/lander_1+html
• Liebman, James S. "A broken System: Error Rates in Capital Cases, 1973-1995." The Justice Project (June 2000). http://www.TheJusticeProject.org
• NCBE. http://www.ncbe.reading.ac.uk/NCBE/MATERIALS/DNA/lampcrmodule.html
• Niezgoda, Stephen and Barry Brown. "The FBI Laboratory's CODIS Program" (July, 2000) http://www.promega.com/geneticidproc/ussymp6proc/niczgod.htm
• Roberts, Reid.
http://www.college.ucla.edu/webproject/micro7/studentprojects7/Reid/DNA/DNA.html
• http://www.law-forensic.com/dp_links.htm
• http://www.deathpenaltyinfo.org
• http://www.innocentproject.org/index.php


Limb Transplants -- Modern Miracle or Future Frank
Name: Adina Halp
Date: 2003-09-29 17:01:02
Link to this Comment: 6683


<mytitle>

Biology 103
2003 First Paper
On Serendip

We all know that transplants save lives. Liver, heart, renal, and other organ transplants are hardly controversial. But what happens when transplants do not save lives? What happens when they actually endanger them? At least twenty-one hands and arms have been transplanted since 1998 (and one in 1964) (1). Sure, the cosmetic and functional value of having a new hand could seem like a miracle to those without hands or arms, but do these benefits outweigh the risks?

Limb reattachments are not uncommon. Dr V Pathmanathan and his team, who transplanted a left arm onto baby Chong Lih Ying from her twin sister, who had died at birth, had already performed over 300 such operations (2). The controversy occurs when the limb is not simply reattached but is transplanted from one person to another. This is because limb transplant patients, like any other transplant patients, need to be given anti-rejection medication, or immunosuppressive therapy (1), so that the body's immune system does not recognize the new limb's tissue as foreign and destroy it (3). In fact, Chong Lih Ying was the only limb transplant patient not to receive immunosuppressive drugs; because her arm was transplanted from her twin, there was very little risk of rejection (2).

As the name suggests, immunosuppressant drugs given to limb transplant patients greatly weaken the body's immune system (4). This puts limb transplant patients at a much greater risk of cancer, infections, and other disorders (5), as has been the case in renal and liver transplants (6). Even with these drugs, the patient still runs a great risk of rejection. Six weeks after Jerry Fisher's hand transplant, he had already experienced three episodes of rejection, a common and expected occurrence in limb transplant patients (7).

To avoid rejection, and to regain function of the limb, limb transplant patients must follow a strict regimen of intense physical therapy. In the period immediately following his hand transplant, Jerry Fisher underwent a two-hour physical therapy session six days a week, as well as therapy exercises on his own every two hours (7). Even so, normal function of the limb returns slowly, and according to test results to date, a transplanted limb will never have the full function of a limb with which one was born (6).

Transplant recipients must also undergo intense psychological therapy in order to view the hand as part of the self and not to associate it with the deceased body from which it came. They must also be able to deal with the fact that the limb could be lost yet again in the case of rejection, or if the immunosuppressant drugs were to put their lives in grave danger (1). This was the case for Clint Hallam, the world's first hand transplant patient (aside from the recipient of the unsuccessful 1964 operation, in which primitive immunosuppressive drugs were used). In 2001, Hallam's new hand was amputated. The doctors involved claim that it was due to Hallam's lack of commitment to taking immunosuppressant drugs and undergoing physical therapy (4), but Hallam claims that it was due to rejection and "mental detachment" (7). No matter where the blame lies, the truth is that the operation was unsuccessful and that this is a real risk transplant patients must face.

The next most recent limb transplant took place in January of 1999 (4), just under five years ago. We therefore cannot know the long-term effects of limb transplants. Of renal and liver transplants, which have a much longer history and broader base than limb transplants, only 30-60% last at least fifteen years before a second transplant is needed (6). Heart transplant patients, who require immunosuppressant dosages similar to those of limb transplant patients, have an annual lymphoma risk of 0.3%. Assuming that this risk is the same for limb transplant patients, Matthew Scott, for example, who was 37 when he received his new hand, faces a cumulative lymphoma risk of 12.9%, assuming his natural life expectancy is 80 (8), as well as risks of other diseases. It must also be taken into consideration, however, that renal, liver, and heart transplant recipients are typically already sick when they receive their transplants. Limb transplant patients, though missing limbs, are otherwise in good health. They could therefore have a lower risk of disease.
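The 12.9% figure follows from simply adding the 0.3% annual risk over the 43 years between age 37 and age 80. A quick sketch of that arithmetic (the variable names are mine; the paper's sources give only the 0.3% figure and the ages):

```python
annual_risk = 0.003        # 0.3% annual lymphoma risk (heart-transplant figure)
age_at_transplant = 37     # Matthew Scott's age at surgery
life_expectancy = 80
years_exposed = life_expectancy - age_at_transplant  # 43 years
cumulative = annual_risk * years_exposed             # simple additive model
print(f"{cumulative:.1%}")  # 12.9%
```

This additive model slightly overstates the risk; compounding the annual probability instead (1 − 0.997^43) gives roughly 12%, so the essay's figure is a reasonable upper estimate.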

The picture thus far looks dismal, but it is important to remember that for many amputees, life itself can be dismal. People missing one or both hands are unable to perform, or perform only with difficulty, many tasks that the rest of the population takes for granted, such as shaving, cooking, and carrying large objects. They are also greatly limited in their capacity for human touch, which is so dependent on the hand (9). There are also cosmetic benefits to having two hands. Although these, in comparison to the functional benefits, are small, they must still be taken into consideration. The constant stares and unusual treatment received by those without two hands can be traumatizing.

Although limb transplant patients may never regain the full function of their new limbs, limb transplants make possible much more than do prostheses. After just two months, Jerry Fisher could toss a ball, use a paddle, tie and untie his shoes, and lift and carry a 35-pound crate. He was also ecstatic about his newfound ability to "Pick up the baby every morning just to hold him." After two years, Matthew Scott could throw a baseball, swing a light bat, write his name, feel the sensations of hot and cold, tie his shoes, pick up checkers, and use his cellular phone (7).

After reviewing the near exhaustive list of disadvantages and the comparatively short list of advantages, one would be tempted to render limb transplants simply unethical and selfish on the parts of both surgeon and recipient. After all, lives are not at risk with the loss of a limb as they are when organ transplants are necessary (6). However, it is not the quantity of disadvantages versus advantages that is at issue. Rather, it is their quality. One can never know the feeling of being without one or more limbs until one or more limbs are lost. One cannot judge Jerry Fisher as being selfish for wanting to pick up his baby son. One should not accuse the doctors who perform these highly controversial operations of being simply ambitious when they are greatly contributing to the field of medicine. As long as they have informed the patients of both the advantages and the risks of this new surgery and have psychologically tested the patients before going through with the operation, they have fulfilled what I feel are their ethical obligations.

I must therefore conclude that although limb transplants are not for everyone who is without a limb, they are nonetheless ethical. For these people, death is a risk worth taking. Although they do somewhat resemble the works of Frankenstein, to their recipients, limb transplants are modern miracles.

References

1) Hand Transplant, A plethora of information about hand transplants put together by Brown University students.

2) Time will tell for baby given dead twin's arm, News article on IOL, a South African news, classifieds, and information site.

3) Man Gets First Double-Arm Transplant, Article originally on ABCNEWS.com, on Marylin's Transplant Page, which includes over 300 news articles about transplants.

4) Hand Transplant History, A history of hand transplants and hand transplant technologies on the official transplant website.

5) Surgeons perform another successful hand transplant, A site combining the history of hand transplants with an article about Jerry Fisher, America's second hand transplant recipient.

6) ASSH / Hand Transplantation, A discussion on the ethics of hand transplants on the American Society for Surgery of the Hand site.

7) Arm-Hand Tx 2001, A selection of news articles associated with www.handtransplant.org.

8) bmj.com Benatar and Hudson 324 (7343): 971, A case study of two situations where limb transplants were considered but not performed.

9) Longing for Human Touch, Article originally in the Los Angeles Times, on Marylin's Transplant Page.


Partial Birth Abortion
Name: Katherine
Date: 2003-09-29 18:22:15
Link to this Comment: 6684


<mytitle>

Biology 103
2003 First Paper
On Serendip


In the continuing debates on the legality and morality of abortion, "partial birth" abortions have become a hot topic. What exactly is a partial birth abortion? Nebraska state legislation defines it as "an abortion procedure in which the person performing the abortion partially delivers a living unborn child before killing the unborn child and completing delivery" (1). While this definition may be fine for legal purposes, it does not address the actual procedures; we still do not know what a partial birth abortion actually entails.

The most common procedure is called dilation and evacuation, or D&E. D&E involves dismembering the fetus inside the uterine cavity and then pulling it out through the already dilated cervix (1). Another less common, but more controversial, method is the dilation and extraction method, or D&X. This procedure requires a woman to take medication several days in advance to dilate the cervix. Once the cervix has dilated, she returns to complete the procedure. When she returns, the physician turns the fetus around in the uterus so that it is positioned feet first, and then delivers the fetus until only the head remains inside the mother's body. At this point, the physician punctures the base of the skull and suctions out the contents of the fetus' head, causing the skull to collapse. The dead fetus is then removed from the woman's body (2). In each case the head (or more) is left inside the woman's body because, in order for a birth to have occurred under common law, the head of the fetus must leave the mother's body. Under the current interpretation of the United States Constitution, a person must be born in order to be protected by the government, so by leaving the head in the mother's body the procedure remains legally permissible (1).

Proponents of a ban on partial birth abortions cite what they see as the extreme cruelty of the procedures as violating the constitutional rights of the fetus. They believe that birth should be defined as occurring as soon as any part of the fetus' torso above the navel is visible, or when any of the fetus' body has left the mother (1). Many argue that since the fetus is undoubtedly alive during the procedure, the issue of whether or not an actual birth has occurred should be of little consequence (3). Since partial birth abortions are performed late-term, many of the fetuses could in fact be self-sustaining outside of the mother's body.

Those who oppose the ban argue that it jeopardizes the health and safety of the mother. Since all partial birth abortion bans presented in Congress and state legislatures have been vague as to which procedures they prohibit, the overall threat to women's health is too great and would place an "undue burden" on the women in question (1). Late-term abortion procedures that are not partial birth involve dismembering the fetus inside the uterus without cervical dilation, which can leave behind fetal tissue, or require the head to come out of the mother's body uncollapsed, which can result in a live birth (1). In either of these procedures, there is a higher risk of puncturing the uterus or damaging the cervix than there is with partial birth methods (2). Banning partial birth abortions would leave these procedures as the only option for late-term abortion. Additionally, none of the partial birth bans have included clauses allowing such procedures if they are necessary to the mother's health and well-being. This means that if a woman had to have a late-term abortion to save her own life, she would be forced to choose a riskier procedure. Since few women choose to terminate after seven to eight months of pregnancy for non-health-related reasons, it follows that most women seeking a late-term abortion would be put in a difficult position. This is what the anti-ban groups mean when they refer to an undue burden on the mother's health (4).

When I began my investigation, I was sure of my position on abortion rights, and convinced that little could change my mind, regardless of what the procedures actually involved. However, when reading the case made by the pro-ban side of the argument, I could not help but agree with certain things they said. Does birth really define life? I'm not so sure that I agree that it does. Does this mean that I completely disapprove of partial birth abortion? No – I still feel that a woman should be able to choose the safest method available, and in the case of late term abortions, partial birth procedures have obvious benefits. The information I gathered has, however, caused me to question my unconditional support of the procedures – should elective abortions really be allowed if the procedures are as, well, unpleasant as partial birth methods are? What I once thought were clear cut lines between the legal, the biological and the sentimental aspects of the issue have blurred.


References

1) Partial Birth Abortion Laws, A listing of federal and state laws and proposed legislation concerning partial birth abortions.

2) Abortion, Partial Birth, Basic descriptions of the process from a relatively unbiased viewpoint.

3) Partial Birth Abortion: Is It Really Happening?, Pro-ban viewpoint with illustrations of D&X procedure.

4) Partial Birth Abortions: Myths and Facts, Cited facts/myths appear in a pop-up window accessible through the page. Anti-ban viewpoint.


Choroidal Neovascularization
Name: HoKyung Mi
Date: 2003-09-29 19:15:54
Link to this Comment: 6685


<mytitle>

Biology 103
2003 First Paper
On Serendip

Choroidal Neovascularization

If you had to give up one of your five senses, which would it be? Would you give up your ability to see? A startling number of people lose their eyesight due to an eye disorder known as choroidal neovascularization. And soon I may be one of them. Although there is no known cure for this unfortunate disease, studies have been conducted to find the appropriate surgical treatment.

The outer portion of the 2.5 cm human eye is composed of three primary layers of tissue. The outermost layer is called the sclera, which acts as a protective coating. Within this layer the transparent cornea is present in the front area of the eyeball. Under the sclera is the choroid where the majority of blood vessels and the iris are located. The light-sensitive layer is known as the retina.

As mentioned, the choroid contains most of the eyeball's blood vessels. It is also the layer prone to bacterial and secondary infections. Choroidal neovascularization is a process in which new blood vessels grow in the choroid, through the Bruch membrane, and invade the subretinal space. Because there is currently no medical treatment for this disease, this abnormal growth can easily lead to the impairment of sight or complete loss of vision.

Three main causes of choroidal neovascularization are age-related macular degeneration, myopia, and ocular trauma. The Wisconsin Beaver Dam Study showed that 1.2% of 43- to 86-year-old adults with age-related macular degeneration developed choroidal neovascularization. The study also showed that choroidal neovascularization was caused by myopia in 5-10% of myopes. Ocular trauma, another cause of choroidal neovascularization, is for reasons unknown found more often in males than in females. More than 50 eye diseases have been linked to the formation of choroidal neovascularization. Even though most cases are idiopathic, the known causes are related to degeneration, infection, choroidal tumors, and trauma. Among soft contact lens wearers, choroidal neovascularization can be caused by a lack of oxygen to the eyeball. Unlike age-related macular degeneration, age is irrelevant to this cause.

Although no medical treatments have proven to be a cure for choroidal neovascularization, particular antiangiogenic substances such as thalidomide, angiostatic steroid, and metalloproteinase inhibitors are currently being tested. Through surgical testing, partial removal of choroidal neovascularization proved to be useless. Therefore the focus has been placed on photodynamic therapy, a procedure approved by the Food and Drug Administration.

In choroidal neovascularization (CNV) patients, the fluid and blood that accompany the formation of new blood vessels form scar tissue; this tissue attempts to repair the damage but is ultimately the cause of blindness. Photodynamic therapy is a treatment meant to stop the fluid as well as stunt further growth of the blood vessels. It is performed in two phases. In the first phase, Visudyne, a special dye that only attaches itself to abnormal blood vessels underneath the retina, is injected. Then a laser, which does not damage the retina, activates the compound, closing the anomalous blood vessels in the eye. CNV has been seen to disappear 24 hours after the procedure. Unfortunately, CNV has also been seen to reappear 2-3 months later in almost all patients, and the long-term benefits are still unknown. However, in a year-long Treatment of Age-related Macular Degeneration study of 609 patients, 16% of treated patients and 7% of placebo patients had visual improvement.
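
As a rough aside, the quoted trial figures can be compared using two standard epidemiological measures. The sketch below (in Python) is illustrative only: the helper functions and their names are my own, not part of the study report, and the 16% and 7% improvement rates are simply the numbers quoted above.

```python
# Illustrative sketch only: the functions here are hypothetical helpers,
# not part of the study; the 0.16 / 0.07 rates come from the text above.

def absolute_risk_difference(p_treated, p_placebo):
    """Extra fraction of patients who improve because of treatment."""
    return p_treated - p_placebo

def number_needed_to_treat(p_treated, p_placebo):
    """How many patients must be treated for one additional improvement."""
    return 1.0 / absolute_risk_difference(p_treated, p_placebo)

ard = absolute_risk_difference(0.16, 0.07)
nnt = number_needed_to_treat(0.16, 0.07)
print(f"Absolute risk difference: {ard:.2f}")  # 0.09
print(f"Number needed to treat:   {nnt:.1f}")  # about 11 patients
```

Read this way, the study suggests that roughly one additional patient in eleven saw visual improvement attributable to the treatment, which helps put the modest-sounding percentages in perspective.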

Another treatment being tested, in a study called the Submacular Surgery Trials, is an experimental procedure known as submacular surgery. This procedure is performed from the inside of the eye in order to work on the retinal tissues and to remove and replace the vitreous fluid. The downside of this procedure is that, in order to heal, the patient must remain face-down for several weeks after the fluid is replaced.

It is most unfortunate that there are still no effective medical treatments nor any completely successful surgical treatments, because I was recently diagnosed with choroidal neovascularization in both of my eyes. Although the knowledge I have gained by researching this disease has been personally enlightening, the facts are frightening as well. To remain optimistic, though, it is somewhat comforting to know that there are studies, such as the Wisconsin Beaver Dam Study and the Submacular Surgery Trials, working towards a cure.

References

1)Unified Medical Language System, Medical term dictionary
2)Submacular Surgery, Information about submacular surgery
3)The Royal College of Ophthalmologists, Information about photodynamic therapy
4)Barnes Retina Institute, Education website on photodynamic therapy
5)Ocular Photodynamic Therapy for Choroidal Neovascularization, Description of ocular photodynamic therapy
6)Eye (anatomy), Explanation and overview of the eyeball
7)eMedicine, Journal article on subretinal neovascular membranes
8)eMedicine, Journal article on choroidal neovascularization


Bipolar Disorder and the "War on Drugs"
Name: Christina
Date: 2003-09-29 19:19:41
Link to this Comment: 6686

Bipolar disorder, also known as "manic-depressive illness," is a brain disorder that results in unusual shifts in a person's mood, energy, and ability to function. More than two million American adults (about one per cent of the population aged eighteen and older in any given year) are afflicted by this affective disorder (1). Yet, because it cannot be revealed by a blood test or other physiological means, patients may suffer for years before it is properly diagnosed and treated. Fortunately, once one is diagnosed with bipolar disorder, the acute symptoms of the disease can be effectively mitigated by lithium and certain anticonvulsant drugs, the most popular being Depakote (also known as valproate).


However, not all drugs are created equal. The New York Times recently featured an article explaining that lithium, the first drug utilized to treat bipolar disorder, is more effective at preventing suicide in people who have manic-depressive illness than Depakote, which has become the most commonly prescribed drug (2). The new study, published in The Journal of the American Medical Association, found that patients taking Depakote were 2.7 times as likely to kill themselves as those taking lithium (2). Although studies conducted prior to this have concluded that lithium could in fact prevent suicide, this report is the first to compare suicide and attempted suicide rates in lithium and Depakote users (2).

Approximately fifty years ago, lithium "opened the modern era of psychopharmacology (3)." Its therapeutic effect is indeed very rapid. Administered in the form of lithium carbonate, it is most potent in treating the manic phase of a bipolar affective disorder; once the mania is eliminated, depression usually does not ensue (4). Such information is supported by many open studies, and at least ten controlled, double-blind studies. One study proclaimed that mania was reduced by 64 per cent, and depression, 46 per cent (3). The duration of both manic and depressive recurrent episodes was also reduced (by 19 and 32 per cent, respectively). The most striking impact was found for the hospitalization rate, which fell by 82 per cent (3). This has considerable economic significance, as hospitalization accounts for a major proportion of direct costs in major psychiatric illness. It is important to note that all of this evidence far exceeds the available support for possible alternatives to lithium treatment, including application of anticonvulsant, antipsychotic, or sedative agents.

Still, investigators have yet to discover the pharmacological effects of lithium that are responsible for its ability to eliminate mania. Many posit that the drug stabilizes the population of certain classes of neurotransmitter receptors in the brain (particularly serotonin receptors), preventing wide shifts in neural sensitivity, and in turn, influencing mood (4). Unfortunately, however, some patients cannot tolerate the side effects of lithium, and because of the potential danger of overdose researchers have been searching for alternative medications. This spurred the trend of prescribing Depakote.

In a review published by the American Psychiatric Association, valproate (Depakote) was reported to be more efficacious than lithium among manic patients with mixed symptoms (5). Moreover, the side effects were minimal: sedation and gastrointestinal distress were common initially but typically resolved with continued treatment or dose adjustment. Depakote also has a "wide therapeutic window"; in other words, inadvertent overdose is uncommon, and even intentional overdose is less noxious than an overdose of lithium. Therefore, it is only in rare instances that Depakote is life-threatening.

Nevertheless, Dr. Frederick K. Goodwin, the senior author of the study featured in The Journal of the American Medical Association and director of the psychopharmacology research center at George Washington University, argues, "Lithium is clearly being underutilized...the real tragedy is that a lot of young psychiatrists have never learned to use lithium (2)." Simultaneously, however, Dr. John Leonard, a spokesman for Abbott Laboratories, the maker of Depakote, questioned the findings; he noted that the studies looking back at patients' records were inherently flawed, and not as reliable as studies in which patients were randomly assigned by researchers to take one drug or the other (2). Because the study is potentially flawed, I don't believe that doctors should suddenly stop prescribing Depakote altogether; as mentioned above, there are several benefits to taking Depakote, including a very low risk of life-threatening side effects.

Yet, based on the fact that lithium has proven successful for so many years, I also agree that it has become underutilized. It is imperative that young physicians are continually taught how and when to prescribe lithium. I personally believe the solution is that bipolar disorder must be treated on a case-by-case basis. Evidence presented lends itself to this. For example, it was asserted earlier that Depakote is more efficacious in treating manic patients with mixed symptoms. Therefore, if a patient manifests mixed symptoms, Depakote should be more carefully considered. However, if prescribed, the patient should still be monitored for surfacing symptoms of suicide. If these symptoms emerge, his or her Depakote prescription should be discontinued or minimized. Likewise, if lithium is initially prescribed and not effectively treating a particular patient, Depakote, or a combination of these two treatments should be sought. With regard to treating bipolar disorder, there is indeed a "war on drugs"; however, tenacious monitoring of patients and critical treatment experimentation and evaluation may help physicians soon find peace.


Sources Cited
1. National Institute of Mental Health: Bipolar Disorder
http://www.nimh.nih.gov/publicat/bipolar.cfm

2. New York Times, 9/17/03: An Older Bipolar Drug Is Linked to Fewer Suicides in a Study (Denise Grady)

3. The British Journal of Psychiatry, 2001. Long-term Clinical Effectiveness of Lithium Maintenance Treatment in Types I and II Bipolar Disorders (Leonardo Tondo, MD)
http://bjp.rcpsych.org/cgi/content/full/178/41/s184

4. Physiology of Behavior (textbook, 7th edition, Neil R. Carlson)

5. American Psychiatric Association. Practice Guidelines for the Treatment of Patients With Bipolar Disorder; Part B: Background Information and Review of Available Evidence
http://www.psych.org/clin_res/bipolar_revisebook_5.cfm


Tattoos and Their Adverse Reactions
Name: Romina Gom
Date: 2003-09-29 20:28:37
Link to this Comment: 6689


<mytitle>

Biology 103
2003 First Paper
On Serendip

Tattoos have been around for centuries; the Egyptians tattooed themselves as symbols of fertility and strength. In recent years tattoos have become increasingly popular, especially among teenagers. They range in size, design, color, and location. However, as their popularity grows, so do concerns over the safety and risks of tattoos. The United States Food and Drug Administration (FDA) does not regulate tattoos, leaving the burden of tattoo safety and regulation up to individual cities and states. Some of the risks that come with getting a tattoo are infection at the site of the tattoo, allergic reaction to the tattoo dye, the spread of diseases such as HIV and hepatitis C, granulomas, and keloid formation.

A tattoo is a series of puncture wounds made with a needle that carries dye into different levels of the skin (1). Infections can occur when a tattoo parlor does not use proper sanitation procedures. Since the FDA does not monitor tattooing and regulations can vary from state to state, in 1992 the tattoo industry created the Alliance of Professional Tattooists (APT), a non-profit organization, to address the issues of tattoo health and safety. The APT attempts to monitor and standardize infection control procedures, and it even gives several seminars a year on tattoo safety. However, membership is not required for a practicing tattooist, and tattoo shops are not required to follow the same sterilization practices as other places that use needles, such as hospitals and doctors' offices (2). At places that do not follow these rules, the risk of infection is greater and can pose serious side effects to the person getting the tattoo.

According to a report published in March 2001, people with tattoos are nine times more likely to be infected with hepatitis C than those who are not tattooed (2). This report became mainstream news a year after its publication when Pamela Anderson came out saying that she had become infected by sharing a tattoo needle with her ex-husband Tommy Lee. Hepatitis C is a bloodborne disease that can be spread when tattoo needles are used on multiple people and not thrown away and when equipment is not sterilized properly. Seventy-five percent of people infected with hepatitis C will develop long-term infection that attacks the liver, leading to cirrhosis, liver failure, and liver cancer (2).

Allergic reactions, although rare, can occur. Since the FDA does not regulate tattoos or the dyes used for them, some dyes that are not meant to be in contact with human skin are used in order to make the various shades used in color tattoos. While most tattoo dyes are made from color additives that have been approved for cosmetics, none have been approved for skin injections. Sometimes, in order to make a new shade, unapproved pigments are added to the dyes (3). Tattoo ink manufacturers are under no obligation to label the ingredients used, and these unapproved pigments are sometimes made with printers' ink or car paint. The degree of the allergic reaction can differ from person to person. The most common is skin irritation or swelling, which can be troublesome because there is no way to get the dye out of the skin; however, it can be fairly easy to treat with over-the-counter medicine. Allergic reactions do not have to occur right away. Some people experience these reactions after they have had several tattoos or after years of having a tattoo.

Other adverse reactions can include granulomas, which form when the body rejects the tattoo as a foreign object and builds nodules around it. Nodules are small knot-like protuberances made up of a mass of tissue or aggregation of cells (4). Another reaction is keloid formation, in which scars grow beyond their natural boundaries; however, most people report this happening after they have had a tattoo removed (3).

If someone is dissatisfied with a tattoo, it is important to note that removal may be even more painful than the tattoo itself, and it can be very expensive. A tattoo that costs $50 can cost about $1,000 to remove. There are different methods to remove a tattoo, including laser treatments, abrasion, scarification, and surgery. Laser treatments lighten the tattoo and can take several visits over the span of weeks or months to work. A common side effect is a lightening of the skin's natural color in the affected area (3). There have also been reports of people suffering allergic reactions after laser treatment; this happens because the laser can cause the tattoo dye to release allergenic substances into the body. Another method, dermabrasion, erodes the top layers of skin using a wire brush or sanding disc; this process can be very painful and can leave a scar. Scarification treats the tattoo with an acid solution so that a scar is left in its place.

Anyone who chooses to get a tattoo must be aware of the risks associated with it. A spur-of-the-moment or uneducated decision could lead to complications later on. When choosing a tattooist, it is important to ask questions and make sure that proper sanitation procedures are being followed. A safety-conscious tattooist will gladly share this information with you. If not, go somewhere else.


References

1)WebMD, Tattoo Problems

2)WebMD Articles, Anderson Says She has Hepatitis C, an article by Michael Smith, MD

3)US Food and Drug Administration, Tattoos and Permanent Makeup

4)Dictionary.com, Definition of Nodules


Methylphenidate: Calming Chaos or Cultural Genocid
Name: Melissa Ho
Date: 2003-09-29 22:43:30
Link to this Comment: 6692


<mytitle>

Biology 103
2003 First Paper
On Serendip

Methylphenidate: Calming Chaos or Cultural Genocide?
Melissa Hope

Energetic, rowdy, animated. These adjectives, often used in describing the routines and milieu of the child, are now not as accurate as they once were. Words such as focused, calm, and attentive can be applied more readily. The differentiating characteristic between these two groups—methylphenidate.

A central nervous system (CNS) stimulant, methylphenidate—more commonly known as Ritalin—is a drug prescribed in the treatment of Attention-Deficit/Hyperactivity Disorder (AD/HD) (1). AD/HD, by definition, is "developmentally inappropriate behavior, including poor attention skills, impulsivity, and hyperactivity" sustained for more than six months and usually appearing during childhood (2). Figures estimate that approximately 3-5% of children are affected by the disorder. Differing views, however, exist about the legitimacy of the majority of these diagnoses. In light of this, the object of this assessment is to examine the bodily and societal implications of methylphenidate.

The need for Ritalin and other CNS stimulants arises from a decreased amount of dopamine—a neurotransmitter closely linked to the motivational process (3). A deficiency of this neurotransmitter can lead to difficulty in focusing and agitated behavior, among other traits (1). Methylphenidate, serving as a stimulant, augments the release of this neurotransmitter. The resulting state is similar to that produced by caffeine (on a milder scale) or by amphetamines (1). This attribute can lead to the somewhat addictive nature of the drug.

"Ritalin, Ritalin, seizure drugs, Ritalin. So goes the rhythm of noontime for Mary Jane Kemper, nurse at Donald McKay School in East Boston, as she trots her tray of brown plastic vials and paper water cups from class to class, dispensing pills into outstretched young palms" (4). This scene, taken from a New York Times article, is steadily becoming commonplace. In recent years, the number of children diagnosed with AD/HD has increased drastically—to more than four million children (5). The statistics behind the disorder are rather shocking:

- "The use of medication to treat children between the ages of 5 and 14 also increased by approximately 170 percent."

- "The number of preschool children being treated with medication for ADHD tripled between 1990 and 1995."

- "The number of children ages 15 to 19 taking medication for ADHD has increased by 311 percent over 15 years."

- "The U.S. produces and consumes about 85 percent of the world's production of methylphenidate" (6).

Simply put, the trend demonstrates an increasing rate of AD/HD diagnosis and treatment. The effect, in turn, is a sizable circulation of methylphenidate.

Controversial theories and incongruous studies present two perspectives on the long-term impact of methylphenidate use. The National Institute on Drug Abuse (NIDA) has been pursuing further studies to determine whether AD/HD can lead to increased risks of substance abuse and addiction. Two theories examine differing catalysts for addiction: the medications used in the treatment of AD/HD, and the disorder itself (7). The former follows the premise that over time the brain becomes somewhat desensitized to the stimulant; with time, a greater quantity is required to achieve the rewarding properties of the medication. In the long run, this can lead to dependence.

Conversely, recent studies from Harvard Medical School support the theory that treating cases of AD/HD with medication reduces the risk of substance dependence by eighty-four percent (7). One, however, must question the accuracy of the original diagnoses (as misdiagnosis of AD/HD is not uncommon), the size of the sampling group, and the long-term effects, both mental and physical.

Amidst a generation familiar with AD/HD—be it directly or indirectly—the effects ripple through society. Scenes such as the one described earlier demonstrate the ever-present fixture of "wonder drugs" in our society. As a diagnosed generation leaves the supervised clutches of elementary school, they bring with them the knowledge, usage, and prescriptions for methylphenidate. Whether it is the enterprising high school student looking to make a "quick buck," the college student looking for a means to pull an "all-nighter," or the young adult looking for an inexpensive release, the line between medicinal and recreational use is obscured.

This recreational use recalls the frequent use of cocaine and amphetamines in the 1960s and 1970s. The definitive characteristic shared among these substances is that the body cannot distinguish among them (8). "But unlike cocaine, which garnered a name as a social drug used at clubs and parties, Ritalin tends to be taken when people are alone and want to squeeze more hours out of the day" (8).

AD/HD has become a cultural phenomenon of sorts—provoking issues surrounding the legitimacy of the disorder, as well as the secondary problems resulting from the treatment. Inescapably, our society has changed as a result. Is this for the better, as recognition and treatment allow children to engage in life with greater ease? Alternatively, have these pharmaceuticals led to a societal demise? Is it acceptable to chemically engineer a balanced and happy child?

References

1) Methylphenidate, National Institute on Drug Abuse information sheet.
2), 3) Methylphenidate works by increasing dopamine levels, BMJ General Medical Journal article.
4) For School Nurses, More Than Tending the Sick, New York Times.
5) Wonder Drug Misused, ABCnews.com.
6) Statistics confirm rise in childhood ADHD and medication use, Education-World statistical information.
7) Medications Reduce Incidence of Substance Abuse Among AD/HD Patients, NIDA journal article.
8) Ritalin Abuse Spreads to Adults, The Gazette (Montreal).


Why Do We Blush
Name: Maria Scot
Date: 2003-09-29 23:37:43
Link to this Comment: 6697


<mytitle>

Biology 103
2003 First Paper
On Serendip

I have blushed easily all my life. I simply accepted it as unavoidable that
whenever I spoke in class, arrived somewhere late or was singled out for praise or
correction that my face would redden significantly. As a young child I simply assumed
that everyone blushed as much as I did, and that it was only my unusually pale skin that
made my tendency towards blushing more apparent. But this is not, in fact, the case.
Some people blush more than others do and some families blush more than others do (2). Some attribute blushing to social phobia, though it differs in that it
is not accompanied by a change in pulse rate or blood pressure (1).
Blushing is generally thought to be a response to embarrassment, but is the emotion that
triggers blushing as broad and general as "embarrassed"? Or are there more nuances to
the emotional cause of what Darwin termed "the most peculiar and most human of all
expressions" (2)? My personal experience is that I tended to blush not exactly when embarrassed per se, but rather whenever I felt I was making, or had made, myself vulnerable to the criticism of others: when something I had done, such as arriving late, broke a social rule. What I could not understand was the purpose blushing served;
what use could this phenomenon have? It became clear as I researched the issue that
one's propensity for blushing was directly linked to one's sensitivity to the opinion of
others (4). However, the actual phenomenon of blushing is an appeasement behavior designed to signal to the rest of the group that the individual in question realizes their social transgression and asks for the group's approval or forgiveness (1). People like myself, who blush frequently, have an oversensitive and therefore inaccurate perception of what constitutes a breach of decorum, resulting in more frequent episodes of blushing than in someone who does not perceive themselves as frequently committing social transgressions. The sources of negative self-attention that create this need to appease the group, and by extension lead to blushing, can be divided into three categories: threats to public identity, scrutiny, and the accusation of blushing (3). All of these result in negative self-attention and the sense that some social norm has been breached, creating the perceived necessity for an appeasement behavior, in this case blushing.

Threats to public identity, or the perceived negative reactions of others, often lead to blushing (3). Indeed, many people cited being caught doing something of which they are ashamed as leading to blushing (3). This is consistent with blushing as an appeasement behavior: the person caught doing something that they perceive to be "shameful" or "improper" feels the need to signal to the rest of their group that they recognize their transgression, that they reject their own actions because they share the values of the group's other members, and that the group should therefore accept them despite their mistake (1). Babies, for example, who have no sense of social norms or how they
are perceived by others, do not blush at all (2). Blushing increases,
though, when strangers witness something that an individual views as unflattering or
which puts them in a negative light. For example, when three people together watched a
video of one of them singing, the person who had been recorded blushed much more than
the strangers (5). I personally remember the torture of being sent to
theatre camp and forced to sing at the end of the summer program. The only way that I
could get through the song was to stand sideways on the stage looking away from the
audience, into the wings. The sight of all the strangers watching me was simply more
than I could take.

Scrutiny and receiving large amounts of attention may also lead to blushing even
though it may not be negative attention (3). The most obvious example of this is when adolescents of the opposite gender are in one another's presence. This is less a response to a negative reaction on the part of the observer than a fear of insufficiency on the part of the blusher (3). The
obvious conclusion to draw from this is that being the center of attention, positive or
negative, will lead to a heightened sense of self-awareness. The blusher may feel shame
or humiliation if they are the subject of negative attention, for example a publicly
chastised student. The blushing would then be intended to apologize, to signal their
awareness of the inappropriate nature of their behavior to all who saw it (3). It is a fairly effective way to mitigate further attack, and people tend
to see it as a conciliatory gesture (6).

The accusation of blushing has been seen to intensify the blusher's reaction. The implied inference is that if 'you are blushing', then 'you must have done something worth blushing about'. The expectation of blushing can become a self-fulfilling prophecy, and the same is true of verbal feedback that blushing is in fact taking place. This is because a
propensity to blush is a serious source of anxiety to an individual who from past
experiences expects blushing to take place (7). In general, having one's blush pointed out makes a person much more socially uncomfortable, though it is often a source of amusement for those who are not blushing (7).

While the exact causes of blushing vary widely from individual to individual, I feel that my own personal experiences with blushing are very much in keeping with the three blushing-conducive situations discussed in the sources above. If blushing is indeed an appeasement behavior, that explains much of why, despite its apparent lack of use, it plays a role in our culture. It is an interesting link between one's physical self
and one's mental self. What one finds embarrassing or worth apologizing for can be seen
in an involuntary physical response.

Sources

1) Stein, D J, Bouwer, C. Blushing and social phobia: a neuroethological speculation. Medical Hypotheses 1997; 49: 101-108.

2) Darwin, C. The Expression of the Emotions in Man and Animals. Chicago: Chicago University Press, 1872/1965.

3) Leary M R, Cutlip W D II, Britt T W, Templeton J L. Social blushing. Psychological Bulletin 1992; 3: 446-460.

4) Self-consciousness, self-focused attention, blushing propensity and fear of blushing. An article dealing with the role that self-awareness plays in the cause and frequency of blushing.

5) Empathetic Blushing in Friends and Strangers. An article dealing with the issue of blushing out of sympathy or empathy for another.

6) Blushing may signify guilt. An article exploring the role that blushing plays in ambiguous situations of guilt or wrong-doing.

7) The impact of verbal feedback about blushing on social discomfort and facial blood flow during embarrassing tasks. An article exploring how being made aware of one's blushing tendencies by others affects the individual who is blushing.


Early Childhood Cognitive Development
Name: Brianna Tw
Date: 2003-09-30 00:16:42
Link to this Comment: 6698


<mytitle>

Biology 103
2003 First Paper
On Serendip

America has many programs for graduating students that are involved with education and children. While any college student can appreciate education, I suspect that few understand the importance of early childhood development. Having committed to apply for a position in Teach for America, I want to better understand why it is so important to "get 'em while they're young."

In 2001, the US Department of Education, the Academy of the Sciences, and the Foundation for Child Development conducted a study on early childhood development. Several interesting scientific ideas and trends in childhood development emerged from the study. The questions surrounding this research were: how important is the early life of a child? Which early years are most important? Why are later years not more important? In order to better plan education policy, discussing these questions is necessary.

The portion of the study I find most convincing is that regarding neuroplasticity. Neuroplasticity, or brain plasticity, is the brain's ability to reorganize neural pathways based on new experiences. (1) Simply put, every day we experience and learn new things. In order to incorporate this new information into our brains, the brain must reorganize the way it processes that information. Thus, as we learn things, the brain changes.

Neuroplasticity is important because, while it continues throughout the life of every individual, it is closely linked to the rate of brain development and growth. During rapid periods of brain growth, synaptic pruning occurs. Synaptic pruning is the elimination of the weaker synapses in the brain, facilitating the growth of a stronger, more efficient brain. (2) As the brain grows, beginning in the newborn, its neurons develop synapses, which link neurons to one another and transmit information between them. At first this growth is uninhibited. However, as the infant reaches toddler age, the brain begins to eliminate some synapses between neurons in order to help the brain transmit information more efficiently. The synapses and neurons that were activated most during growth are the ones that will be preserved. This process helps to create a brain better equipped to absorb knowledge. (1) To optimize synaptic pruning, some studies show that "a specific learning process" may help the brain form itself into a more useful tool. (2)
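The "use it or lose it" logic described above can be pictured as filtering a set of connections by how often they fired. The sketch below is only a loose illustration; the connection names, activation counts, and threshold are invented for the example, not taken from the study.

```python
# Toy model of synaptic pruning as described above: connections that were
# activated most during growth survive, while the weakest are eliminated.
# All numbers here are invented purely for illustration.
synapses = {"A-B": 42, "A-C": 3, "B-D": 17, "C-D": 1}  # times each connection fired
threshold = 5  # hypothetical cutoff for "activated enough to keep"

# Keep only the frequently used connections; the rest are pruned away.
pruned_brain = {pair: count for pair, count in synapses.items() if count >= threshold}
print(pruned_brain)  # → {'A-B': 42, 'B-D': 17}
```

The result is a smaller but more efficient network, which is the intuition behind the claim that early, well-targeted stimulation shapes which pathways the brain preserves.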

The Eager to Learn study identifies five criteria that are based on neuroplasticity. The first, attention, stresses the need to hone the development of this skill during the first five years of life, the years with the most rapid brain growth. The second, priming, or the acquisition of knowledge through sight and sound, may reshape the synapses of the brain based on stimulus. Practice, the third criterion, also uses neuroplasticity to absorb knowledge into the brain. The fourth and fifth criteria are related to language: language learning and rule learning both demonstrate that the sooner these elements of education are specified and controlled, the more effective the brain's synaptic pruning process will be. In other words, more weak synapses will be eliminated. Eager to Learn reports that once English orthography, a component of rule learning, is set in the brain, the brain may become "strongly resistant to change." (3)

Based on these criteria, in combination with an understanding of the processes of neuroplasticity and synaptic pruning, several observations may be made. First, we understand that the greatest amount of synaptic pruning occurs during the most rapid period of brain growth, that is, during the earliest years of a child's life. Second, when "a specific learning process" is provided for a child, synaptic pruning becomes more successful. Thus, we may conclude that in order to prepare a child's brain for the best possible education, it is best to develop programs that educate children during the earliest years of life.

This conclusion has been reflected in education policy. President Bush, in his education campaign No Child Left Behind, has acknowledged the importance of early childhood cognitive development. (4) Additionally, his criteria for reading education and tutoring focus on many of the criteria presented in the Eager to Learn study. Judging from these studies and my observations, I feel one may confidently conclude that addressing children's education while they are youngest is most beneficial not only for their education, but also for the physical development of their brains. This information is useful in many fields of study: the sciences, sociology, political science, law, etc. Neuroscience still has much to learn about early childhood cognitive development. However, at present, the information seems to support the creation of a proper education for young children.


References

1)Neuroscience Consultant, Prepared by Erin Hoiland

2)Synaptic Pruning in Development, Online Version of a Text


3)Eager To Learn , Study, Online Version of Text

4)US Department of Education , President Bush's Initiatives


Malaria and Global Responsibility
Name: Manuela Ce
Date: 2003-09-30 00:24:53
Link to this Comment: 6699


<mytitle>

Biology 103
2003 First Paper
On Serendip

The United Nations has declared 2000-2010 the "decade to roll back malaria." The social, economic and human effects of this disease are dramatic: 40% of the world's population is currently at risk for malaria, and it kills an African child every 30 seconds (7). The presence of malaria, like that of most other endemic tropical diseases, is directly related to the precarious living conditions of people in developing countries, but it is also a cause that hinders growth and development: "In Africa today, malaria is understood to be both a disease of poverty and a cause of poverty." (6). This essay aims to show the connections between disease and society in specific regard to malaria, as well as the need for a more comprehensive analysis of cultural, environmental and socio-economic factors in scientific study to attain a better understanding of the implications of malaria and find better preventive measures and possible cures.

The Disease: Malaria is a life-threatening parasitic disease in which the female Anopheles mosquito (who takes blood to feed her eggs) transmits the parasite from human to human. Transmission can also occur through infected needles among drug users and, occasionally, in blood transfusions. It is a protozoal infection (protozoa are single-celled organisms). There are four species of Plasmodium protozoa that cause human malaria: Plasmodium falciparum, P. vivax, P. ovale and P. malariae. Malaria caused by P. falciparum is the most serious (3). The initial stage of the disease is characterized by nausea, muscular pains, headaches, fatigue, slight fevers and diarrhoea, and later gives way to more serious intermittent fevers. Because of the vagueness of these symptoms, misdiagnosis is common. More acute forms of malaria cause organ failure, convulsions, spleen enlargement, anaemia, impaired consciousness, persistent coma and death.

The History: The name malaria ("bad air") comes from the early belief that tropical swamps caused the disease. It is one of the most ancient infectious diseases recorded; as early as the 5th century BC, Hippocrates (often referred to as the "father of medicine") recorded observations of malarial fevers. Investigation into the disease gained importance in the late 19th century not only because of its relatively high prevalence in countries such as Italy, but because of the growing presence of mainly English, French and Spanish soldiers and colonizers in Southern Asia, Africa and the Americas, all areas of high risk. In the 1880's, Alphonse Laveran, a French army surgeon, was the first to distinguish the malarial parasite and recognize it as the cause. Later in the same decade, the British Dr. Ronald Ross showed that the Culex mosquito transmits malaria in birds, a finding for which he earned the 1902 Nobel Prize in Physiology or Medicine. Ross' discovery allowed his Italian contemporaries Giuseppe Bastianelli, Amico Bignami and Giovanni Battista Grassi to find that malaria in humans is also transmitted by mosquitoes, though of a different species: the Anopheles. Sanitation as a basic preventive measure resulting from these observations became a key factor in significantly diminishing death from malaria, which allowed the completion of historical endeavors such as the Panama Canal, a major agent of economic development.

Social, Economic and Human Impact: Along with tuberculosis and HIV/AIDS, malaria is one of the most important public health challenges that hamper development in the world's poorest regions. Malaria exists in 100 countries around the world, the vast majority of which lie in the least developed, tropical areas of Central and South America, Asia, and most prevalently, Africa, where at least 90% of
the annual 1 million deaths due to the disease occur (4). The majority of those who die are children under five years of age and pregnant women. Acute cases of malaria amount to 300 million per year. Roll Back Malaria, a global partnership between the World Bank, UNICEF, and UNDP to "halve the world's malaria burden by 2010", states on its website, "Malaria is one of the major public health challenges eroding development in the poorest countries in the world. Malaria costs Africa more than US$ 12 billion annually. It has slowed economic growth in African countries by 1.3% per year, the compounded effects of which are a gross domestic product level now up to 32% lower than it would have been had malaria been eradicated from Africa in 1960." Consequently, the development of many nations and of the entire Sub-Saharan region, along with the quality of life of millions of people, would greatly improve with the control, prevention and cure of malaria: "for developing economies this has meant that the gap in prosperity between countries with malaria and countries without malaria has become wider every single year." (5). In countries burdened by malaria, foreign and local investment decrease, workers are unable to attend their jobs, and the tourist industry remains underdeveloped. Also, more importantly and harder to explain in economic terms, malaria causes great human suffering. Children cannot regularly attend school, and often cannot reach their potential due to the neurological damage caused by malaria. This is where scientific research becomes an issue of social justice.
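To see how a 1.3% annual growth penalty can compound into a shortfall on the scale of the "up to 32% lower" GDP figure quoted above, here is a rough back-of-the-envelope sketch. Assuming a constant drag every year is a simplification for illustration, not Roll Back Malaria's actual methodology.

```python
# Rough check of how a 1.3-percentage-point annual growth penalty compounds.
# A constant yearly drag is assumed purely for illustration.

def gdp_shortfall(drag: float, years: int) -> float:
    """Fraction by which GDP ends up below the malaria-free counterfactual,
    approximating the ratio of the two growth paths as (1 - drag)**years."""
    return 1 - (1 - drag) ** years

# A 1.3% yearly drag compounds to roughly a one-third shortfall over ~30 years.
for years in (10, 20, 30, 40):
    print(f"after {years} years: {gdp_shortfall(0.013, years):.1%} below counterfactual")
```

After 30 years the shortfall is about 32%, which shows how a seemingly small annual penalty accumulates into the large gap the partnership describes.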

Possibilities... Although scientists are hopeful, there is no proven effective vaccine currently on the market, and it may be years until one is viable and can be administered. Drugs are expensive (particularly because those who are most affected are poor), and resistance to them presents a heavy economic load, for "The average cost of discovering and getting a new drug to the market is estimated to be at least $500 million." (2). Mosquitoes are also developing resistance against insecticides. Therefore, while there needs to be an active search for new drugs, and especially a vaccine, wider-scale and inexpensive measures must be taken. The Roll Back Malaria global partnership encourages countries to promote the usage of mosquito nets treated with insecticide (ITNs), especially in homes where there are young children and pregnant women. Stronger health care systems that extend their coverage to rural areas need to ensure that pregnant women can receive prompt Intermittent Preventive Treatment. In addition, the availability of updated drugs and treatment (artemisinin combination therapies, for example) once there is infection among the population may ensure the survival of individuals and well-being among societies. This is, in large part, the responsibility of the public sector, while globally, debt relief and aid for countries burdened by malaria are under discussion. The private sector, pharmaceutical industries included, may use its networking systems for educational purposes and for the distribution of medicine. The donation of capital now will result in increased productivity later, and reduce human suffering, which in a globalized world is the responsibility of all.


References


1)Salud y desarrollo. Aspectos socioeconomicos de la malaria en Colombia, From the Virtual Library Luis Angel Arango, a review of a published book on the socioeconomic impacts of malaria in Colombia

2)Introducing MMV, the Medicines for Malaria Venture, Projects for low cost prevention

3)Encyclopaedia Britannica Online , Definition, discovery, history

4)Malaria Disease Info, UNDP-World Bank-WHO special programme for research and development, basic info sheet

5)RBM Information Sheet , A more comprehensive info sheet, focused on Africa

6)Roll Back Malaria , A global partnership

7)World Health Organization , self explanatory


Placebos: Can a Sugar Pill Cure?
Name: Julia Wise
Date: 2003-09-30 11:07:29
Link to this Comment: 6709


<mytitle>

Biology 103
2003 First Paper
On Serendip


Placebo: the word is Latin for "I will please." Originally it started the Vespers for the dead, often sung
by hired mourners, and eventually "to sing placebos" came to mean to flatter or placate (1). Later, the
term was used for any kind of quack medicine. Today, it is a medicine that has no value in itself, but
improves a patient's condition because the patient believes it to be potent.

Belief in a swallowed sugar pill or saline injection has been shown to produce real reactions. 80% of
patients given sugar water and told it is an emetic respond by vomiting (1). People often show an
allergic response to something they believe they are allergic to, even if it is only plastic flowers. Does
this strong reaction hold true for more serious medical conditions, then?

There are three explanations as to why placebos may work. The first, called the opioid model, says that
the positive response is a result of endorphins released in response to swallowing a pill, etc. The
second is the conditioning model, which holds that the important factor is not the medicine, but contact
with a medical professional. Because patients are used to getting better after they go into a doctor's
office and talk to someone in a white coat, they are psychologically conditioned to get better after
contact with the medical environment. The last is the expectancy model, in which patients improve
because they expect the placebo to have a certain effect.

There are even more arguments, though, as to how the placebo effect has been exaggerated or
fabricated. Some studies include additional treatment along with the medication, so simply being in a
study may produce results (1). Some studies on placebos often show similar rates of success for a drug
and a placebo, but do not include a control in which no treatment is used. In such studies, it is
impossible to tell what improvement was actually due to the placebo and what would have happened
anyway (3). Patients may also tend to report improvement because they think this is what is expected.
This is especially true with poorly designed response forms with more options for improvement than
worsening. Many illnesses, like colds, improve by themselves given time. Others, like depression and
chronic pain, fluctuate. Thus improvement in these types of illness might well have happened without
any medicine or placebo. Indeed, there are some who argue that antidepressants have value only as
placebos, and should be debunked (4).
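The missing-control argument above can be illustrated with a toy simulation. The 60% natural recovery rate below is invented for the example: when an illness often improves on its own, a placebo arm appears to "work" even if the placebo itself adds nothing, and only an untreated arm reveals this.

```python
import random

random.seed(42)  # fixed seed so the toy result is reproducible

# Toy illustration of the missing-control argument: if an illness resolves
# on its own 60% of the time (a made-up rate), a placebo arm "improves"
# even when the placebo contributes nothing at all.
def arm(n=10_000, natural_recovery=0.60, placebo_effect=0.0):
    """Return the fraction of n simulated patients who improve."""
    p = natural_recovery + placebo_effect
    return sum(random.random() < p for _ in range(n)) / n

placebo = arm()    # sugar pills, zero real effect in this model
untreated = arm()  # no treatment at all
# Both arms improve at about 60%; without the untreated arm, the placebo
# arm's improvement could be mistaken for a genuine placebo effect.
print(f"placebo: {placebo:.1%}  untreated: {untreated:.1%}")
```

This is exactly why studies that compare a drug only against a placebo, with no no-treatment control, cannot say how much of the "placebo response" is really just the illness running its course.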

Other problems exist in testing placebos' effectiveness. They cannot be used in studies on life-
threatening or degenerative illnesses, since taking an inactive treatment rather than a real one could do
patients real harm. Tests in which patients know they may be taking placebos show different results
from tests in which they are given only a drug. Here the same effect is seen with negative effect -
people react to the treatment they think they are getting. Patients have been shown to react less to real
medicine if they know there is a 50% chance they are actually getting sugar pills (1).

The first report on the placebo effect was made in 1955 by Henry K. Beecher. German researchers
recently took a closer look at the 15 studies on which Dr. Beecher based his report, and found that
much of the "evidence" in favor of placebos was inaccurately reported in his report. For example, he
reported that 30% of patients in one study improved after taking a placebo, but neglected to mention
that 40% worsened (3).

The argument began within the medical community, but awareness of the debate has spread to laymen.
Humor writers at The Onion (5) recently wrote an article joking that the FDA had approved a
prescription placebo called Sucrosa to treat everything from bipolar disorder to erectile dysfunction.
And despite the controversy surrounding placebos, they are also widely present in medical practice. It
is common, for example, for doctors to prescribe antibiotics for flu and viral colds (1). They know full
well the treatment can do nothing to cure the cold, but the purpose is to placate the patients. In this
case the use of placebos is actually harmful, since needless use of antibiotics diminishes their potency by
creating resistant bacteria (2).

Does the effect really work, then? Are doctors just wasting patients' money on useless medicines?
Maybe the "effect" is nothing more than a result of skewed studies. Some conditioned responses are
quite provable, but showing what caused an improvement in a patient taking a placebo is nearly
impossible. We can prove that Beecher's study was flawed, but continuing studies will no doubt
continue to alternately support and debunk the theory. The reality is probably this: hey, it's the medical
community. They'll be debating for many years to come.

References

1) The Mysterious Placebo Effect, an article from Modern Drug Discovery
2) The Mysterious Placebo, from the Skeptical Inquirer
3) The Placebo Effect: Fact or Fiction?
4) Listening to Prozac but Hearing Placebo: A Meta-Analysis of Antidepressant Medication, an article from Prevention & Treatment
5) The Onion, FDA Approves Sale of Prescription Placebo




The Mozart Effect
Name: Margaret T
Date: 2003-09-30 22:28:41
Link to this Comment: 6736


<mytitle>

Biology 103
2003 First Paper
On Serendip

Ever since human intelligence has been a factor for survival, people have been trying to think of new, innovative ways to increase their mental capabilities. In the past, people have taken pills, prepared home-made concoctions, and have even shaved their heads to clear their minds. Even now, new ideas, such as magnetic mattresses for better blood circulation to the brain, are patented and sold promising mental wellness and stability – and making money for the inventor. When scientists find something that enhances intelligence, the general public is interested.

This is perhaps why a small study out of the University of California, Irvine procured so much attention. In 1993 Gordon Shaw, a physicist, and Frances Rauscher, a former concert cellist and an expert on cognitive development, studied the effects the Mozart Sonata for Two Pianos in D Major had on a few dozen college students. They performed this study to see whether "brief exposure to certain music could increase a cognitive ability" (3). The study took thirty-six college students and divided them into three groups. Each group spent ten minutes listening to different sounds: the first group listened to the aforementioned Mozart sonata, the second group listened to a tape of relaxation instructions, and the third group sat in silence. Directly following these ten minutes the students were tested on spatial/temporal reasoning (more specifically, the Stanford-Binet Test). Simply put, the "subject has to imagine that a single sheet of paper has been folded several times and then various cut-outs are made with scissors" (3). The object for the students is to correctly guess the pattern of cut-outs if the paper were unfolded.

In the end, the scores of the group that listened to Mozart were significantly higher than those of the other two groups. The Mozart group averaged eight to nine points higher when the tests were translated into spatial IQ scores. They also found, however, that this effect lasted for only ten to fifteen minutes. The scientists concluded that the "benefits to spatial/temporal reasoning would require complex rather than repetitive music," but did not go so far as to say that this music must be that of Mozart. They also made it clear that these findings were indeed isolated to the spatial/temporal realm and did not translate to other areas of intelligence such as verbal reasoning or short-term memory.

This was indeed a fairly informal study, performed on a mere thirty-six people – a small group from which to make "less wrong" conclusions based on observations. This, however, did not seem to matter to the general public. In 1993, when this study was written up in Nature, both the media and the general population couldn't believe it. This was an easy, inexpensive way to increase your intelligence; and it was "proven". The concept exploded. Soon there were products on the market. CDs with titles like "Mozart for Meditation" and "Mozart for the Mind" could be found at any major CD retailer. There was a significant jump in the amount of Mozart played by orchestras. In a couple of years the assumption was made that if the Mozart Effect worked on adults, then it stands to reason that it would help babies as well. A toy company produced a teddy bear whose stomach played Mozart quietly to help a baby sleep. The former governor of Georgia actually requested 105,000 dollars to give classical-music compact discs to parents of all newborns in the state. There was a true surge in Mozart popularity as people began more and more to believe in this small study out of the University of California.

Two years later, in 1995, Rauscher and Shaw performed another test almost exactly the same as the one performed in 1993. This time there were more than twice as many subjects – 79 college students. The test was also longer, lasting a total of five days. The setup was mostly the same: the larger group was split up into three smaller groups. The first group listened to Mozart, the second listened to something new every day (e.g. minimalist music, reading, and dance music), and the third group once again sat in silence. On the first day of listening the outcome was similar to that of the first study. The Mozart group performed better than the others. Over the next four days, however, the scores were quite even. Despite this obvious flaw in the outcome, both the media and the general public wouldn't be swayed from their beliefs in the Mozart Effect.

It wasn't until 1999 that another large study on the Mozart Effect was done. This time Christopher F. Chabris used 714 subjects and compared silence to listening to Mozart. After ten minutes of listening to Mozart the subjects were given tests that can be classified as either abstract reasoning or spatial/temporal reasoning. There were no differences on the abstract reasoning portion of the test, and the Mozart group performed only slightly higher than the silent group on the spatial/temporal section. These differences, however, were too small to be meaningful in such a large group. "Exposure to ten minutes of Mozart's music does not seem to enhance general intelligence or reasoning, although it may exert a small improving effect on the ability to transform visual images" (4). Also, within these relatively insignificant numbers it appeared that the enhancement was essentially restricted to a single task. Two other experiments performed around this same time support these findings, and offer new insights. The first of the two found that listening to Mozart or to a passage from a Stephen King book increased the subjects' performance in spatial/temporal reasoning, "but only for those who enjoyed what they heard" (4). The second, like the first, showed that enjoyment was a major factor in the increases seen: in this study, 8,120 British schoolchildren performed better when they listened to popular hip-hop music than when they listened to Mozart. Needless to say, the results began showing a very different trend when these other factors were added in.

Almost a decade after the first study was released, Chabris' counter-argument was heard. Frances Rauscher said in her defense "that many researchers who tried to repeat the experiment failed because they measured the effect on general intelligence instead of on spatial/temporal abilities, or the ability to identify various shapes" (1). However, as years go by, more and more studies like Chabris' are coming out, and they are having similar results. "'It's really not a mystery,' said University of Toronto psychologist Gabriela Husain. 'Music affects how energetic and happy people feel. And it's well known that people who feel vigorous and in a good mood score better on tests'" (6). These sentiments are now widely shared by the media and the general public. The new inventions centered around Mozart's music are becoming scarcer, and it's much harder to find "Mozart for the LSATs" at an average CD retailer. After spending almost a decade in the spotlight, the Mozart Effect is beginning to lose ground, and will soon be replaced by a new study proving that Skittles stimulate the right frontal lobe, and every parent will be happy to give their kids some candy.

References

1)

2)

3)

4)

5)

6)


EPILEPSY and THE BLOOD TYPE DIET
Name: Anna Katri
Date: 2003-10-01 01:47:51
Link to this Comment: 6740


<mytitle>

Biology 103
2003 First Paper
On Serendip

Are people with certain blood types more susceptible to chronic seizures than others? Can a simple diet reverse this medical condition? And why didn't anybody think of this before?


There's a myriad of fad diets out these days: Atkins, the fruit juice diet, Russian Air Force diet, and the Zone to name a few. However, the most recent craze is, "The Blood Type Diet", based on the book, Eat Right 4 Your Type by Doctor Peter D'Adamo. The diet focuses on an individual's genetic makeup (blood type) in determining which foods are best digested. D'Adamo heads up the Institute for Human Individuality (IfHi), which "seeks to foster research in the expanding area of human nutrigenomics. The science of nutrigenomics (naturopathic medicine) seeks to provide a molecular understanding for how common dietary chemicals affect health by altering the expression or structure of an individual's genetic makeup" (1). On the website, the "five basic tenets of nutrigenomics" are listed as:

1. Improper diets are risk factors for disease.

2. Dietary chemicals alter gene expression and/or change genome
structure.

3. The degree to which diet influences the balance between healthy and
disease states may depend on an individual's genetic makeup.

4. Some diet-regulated genes (and their normal, common variants) are
likely to play a role in the onset, incidence, progression, and/or severity
of chronic diseases.

5. "Intelligent nutrition" - that is, diets based upon genetics, nutritional
requirements and status - prevents and mitigates chronic diseases. (1).

The Blood Type Diet is founded upon the microscopic observation of how the ABO types break down different foods, suggesting that one person's nourishment may be another's poison. The book examines the demographic distributions of the different blood types and proposes that "the variations, strengths and weaknesses of each blood group can be seen as part of humanity's continual process of acclimating to different environmental challenges" (2). D'Adamo asserts that blood groups "evolved as migratory mutations," with type O being the most "ancient" of the ABO group and comprising the largest share of the population (40-45%), followed by type A (35-40%), dwindling to B (4-11%), with the rarest being AB (0-2%). People with type O blood (hunter-gatherers) are encouraged to be carnivores, while type A's can survive solely as vegetarians. Explaining the origin and spread of blood type B, D'Adamo states, "Two basic blood group B population patterns emerged out of the Neolithic revolution in Asia: an agrarian, relatively sedentary population located in the south and east, and the wandering nomadic societies of the north and west" (2). Most Jewish populations show high rates of blood type B; the B group is also found most frequently among Asians and Eastern Europeans such as Poles, Russians, and Hungarians.

The book stresses that certain blood types are more susceptible to specific diseases than others because of dangerous agglutinating lectins, which attack the blood stream and lead to disease. Specifically, people of blood type B are more prone to hypoglycemia, stress (type B's show higher than normal cortisol levels in stressful situations), MS, lupus, chronic fatigue syndrome, and auto-immune and nervous disorders. D'Adamo writes that type B's "sophisticated refinement in the evolutionary journey" was "an effort to join together divergent peoples and cultures. Usually type B's can resist the most severe diseases common to modern life" (2), i.e., heart disorders and cancers; however, their systems are more prone to exotic immune system disorders, in this case epilepsy.

About 1% of the world's population is affected by seizures. A person who experiences seizures is not an "epileptic" but rather suffers from the disorder epilepsy. Epilepsy is a chromosome abnormality or inherited genetic trait in which "chronic or spontaneous, abnormal and excessive discharge of electrical activity from a collection of neurons arises in the brain as electrical misfirings" (4). The exact cause of epilepsy has yet to be specifically determined, thus characterizing it as an idiopathic disease, or a disease without any real identifiable origin. The electrical misfirings, which arise within the cerebrum, are usually traceable to some form of childhood injury to one or more of the brain's lobes. Via an EEG machine, it has been discovered that seizures seem to originate most often in the temporal lobe, occurring in the gray matter of the brain. The gray matter of the brain is composed of the cell bodies of neurons, while the white matter is composed of the axons of neurons, coated with insulation made of fat (hence the white color). The focus is the damaged gray matter, which is abnormally excitable, and when it spontaneously discharges, the result is a seizure.

According to D'Adamo, the B group is prone to magnesium deficiency, which plays a crucial role in this disorder. "Magnesium acts as a catalyst for metabolic machinery in the B's blood type. B's systems are very efficient at assimilating calcium, and thus risk creating an imbalance between their levels of calcium and magnesium" (5). Believe it or not, this seemingly simple imbalance can lead to nervous disorders and many skin conditions (my sister has grand mal seizures and eczema). B's also have severe neurological reactions to vaccinations; because their nervous systems produce an enormous amount of B antigens, when a vaccine is introduced into the system there is a cross-reaction, which, as D'Adamo points out, "causes the body to turn and attack its own tissues. These war-like antibodies think they are protecting their turf. In reality, they destroy their own organs: inciting an inflammatory response" (5).

What exactly happens in the brain when someone has a seizure? The first seizure is directly related to the location of the focus (the damaged gray matter in the brain); with time, the electrical explosion continues to travel rapidly throughout the brain, becoming more pronounced, more dramatic, like a forest fire spreading from tree to tree. This activity spreads along the surface of the brain cells by the sequential opening of tiny pores, which act like channels, permitting small, charged particles of sodium and calcium to enter the nerve cell. This wave of sodium and calcium ions entering the nerve cell sequentially along the surface of other cells leads to electrical excitation. Drugs that block these channels decrease the spread of abnormal electrical activity. Conversely, a lack of calcium and sodium ions, or an imbalance in the system, will cause abnormal electrical activity.

"Balancing the system" is the foundation of Eat Right 4 Your Type. Foods such as corn, buckwheat, lentils, peanuts, and sesame seeds affect the efficiency of the metabolic process, resulting in fatigue, fluid retention, and hypoglycemia (a severe drop in blood sugar after eating a meal). The gluten found in whole wheat and wheat germ adds to the digestive and distribution problems. One of the "non-brain" causes of epilepsy is a disturbed glucose metabolism (often associated with diabetes). Simple sugar used by the brain is an important form of energy, and to make use of glucose, the body needs insulin. Too much glucose (hyperglycemia) or too little creates the imbalance needed to trigger seizures. Among the key foods B blood types should avoid, D'Adamo says, are beans: lentils, garbanzos, pintos, and black-eyed peas. Why? They interfere with the production of insulin.

A second cause of the chronic seizure disorder known as epilepsy is an electrolyte disturbance, which occurs when the levels of salt in the blood stream (i.e., sodium chloride) fall too low. This can happen when bodily fluids are lost through severe diarrhea or vomiting, or after extended exertion. D'Adamo attributes diarrhea to a nutrient deficiency in essential fatty acids and folic acid (5). To compensate for this, lecithin (a lipid) and choline, serine, and ethanolamine (phospholipid) supplements should be taken, while rye, corn, buckwheat, tomatoes, olives, and adaptogenic herbs (used to increase concentration and memory retention) should be avoided at all costs.

Grand mal seizures, or tonic-clonic seizures, are perhaps the most severe and debilitating over time. To paint a picture of what happens when a person experiences a tonic-clonic seizure, let me take you back to my first day of senior year in high school... Everyone is gathered in the auditorium for an opening day speech by the Headmaster. Mary, my sister, now 16 but 12 at the time, had had a rough morning waking up. She was tired, and my parents forced her to choke down some Farina (warm wheat-meal). It is early morning, and, sitting in the top row, Mary gives a little cry as the air is forced out of her lungs. She slumps in her seat so that her head falls on the boy next to her. Thinking she is playing a trick, he gently pushes her. Mary falls to the ground, unconscious and unresponsive, as her body begins to stiffen - this is referred to as the tonic phase. She begins to jerk - the clonic phase - as the electrical explosion spreads to both sides of her brain. Her breathing slows and stops. She bites her tongue, frothing at the mouth. Her skin turns bluish gray as her air supply is cut off, putting enormous stress on her heart.

This moment can be absolutely terrifying for a family member to watch. Grand mals wreak utter havoc on the body, and often, when the affected person wakes up, she is completely exhausted, feeling as if she has run a marathon. A common misconception about children is that they need excessive amounts of physical exercise. However, D'Adamo points out that stressful situations, fatigue, and unbalanced nutrition have been shown to trigger seizures, and B blood types should focus more on strengthening and toning exercise than strenuous physical exertion (substitute yoga for field hockey). Children are most prone to seizures when they wake in the morning; because their bodies desperately need nutrients, what is eaten is essential. Mary, a B blood type (my brother is also a B and has tonic-clonic seizures), had a bowl of wheat Farina, which inhibits the production of insulin. We were in a rush that morning and enormous pressure was on her (she's pokey) to get out the door and off to school. In the car she tried to sleep but was restless, complaining of a headache. Mary also has very low blood pressure and had not had any juice to drink for breakfast; instead, she had a glass of milk, perhaps causing an imbalance of electrolytes, or salt ions, in her blood stream. Because B's are very efficient in assimilating calcium, they risk creating an imbalance between calcium and magnesium in their systems, magnesium being the chief catalyst for the metabolic machinery in B blood types. The summation of observations here? If there is not enough magnesium in a B's digestive system, it cannot metabolize food properly and thus lacks the nutrients needed to run the body. If an agglutinating food is the first thing eaten (such as Farina), it attacks the blood stream, interfering with the production of insulin. An excessive amount of calcium in the blood first thing in the morning would create the imbalance between magnesium and calcium.
A flux of calcium ions entering the nerve cell, coupled with the inability to produce insulin (hypoglycemia) is the exact recipe for an electrical storm inside the brain.

Thirty-five years ago, in a proprietary formula used for bottle-feeding babies, "the vitamin B6 was inadvertently destroyed during sterilization, causing widespread seizures in infants. The newborns were cured with a B6 supplement, but this situation dramatically shows the impact B vitamins have on the nervous system" (6). By the way, babies who are breast-fed by mothers eating a low-B6 diet can also have seizures. Why isn't there more information about the glaring connection between seizures and nutrition? There is a "seizure diet" on the market, the Ketogenic diet; however, it is only recommended for children ages 1-6 (7), and even then only in extreme cases. Does this indicate that there is no hope for people who suffer from seizures, and that they will be on medication for the rest of their lives? I don't know. Generally speaking, epilepsy is still a mystery to scientists, and in more than half of all cases of people with recurring seizures, scientists have yet to identify a cause. Research is slow, and due to the severe impact a seizure has on the brain, participants are scarce. Nervous disorders seem to occur when our systems are out of whack, or out of balance. D'Adamo's assertions ring of truth, and I believe there's matter to his words, matter worth looking into.


References

1) D'Adamo, Peter. Eat Right 4 Your Type. New York: Putnam Pub Group, 1996. Learn the whole philosophy/ anthropology behind the Blood Type Diet, including the foods your type best metabolizes, and natural options for treating and preventing disease.

2)IfHi Homepage, A very informative and enlightening web site about the group's dedication to naturopathic medicine and their goal of benefiting mankind through the development of new applications and practices in Naturopathy.

3)Eat Right 4 Your Type Home page, This site has a long excerpt from D'Adamo's book, "Eat Right 4 Your Type Encyclopedia." In it, D'Adamo outlines the origins of the ABO blood groups and talks about the prevention and treatment of diseases specific to the different types. An excellent introduction into this arena of ideas.

4)Epilepsy Home Page, Provides information about different types of seizures, support groups, treating the disease, stories, and new research on the disease. Please visit this site so you may become more educated about this little known disease.

5)Doctor D'Adamo's Nutritional List, This is a wonderful site to visit, even if you don't believe anything I've said. Scroll down lists of symptoms and trace their origins to vitamin deficiencies, at the same time, take notes on how to get yourself back on the nutritional track.

6)Article by Dr. Aesoph on B vitamins, The importance of B vitamins to our body's health should never be taken for granted. The site provides detailed information on the essential vitamin, B6. Doctor Aesoph offers some compelling reasons to start taking supplements....now.

7)Ketogenic Diet UK Home Page, Provides basic information about the Ketogenic diet, which is high in fat, and sometimes used as a last resort in young children to aid in curing seizures.


Attention Deficit Disorder
Name: Bessy Guev
Date: 2003-10-01 13:45:41
Link to this Comment: 6744


<mytitle>

Biology 103
2003 First Paper
On Serendip


Attention Deficit Disorder is a neurobehavioral disorder that affects thousands of people in the United States. Over the past decade, media focus has been primarily on children with the disorder and the effects of the traditionally used medication, Ritalin. It is important to note that A.D.D. does not target only children; it also greatly affects adults, because it is not a condition that can be outgrown or cured. Furthermore, the disorder has drawn increasingly critical attention as more doctors have become specialists in it, revealing the many ways in which it affects a person's life. The identification of Attention Deficit Disorder dates to the early 1900s, when it was called "minimal brain dysfunction"; researchers found that children with encephalitis and soldiers who had received brain damage (after World War I) demonstrated hyperactivity, impulsivity, and conduct disorders. (1) Consequently, researchers assumed that since brain injury could cause hyperactivity, all hyperactivity must be caused by brain damage. After many years of new observations, this statement has been shown to be untrue; however, there are still many misconceptions and rumors about the causes of A.D.D., which limit the general understanding of the disorder.

The topic of A.D.D. is of great interest to me since two of my siblings have been diagnosed with the disorder. This first assignment has given me the opportunity to explore the causes and the many faces of A.D.D. I found that I was one of many people who believed the rumors and misconceptions about A.D.D., and I learned the newest, most commonly accepted observations and conclusions about the causes of this disorder. For example, A.D.D. does not occur in one form only; in fact, there are two major types:
Inattentive: In general, people with this type have trouble keeping focus and attention but do not consistently show hyperactivity.
• Often fails to pay close attention to details / makes careless mistakes in assignments
• Difficulty sustaining attention in tasks
• Seems not to listen / forgets daily activities
• Failure to follow instructions or finish assignments
• Constantly losing belongings

Attention Deficit Hyperactivity Disorder: In general, people with this type are constantly overactive and highly impulsive, which leads to an inability to remain focused and attentive.
• Fidgety and squirmy / not being able to stay seated
• Feeling restless
• Often "on the go" or acting as if "driven by a motor"
• Often talking excessively
• Impatience / difficulty waiting for a turn or in lines (2)
In looking at the types of the disorder it is inevitable to ask: What are the causes of A.D.D.?

Contrary to what many people think, A.D.D. is not an illness, nor is it a sign of low intelligence. (3) As other rumors suggest, it is also not caused by bad parenting. A prime example is my own family: two of my siblings have A.D.D. and I do not, although we were all raised by the same parents. In fact, there are several suggested factors which possibly contribute to the cause. For example, a woman's intake of drugs and alcohol during pregnancy affects the development of the fetus; the child is then likely to have cerebral damage that could lead to A.D.D. Furthermore, diet is a critical factor because it also affects the functioning of the brain. When the body does not receive the necessary nutrients, the brain suffers. Studies have also shown that "a child is [seventy] percent more likely to have A.D.H.D. or A.D.D. if they have a parent with either disorder." (4) This is most interesting because it indicates that the disorder may be genetic. This latter observation calls for further investigation, since A.D.D. may become a greater concern to the public; the genetic factor implies that this condition is inevitable and cannot be contained. Since these factors are only observations and no definite factor has been established as the cause of A.D.D., there is much disagreement among scientists, allowing for additional research on this condition.

Most concepts surrounding A.D.D. appear to be negative, since media focus is concentrated primarily on its problems. Despite its negative influence on people's lives, many people with the disorder have managed to lead successful lives. Many adults with A.D.D. have learned to live with the condition without ever having been diagnosed in their youth; many have used their hyperactivity as a tool for their success. However, focus remains on children with the condition and not so much on adults. Perhaps equal investigation should be made of adults with A.D.D. in order to create a broader view and better understanding of the disorder; deeper understanding requires examining the source of the problem as well as both its positive and negative outcomes.


Sources

1) 4) ADD WareHouse, Facts about Attention Deficit Disorder
2) 3) Better Health Channel
3) 1, 2) Born to Explore, The Other Side of ADD
4) www.add-adhd-helpcenter.com/adhd_causes.htm


Beating the Binge
Name: Stefanie F
Date: 2003-10-01 13:53:18
Link to this Comment: 6745


<mytitle>

Biology 103
2003 First Paper
On Serendip

Beirut, Pong, Quarters, Flip Cup, the Name Game, and 7-11 doubles are just a few of the names given to what is quickly becoming the new great American pastime for young people: drinking to excess. College-age students across the country have taken to channeling their energies into the creation of drinking games like these, without perhaps looking at the consequences of such creatively destructive behavior.

In the United States, forty-four percent of persons ages eighteen to twenty-one are enrolled in colleges or universities (1). According to recent statistics released by the Health and Education Center, forty-four percent of college students are categorized as heavy drinkers. Alcohol abuse is one of the biggest issues on college campuses nationwide, but what is it that makes excessive alcohol consumption such a concern in the year 2003?

Excessive alcohol consumption is often known as binge drinking. Binge drinking is defined as the consumption of five or more alcoholic beverages in a row for men, or four or more in a row for women, on a single occasion (2). Studies show that in addition to the forty-four percent of college students who binge drink, one third of high school seniors also admit to having binged at least once in the two weeks prior to being surveyed. The greatest question posed is: why does such a destructive activity appeal in particular to this age group?

One might initially assume that all people in this age bracket are prone to participate in binge drinking. However, while forty-four percent of college students binge drink, only thirty-four percent of people the same age who are not enrolled in a college or university do. There may be several reasons why those who are immersed in academic environments are more likely to participate in excessive alcohol consumption.

The effects of alcoholic beverages are incredibly appealing to students who are enrolled in institutions of higher learning. Often these students are thrust into social situations to which they may not be accustomed. Alcohol consumption in many ways makes students feel more comfortable in the new collegiate social scene by creating a false sense of calm or euphoria, and use of alcohol over a student's four-year college experience often begins freshman year.

The presence of alcohol on college campuses is overwhelming, and its availability to all students, regardless of their legal drinking status, is even more surprising. A running joke on most college campuses is that everyone's favorite type of alcohol is either free or cheap. Students of legal drinking age are always more than willing to purchase alcohol for those who are underage. Many underage students will also go to great lengths to obtain alcohol by purchasing falsified identification or frequenting establishments near their respective campuses which may have lax serving policies.

Alcohol is a depressant, which causes increased relaxation and decreased inhibition. Alcohol absorption begins immediately. The tissue in the mouth absorbs a very small percentage of the beverage when it is first consumed. Around twenty percent of the beverage is then absorbed by the stomach, and the remainder is absorbed by the small intestine, which distributes the alcohol throughout the body (2). The rate of absorption of the beverage is dependent on the concentration of the alcohol consumed, the type of drink, and whether the stomach is full or not. Carbonated beverages tend to intoxicate more quickly because they speed the process of absorption. Conversely, having a substantial meal will slow down the process of absorption.

The kidneys and lungs together expel 10% of the alcohol consumed, and the liver has the task of breaking down the remaining alcohol into acetic acid. The body can only expel 0.5 oz of alcohol, which is equivalent to one shot, glass of wine, or twelve-ounce can of beer, per hour (3). Therefore, by definition, a binge-drinking woman would have consumed four times the amount of alcohol her body is able to expel per hour. The altered state of mind that is caused by overindulgence can lead to any number of dangerous, potentially life-threatening situations.
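The arithmetic above can be sketched in a few lines. This is only a back-of-envelope illustration of the source's figures (0.5 oz of alcohol per standard drink, 0.5 oz expelled per hour); real elimination rates vary by body weight, sex, and metabolism, and the function name here is a hypothetical one chosen for the example.

```python
# Back-of-envelope sketch of the elimination arithmetic described above.
# Assumes one "standard drink" (a shot, a glass of wine, or a 12 oz beer)
# contains about 0.5 oz of pure alcohol, and the body expels 0.5 oz/hour.

ALCOHOL_PER_DRINK_OZ = 0.5       # oz of pure alcohol per standard drink
ELIMINATION_RATE_OZ_PER_HR = 0.5  # oz the body can expel per hour

def hours_to_clear(drinks: int) -> float:
    """Rough hours needed to eliminate the alcohol in `drinks` standard drinks."""
    return drinks * ALCOHOL_PER_DRINK_OZ / ELIMINATION_RATE_OZ_PER_HR

# A woman's binge threshold is four drinks in a row: four times what the
# body clears in an hour, so roughly four hours to clear one binge.
print(hours_to_clear(4))  # prints 4.0
```

The point of the sketch is simply that a binge delivers alcohol several times faster than the body can dispose of it, so blood alcohol keeps climbing for the duration of the session.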

Studies show that binge drinking is the cause of 1,400 deaths, over 500,000 injuries, and 70,000 cases of sexual assault/date rape each year (4). In addition to such serious personal risk, students under the influence also negatively affect their own educations and the educations of others by causing disruptions in both the academic and residential spheres of college and universities.

At the beginning of this paper I posed the question: what is it that makes excessive alcohol consumption such a societal concern in the year 2003? I think that the sheer number of articles and studies I found, presented by both public and private organizations, answers this question; people are finally noticing a potential problem. These statistics speak for themselves. Collegiate binge drinking is an issue which must be addressed by colleges and universities in the United States. However, there is no evidence that a person who binge drinks in college will continue binge drinking after graduation. Certainly, some students continue alcohol abuse after graduation, but a predisposition for that condition should be taken into consideration, and I would venture to say that it is a small percentage of students who suffer problems of alcoholism and alcohol abuse later in life. I believe that this particular age group is prone to rebellion and experimentation. Some propose that lowering the legal drinking age to eighteen once again would remedy the situation. However, I believe that carefree behavior and, to a certain extent, irresponsibility are inherent to this particular age group and are merely a part of human maturation.


References

1) United States Census Bureau

2) The Health and Education Center

3) How Alcohol Works

4) Can We Change the Entire College Drinking Culture


Extroversion, Introversion, and the Brain
Name: Natalya Kr
Date: 2003-10-01 17:18:32
Link to this Comment: 6765


<mytitle>

Biology 103
2002 Third Paper
On Serendip

The terms "extrovert" and "introvert" are often used to describe individuals' interpersonal relations, but what do these terms mean precisely, and is there a neurobiological basis for these personality traits?

The terms originated from psychologist Carl Jung's theory of personality. Jung saw the extrovert as directed toward the outside world and the introvert as directed toward the self (1). He characterized extroverts as being energized by being around other people and drained by being alone and introverts as the opposite (1). He recognized that most people shared characteristics of both introversion and extroversion and fell somewhere along a continuum from extreme extroversion to extreme introversion (1).

Richard Depue and Paul Collins, professors of psychology at Cornell University and the University of Oregon, define extroversion as having two central characteristics: interpersonal engagement and impulsivity (2). Interpersonal engagement includes the characteristics of affiliation and agency. Affiliation means enjoying and being receptive to the company of others, and agency means seeking social dominance and leadership roles and being motivated to achieve goals (2). They also closely link extroversion to "positive affect," which includes general positive feelings and motivation (2). Extroverts, they claim, are more sensitive to reward than punishment, whereas introverts are more sensitive to punishment than reward (2). According to Depue, "When our dopamine system is activated, we are more positive, excited, and eager to go after goals or rewards, such as food, sex, money, education, or professional achievements"; that is, when our dopamine system is activated, we are more extroverted, or exhibit more "positive emotionality" (7).

What Depue and Collins refer to as "positive emotionality" is not precisely what Carl Jung referred to as extroversion. Positive emotionality is the willingness to pursue rewards, to be more stimulated by reward than punishment. Extroversion, according to Carl Jung, is enjoying the company of others and being oriented toward the external world and energized by interactions with other people. In the vernacular, an extrovert is someone who has many friends, seems to be around people all the time, and is socially dominant. While positive emotionality and extroversion describe two separate attributes, they are fundamentally related. The desire to pursue goals, and being more sensitive to rewards than punishments, is an integral part of enjoying relationships with other people, building large networks of friends, and being socially dominant. Other people, or groups of people, can be punishing toward those who attempt to befriend them. One of the possible punishments an individual risks by attempting to form a friendship is social rejection. For someone with low positive emotionality, this punishment would be enough to deter them from even attempting to form new relationships. Similarly, to achieve social dominance, one must risk losing face in front of one's peers for the possibility of appearing confident and original. Again, there is a risk of social rejection, perhaps a greater risk than in the first scenario, but the reward is also greater: the admiration of one's peer group. Someone with high positive emotionality would be willing to take this risk, whereas someone with low positive emotionality would not.

In their article, "Neurobiology of the Structure of Personality: Dopamine, Facilitation of Incentive Motivation, and Extraversion," Depue and Collins argue that there is a strong case for a neurobiological basis of extraverted behavior, because it closely resembles a mammalian approach system based on positive incentive motivation which has been studied in animals (2). Animal research has provided evidence to support the theory that a series of neurological interactions is responsible for variable levels of reaction to an incentive stimulus. First, the incentive is recognized in a series of signals between the medial orbital cortex, the amygdala (the emotional control center), and the hippocampus (the memory center) (3) (2). Next, the brain evaluates the intensity of the incentive stimuli in a series of interactions between the nucleus accumbens, the ventral pallidum, and the ventral tegmental area dopamine projection system (2). This creates an incentive motivational state which can motivate a response by the motor system (2). Differences in individuals' incentive processing are thought to be due to differences in the ventral tegmental dopamine projections, which are directly responsible for the perceived intensity of the incentive stimulus (2). Genes and past experience are the sources researchers believe most affect a person's dopamine projections and so the perceived intensity of incentive stimuli and the person's motivation to pursue the incentive: their degree of extroversion.

Drugs like cocaine, alcohol, or Prozac all affect these processes, and so an individual's degree of extroversion. They can artificially correct an ineffective dopamine system and make someone feel more sociable or motivated to pursue a goal. Low levels of serotonin, correlated with depression, may make people more responsive to dopamine and more susceptible to dopamine-stimulating drug use, such as the use of cocaine, alcohol, amphetamine, opiates, and nicotine (7).

Impulsivity, which Depue and Collins link to extraversion, can, in its extreme case, cause attention deficit/hyperactivity disorder, pathological gambling, intermittent explosive disorder, kleptomania, pyromania, trichotillomania, self-mutilation, and sexual impulsivity, as well as borderline personality disorder and antisocial personality disorder (4). Jennifer Greenberg and Eric Hollander, M.D., in their article "Brain Function and Impulsive Disorders," characterize impulsivity as "the failure to resist an impulse, drive or temptation that is harmful to oneself or others" (4). One can see why Depue and Collins see impulsivity as being linked with positive emotionality: this definition of impulsivity is almost the same as their definition of positive emotionality (more sensitivity to reward than punishment). The only addition is the inability to determine when the punishment outweighs the reward. According to Depue, "the extreme extrovert, then, is someone who has high dopamine reactivity and, as a result, easily binds rewarding cues to incentive motivation. That person will appear full of positive emotion and highly active in approaching rewarding stimuli and goals. The low extrovert will find it difficult to be so motivated and will require very strong stimuli to engage in rewarding activities" (6). It is interesting to consider that the same quality that in moderation is viewed as ambition is, in excess, considered a failure to resist an impulse, drive, or temptation that is harmful to oneself or others. Clearly, this extroversion/impulsivity/incentive motivation is a very influential trait which must be kept in balance to maintain emotional well-being.

The brain structures that research has indicated are active in controlling impulsivity are the orbitofrontal cortex, nucleus accumbens, and amygdala, many of the same regions that mediate extroversion (4). Damage to these structures often results in impaired decision-making and increased impulsivity (4). In their article in the Journal of Psychiatric Research, David S. Janowsky, Shirley Morter, and Liyi Hong relate novelty seeking and impulsivity to an increased risk of suicidality, and they correlate depression with an elevated degree of introversion (5). Impulsivity is linked to an increased risk of overt suicidality because it allows patients to avoid considering the long-term consequences of their actions (5). Research has indicated that introversion decreases as depression improves, and that continued introversion is associated with an increased risk of relapse into depression (5). Even recovered depressed patients scored lower (more introverted) than never-ill relatives or normals on the Maudsley Personality Inventory Extroversion Scale (5). Janowsky et al. infer that the social isolation associated with introversion may compound the depressed patient's need for a social support network (5). Still, the connection between introversion and depression remains ambiguous, because other research has shown no correlation between them (5).

An interesting question that arises is to what extent these traits of introversion and extroversion are genetic and to what extent they are learned through interaction with one's environment. Depue claims that genetics likely account for 50-70% of the differences in personality traits between individuals. He says, "The stability of emotional traits suggests that the extent of the interaction between environment and neurobiology is in part determined by the latter. Experience is interpreted through the variable of biology" (6). While experience may modify our response to incentives, our hardwiring, in terms of dopamine production and absorption, remains intact.

It seems likely that past experience would play a larger role than Depue indicates in incentive motivation. Experiencing social rejection would seem to discourage more risk-taking in the future, but perhaps, for those with high positive emotionality, and the effective dopamine production and absorption system that it implies, the reward for succeeding is so great that they will continue the behavior even if they fail repeatedly. Perhaps there is a neurobiological basis for having "tough skin" or "thin skin"; for being resilient or oversensitive.
What happens if someone has the neurobiological make-up of someone with high positive emotionality and then suffers a traumatic, punishing experience that damages their self-confidence? Do they become a frustrated extrovert? According to Depue, such a person would be in the 30-50% of the population whose personality is not directly related to genetics and the functioning of their dopamine reuptake receptors. What happens to this person? Do they suffer from cognitive dissonance, wanting to take more risks but unnaturally wary from what experience has taught them? Does their brain chemistry alter to adapt to their behavior, or does their behavior eventually adapt to fit their brain chemistry in spite of past experience? Or do they suffer internal turmoil and rely on drugs to free themselves of inhibitions and allow themselves to pursue rewards as they would naturally have done?

Neurobiologically, drugs and alcohol add something new to the mix. Do introverts or suffering extroverts self-medicate with them, or get a prescription for Prozac? Notably, all of these scholars have treated introversion as something of a disease to be medicated, which seems strange considering that Jung appeared to have a fairly egalitarian approach to introversion and extroversion. He treats them as different lifestyles, rather than as a disease and the lack of one. He defines introversion as enjoying solitude and the inner life of ideas and imagination; hardly a negative description. Depue and Collins probably came up with the term "positive emotionality" because they wanted to describe the quality which those typically thought of as extroverts tend to possess in a social context, but which those termed introverts may also have and manifest in different ways: the desire and ability to achieve goals. Janowsky, on the other hand, refers to introversion as a trait marker for depression, which decreases as depression improves, not as a personality trait of healthy, well-adjusted individuals.

What is introversion, and is it a bad thing? In the Jungian way of thinking most of us are at least somewhat introverted, and a good thing too, or else we would rarely get any studying done. Low positive emotionality is not really the same thing as introversion, although an introvert could have low emotionality in social settings and be more demoralized by fear of rejection than motivated by the prize of friendship or social dominance. This type of introversion is more than likely linked to depression because it does deprive a person of a necessary source of emotional release: the social support network. On the other hand, being introverted can mean that you keep a small, close circle of friends, which would definitely constitute a social support network, and there is no reason to believe that this is unhealthy or even abnormal. The term introvert implies that one is emotionally satisfied by a mostly internal life. The only really conflictual state seems to be the repressed extrovert, one who would like to forge more social relationships but is too intimidated to do so, but this is most probably not the case with all those classified as introverts.

The terms "extrovert" and "introvert" may be inherently problematic. They are so well established in the vernacular now that they have connotations that were probably never intended, for example that the extrovert is always the "life of the party" or that introverts are social outcasts. This may be the reason Depue and Collins chose to use the term "positive emotionality." Using a new term gave them a fresh opportunity to be absolutely clear about what they meant. Their term, however, still does have relevance in relation to the notion of extroversion because extroversion depends on some degree of positive emotionality.

The evidence for a neurobiological basis for all of these traits is strong. Animal research has supported the idea of a network of brain structures communicating signals in order to process and respond to incentives in the environment. In particular, there is convincing evidence that the production and absorption of the neurotransmitter dopamine affects the perceived intensity of the incentive stimulus, and so, how motivated the subject is to pursue the stimulus. The changes that occur within the dopamine system and their effect on personality are easily observable in people under the influence of drugs that positively affect the dopamine system, like cocaine or alcohol: their fears and anxieties vanish and they are able to pursue goals (although perhaps not higher level ones) in an uninhibited fashion. In a less uninhibited way, the same effect is observable in people taking antidepressants: they are no longer dissuaded from pursuing goals by fear of negative consequences; they regain an ability to "look on the bright side" and focus on the positive aspects of achieving goals rather than the negative repercussions of failing to achieve them. It would be difficult to dispute that there is a relationship between positivity, goal-oriented behavior, and the dopamine system, but the reason why the dopamine system has this effect on personality remains unknown, and the precise interactions between the dopamine system and the rest of the brain and body, and the exact effects on behavioral patterns, are yet to be discovered.


References

1) 1 Up Info: Extroversion and Introversion, Psychology and Psychiatry

2) Neurobiology of the Structure of Personality: Dopamine, Facilitation of Incentive Motivation, and Extraversion. Depue, Richard and Collins, Paul.

3) The American Heritage Dictionary of the English Language, Third Edition. Houghton Mifflin Company, New York: 1996.

4) Greenberg, Jennifer and Hollander, Eric. Brain Function and Impulsive Disorders. Psychiatric Times. March 1, 2003.

5) Janowsky, David S.; Morter, Shirley; and Hong, Liyi. "Relationship of Myers-Briggs type indicator personality characteristics to suicidality in affective disorder patients." Journal of Psychiatric Research, Volume 36, Issue 1, January-February 2002, Pages 33-39.

6) Encyclopedia Britannica Online: Development and Life Course: It's All in Your Head.

7) Cornell University: Science News: "Cornell Psychologist finds chemical evidence for a personality trait and happiness".

8) Web Paper: "Personality: a Neurobiological Model of Extraversion" by David Mintzer


Persistent Resistant Germs
Name: Rochelle M
Date: 2003-10-06 12:07:37
Link to this Comment: 6808

<mytitle> Persistent Resistant Germs

"At the dawn of a new millennium, humanity is faced with another crisis. Formerly curable diseases... are now arrayed in the increasingly impenetrable armour of antimicrobial resistance."

Director General of the World Health Organization

After the discovery of penicillin and streptomycin (3), one of the things he observed was that Staphylococcus aureus developed cell walls that became increasingly resistant to the penicillin. This meant that the offspring of the bacteria would come back and multiply if most of the parent cells were not killed off by the first course of treatment. These offspring would have a stronger resistance, and it would be much more difficult to kill them off. Bacteria have the ability to change their cell walls in order to protect themselves from antibiotics. They also exchange genes among themselves. Because of this ability, various types of bacteria have developed immunity to drugs that are commonly used to treat diseases.
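The selection dynamic described above can be sketched in a few lines of Python. The kill rates below (99% of susceptible cells per treatment course, 20% of resistant ones) and the regrowth rule are invented for illustration, not taken from the source; the point is only that a tiny resistant minority quickly dominates when treatment is repeatedly incomplete.

```python
# Toy model of selection for antibiotic resistance (illustrative numbers only).
# Each treatment course kills most susceptible bacteria but few resistant ones;
# the survivors then regrow to the original population size, preserving
# their relative proportions.
def treat_and_regrow(susceptible, resistant,
                     kill_susceptible=0.99, kill_resistant=0.2):
    s = susceptible * (1 - kill_susceptible)
    r = resistant * (1 - kill_resistant)
    scale = (susceptible + resistant) / (s + r)  # regrow to original size
    return s * scale, r * scale

s, r = 999_000.0, 1_000.0  # start: only 0.1% of the population is resistant
for course in range(3):
    s, r = treat_and_regrow(s, r)

print(round(r / (s + r), 3))  # after 3 incomplete courses -> 0.998
```

After just three courses the resistant strain makes up roughly 99.8% of the population in this toy model, which is why completing the full course of treatment matters.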

Have no fear, though; there are many steps that one can take in order to maintain one's health. The first, of course, is proper nutrition: you have to eat healthy to be healthy! Observe personal hygiene, such as washing hands regularly. One should also try to get plenty of sleep; it is when we sleep that our body recuperates and rejuvenates itself. Also, should one be traveling to a country where disease transmission through insects is at a high rate, make sure not to do a lot of early-morning or nighttime outdoor activity.


Biology and Philosophy of Love
Name: Lara Kalli
Date: 2003-10-10 19:49:28
Link to this Comment: 6881

<mytitle> Biology 103
2003 First Paper
On Serendip

What does it mean to love another person? This question is one that virtually every person has asked himself at some point; virtually every school of thought that exists has attempted to provide an answer of some sort. In this paper I will explain my own attempt at answering that question, from the perspective of an amateur philosopher; then I shall delineate the answers that some biologists have given. We shall see that, while at first these two sets of answers might appear to be quite different, there are in fact some interesting and notable similarities.

I have heard many different accounts of what it is to love someone - to care truly for that person's best interest, to be willing to sacrifice one's own life for that person's well-being, and so on, the list is infinite. To be sure, these accounts all have a measure of validity; there are many different forms of love. However, there is one aspect that all of them have in common, which is the same point at which I think they fail to capture what it really is to love someone: they are too altruistic. Humans, it seems to me, are essentially self-centered creatures; and I do not intend that statement to have the extreme negative connotations that usually accompany the term "self-centered". I mean it in the most literal sense: humans are centered around the self. Much as we may try, the self is un-transcend-able. At this point in scientific and spiritual progress, we cannot ever truly experience anything through another person's frame of reference - all that we can know for certain is that which we think and feel. Thus, it makes no sense to speak of love as a sort of "leaving the self".

How, then, are we to think about it? I offer this alternative: so as to avoid the mistake of treating love as a form of altruism, we should think of loving another person as the act of loving oneself through another person - in other words, we love the people that make us feel best about ourselves, that bring out the best in ourselves. It is important to note that by no means does this definition entail that we do not genuinely care about the people we supposedly love. We can see this as follows: by this definition, it is essential that we like the people we love (it would be impossible for someone I did not like truly to make me feel good about myself); we want the people we like to be happy; we are best suited to making other people happy by being happy ourselves; we cannot be happy unless we like ourselves. And how can we accomplish this feat? By seeking out the company of those people who, for whatever reason, make it easier to like ourselves. Upon reflection, this account seems to me to be the only one that allows us to love others without requiring that love to be a pure act of altruism.

And what does biology have to say about love? First of all, it seems to be widely agreed amongst biologists who study the subject that love is an essential part of human functioning. Dr. Arthur Janov, author of The Biology of Love, brings up a developmental fact essential to understanding this point: "The right hemisphere, which is larger than the left, is the site of feelings and emotions and of holistic, global thinking. Thoughts, planning, and concepts are the domain of the left hemisphere. The right brain is largely mature at the second year of life; the left brain is only beginning its maturation at that time. Feelings pre-date thoughts. In terms of evolution we are feeling beings long before we are thinking ones." (1) Furthermore, it has been shown that neglect, or lack of love, has a serious impact on human ability to survive and develop properly. (2) Dr. Janov notes that infants who are neglected have brains that are significantly different from normal brains: the number of stress-hormone receptors, for example, are much lower in the brain of a neglected infant, which entails a higher level of stress - and therefore unhappiness - in that person. (1)

Biology, as of this point in time, has not successfully determined what processes exactly occur in the brain when one loves another person. However, there are studies that have been done that show some interesting correlations. Dr. Helen Fisher posits a dramatic increase in the amount of dopamine and norepinephrine present in the brain when one first becomes infatuated with another person, which would account for the feelings of euphoria, giddiness and so on that one would experience at that point. (3) Another study showed that, in the brains of people who had recently fallen in love, serotonin levels were significantly higher than those in the brains of the control group. (4) Yet another study demonstrated the possibility of a correlation between the ability of adults to bond emotionally with one another and the presence of the hormone oxytocin, which is normally associated with human reproductive processes such as lactation and, interestingly, male and female orgasm. (5)

How can these findings be applied to my theory as outlined above? Most notably, there is a correlation between the notion of loving another person as a form of self-love and the types of chemicals that scientists have found to be present in the brains of people who are in love. All of the chemicals stated above are associated not only with being in love but with other forms of gratification. Oxytocin, as stated above, is released in the brain during orgasm; dopamine is associated with pain relief (6) and euphoric feelings in general, as is evidenced by the role it plays in the effects of amphetamines and cocaine; serotonin is associated with feelings of calm and happiness. In other words: when we are in love, chemicals associated with pleasure are released into our brains; loving another person is comparable to self-gratification. To love another person in the philosophical sense is to love oneself; to love another person in the biological sense is to give oneself pleasure.

References

1. The Biology of Love; online excerpt of Dr. Janov's book

2. article on love and its biological necessity to human life

3. interview with Dr. Helen Fisher

4. study on the role of serotonin in love

5. study on the role of oxytocin in love

6. article on chemical nature of pain and pleasure


A Mad Artist Or Does He Really Hear Yellow? (And W
Name: Mariya Sim
Date: 2003-11-05 10:51:15
Link to this Comment: 7120


<mytitle>

Biology 103
2003 First Paper
On Serendip

"The sound of colors is so definite that it would be hard to find anyone who would express bright yellow with bass notes, or dark lake with the treble," wrote Wassily Kandinsky in Concerning the Spiritual in Art (1). As the reaction to his book proved, he was largely underestimating the public's propensity to deride any assertion made with assurance, especially one so seemingly subjective. When Kandinsky followed this comparatively general statement with detailed profiles for each basic color - light red, for instance, was "warm," gave one "a feeling of strength, vigor, determination, triumph," and corresponded to the "sound of trumpets, strong, harsh, and ringing" - some of his more outspoken readers suggested that the artist was more fit for an insane asylum than for a painter's career. While today we cannot deny Kandinsky's artistic merits, an interesting question remains. Were the details of his descriptions mere metaphors resulting from the vividness of his imagination? Or was there a physical experience behind such an extravagant way of seeing the world?

It turns out that you don't have to be an artist to have an experience of reality akin to that of Kandinsky. Psychologist Carol Crane, for instance, always sees the letter c in tawny crimson (2), blue accompanies the sounds of the piano for professor of English Sean Day (2), and when journalist Allison Bartlett thinks of a year, she has a distinct vision of a horseshoe with different months distributed over it (3). All of them (including Kandinsky) have a neurological condition called synesthesia - a peculiar mingling of the normally separate senses. For synesthetes, the stimulation of one modality causes a perception in one or more different modalities, so that colors evoke sounds, tastes conjure up shapes, etc.

The name synesthesia is composed of two Greek root words, syn meaning "together," and aisthesis meaning "to perceive" (2). This "together-perception" is relatively rare: some researchers say that approximately 1 person in 20,000 is affected (3), while others suggest that as many as 1 in 200 may experience a basic form of this condition (4). As one can see, there is not much agreement among the researchers, and there is just as little uniformity among the synesthetes themselves. The variations of synesthesia are virtually endless and difficult to categorize - though some attempts have been made (4) - because although the synesthetic perceptions of one person are consistent over time, it is almost impossible to find two people with identical experiences. In other words, if the letter a has always been dark blue for one synesthete, it has always been mustard yellow for another, light green for a third, and so forth.

There are only a few facts about synesthesia on which the researchers have come to an agreement. Synesthesia is genetic and is linked to the x-chromosome (5). It is more common in women than in men, with the ratio ranging from 2:1 to 8:1 (2). And, most importantly, two recent experiments provide evidence that the elusive phenomenon is real. In 1993 Simon Baron-Cohen, an experimental psychologist at the University of Cambridge, and his research group conducted a study showing that synesthetic perceptions are consistent over time. They provided a group of synesthetes and a group of non-synesthetes with a list of numbers, words, and phrases and asked them to record the color each evoked. A week later they repeated the test without warning with the control group, whose responses were only 37 percent consistent with their initial answers. The same follow-up test was given to the synesthetes a year later - 92 percent of them gave answers identical to the ones they gave previously (2). Baron-Cohen's later research shows that synesthesia can be measured using positron-emission tomography and functional magnetic resonance imaging. Thus, in synesthetes with colored hearing, areas of the brain connected with visual image processing as well as the areas responsible for sound processing become activated in response to sound stimuli, while in non-synesthetes the visual processing centers do not (6).
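The consistency measure at the heart of this kind of study amounts to simple arithmetic: the fraction of stimuli assigned the same color on both test passes. The scoring rule sketched below is an assumption made for illustration, not the study's published scoring method, and the example answers are invented.

```python
# Hypothetical sketch of a test-retest consistency score like the one
# reported in the 1993 study (exact scoring rule assumed, not from source).
def consistency(first_pass, second_pass):
    """Fraction of shared stimuli given the same color on both passes."""
    shared = [stim for stim in first_pass if stim in second_pass]
    if not shared:
        return 0.0
    same = sum(1 for stim in shared if first_pass[stim] == second_pass[stim])
    return same / len(shared)

# Invented example: one subject's color answers for three stimuli,
# recorded on the first pass and again on the follow-up.
first = {"Monday": "pale blue", "7": "green", "fear": "dark red"}
retest = {"Monday": "pale blue", "7": "green", "fear": "orange"}
print(round(consistency(first, retest), 2))  # 2 of 3 answers match -> 0.67
```

On this rule, a score of 0.92 (like the synesthetes' year-later result) means 92 percent of the stimuli were re-answered identically, against 0.37 for controls retested after only a week.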

At this point one may ask a serious question: why should we care? While synesthesia is a fascinating and unique phenomenon, its very rarity should preclude it from being a general concern. Moreover, it is a condition that is "abnormal" only because it is rare. Tests have shown that synesthetes are mentally stable (7); they are physically and socially active, and their secondary sensory experiences do not (except in some extreme cases, when the synesthetic colors, sounds, etc. are particularly unpleasant) interfere with their daily life (3). Thus, synesthesia is not an "abnormality" that requires medical treatment; on the contrary, most synesthetes enjoy having a multi-variable experience of reality and would never want to lose it. It seems that scientists should have ceased being interested in this phenomenon, having satisfied their curiosity as to its objective existence and having raised the awareness of it both in medical circles and in society, so as to affirm that synesthetes are not mentally or psychologically deficient. However, this is not the case. Groups of dedicated scientists in the world's major research institutes and universities are still studying synesthesia and have no intention of ceasing their efforts any time soon. Why? To answer this we have to ask another question: why is it that the synesthetes' experience of the world is so different from the non-synesthetes'?

The researchers have not found a single unifying explanation for this difference. However, major theories advanced by them suggest that, far from being an unusual and therefore mostly irrelevant phenomenon, synesthesia may actually provide a path to solving some of the most fascinating mysteries of how the human brain works. One of the researchers suggesting this possibility is Richard Cytowic, whose studies mark the modern revival of interest in synesthesia. Cytowic postulates that the synesthetic neural processes occur not in the neocortex (an area of the brain associated with "higher-level" thinking) but in the limbic system (regarded as more primitive in evolutionary terms and responsible for emotional, rather than logical, processes). Based on his experiments with synesthetes using the radioactive xenon method, which showed a great decrease in the subjects' cortical function during synesthesia, he proposed that neuroscience had greatly underestimated the importance of emotion in the human thinking process. While conventional models of the brain describe the neocortex (and rational thinking) as influencing the limbic system (and emotional or associative reasoning), synesthetic processes show that the two are interdependent. Moreover, as Cytowic suggests, the limbic system in fact plays the more important role in the process, due to the fact that there are more projections from the limbic system into the neocortex than in the reverse direction. (7)

Some may argue that synesthetes cannot act as convincing examples for such a generalization, since apparently their perception of the world is entirely different from that of the non-synesthetes'. Cytowic, however, advises that we should cease thinking of neural processes in terms of terminal events (for instance, of human perception of a color as the final stage of a linear neurological process, whose origin is in the cortex), but instead focus on the intermediate stages of neural transformations. What may then be inferred is that synesthesia is "a premature display of a normal cognitive process" (7). That is, perception is by its very nature holistic, and we are all, in fact, synesthetes, although only few people can "sample" not only the final isolated product (i.e. taste, color, smell), but also the work-in-progress, as it were. (7)

Although another synesthesia researcher, Simon Baron-Cohen, proposes that the brains of synesthetes and non-synesthetes are physically different (and in that disagrees with Cytowic), his explanation of the neurological basis for the condition suggests that this difference is not inherent, but acquired. According to Baron-Cohen's theory, synesthetic experiences are the result of an overabundance of neural connections in the brain. While in the normal human brain there exist separate unconnected modules responsible for different sensory perceptions, in the synesthete's brain there are connections between these centers, causing the unusual mingling of several modes of perception (6). Nevertheless, this theory does not imply that there are two distinct "kinds" of people - those with synesthetical connections and those without - but, rather, that all human infants are born synesthetic. In the majority of humans the connections between different modules are "trimmed" as they grow older, but in some they remain intact as the result of genetic predisposition (8). Therefore, synesthesia may still be helpful in gaining a better understanding of human neurological makeup.

Peter Grossenbacher, a psychologist at Naropa University, Colorado, offers yet another explanation for synesthesia. Unlike Baron-Cohen, he believes that the human brain does not have to have a physically different structure to be able to arrive at synesthetic perceptions. Grossenbacher advances a concept of uninhibited "feed-backward" neurological connections in synesthetes' brains. In the human brain there are "feed-forward" connections, which carry information from lower-level single-sense areas of the brain to the high-level processors, and "feed-backward" connections, which transport the processed information from these multi-sensory processors back to the single-sense areas of the brain (9). Normally, the processed information returns only to an appropriate single-sense area, thus allowing one to focus on a single sensory perception. However, in the brains of the synesthetes the neural pathways of the "feed-backward" connections may become disinhibited, allowing the information to return to more than one area and thus effecting a simultaneous perception in several sensory modalities (3). While in the synesthete's brain this disinhibition happens naturally, in the non-synesthete's brain certain hallucinogenic drugs (like LSD or mescaline) can induce a similar process (2). From this it could be inferred that most activity in the human brain depends not so much on its physical structure as on the exchange of signals between its different areas.

Vilayanur Ramachandran and Edward Hubbard of the University of California at San Diego support Grossenbacher's theory to some extent. Their research findings suggest that a process they term "cross-activation" occurs in the synesthetes' brains. Due to an imbalance of chemicals travelling between various areas of the brain, the normally inhibited "cross-talk" (the exchange of information between two separate modules) becomes disinhibited, so that one area produces an effect on the other. This theory accounts for the fact that certain forms of synesthesia (like sound-and-color connections) are more common, while others (like taste-and-shape) are less common. With the more common forms, the areas of the brain responsible for processing the different sensory perceptions are closer to each other, thus facilitating cross-activation, while the opposite is true for the less common forms (10). However, unlike Grossenbacher, Ramachandran and Hubbard do not consider this process of "cross-activation" an abnormality. Rather, they see it as a basis for something common to all humans, namely, for our propensity for metaphor, which is based on our ability to extract a common denominator from seemingly distinct sensory properties. According to them, this "extraction" can only occur if there is a time when all information, as it were, flows together in the brain, that is, when different sensory modes are engaged in cross talk. Since this predisposition for metaphor, and thus creativity, is essential to a human, synesthesia cannot be considered simply an obscure abnormality, but a more vivid manifestation of the process common to all of us (10).

Paradoxically, what these studies suggest is that we are all "closet synesthetes." While synesthesia par excellence is relatively rare, it would not be unfounded to suppose that cross-modal references occur in everyone's brain, although it is difficult to pinpoint them in non-synesthetes due to their diminished state (11). What implications then does synesthesia have for our perception of reality and what does its existence suggest about the nature of reality in general?

The synesthetes claim that not only objects, but also abstract concepts have a multitude of sensory qualities: color, sound, shape, smell, etc. Could it be that the nature of reality in general is multi-variable, that all phenomena objectively possess several qualities, some of which are more pronounced and others less? Some of Ramachandran's and Hubbard's research indirectly supports this assertion. They presented their subjects with two drawings (one blob with soft curved outlines and another with jagged sharp-edged outlines) and two nonsensical words (bouba and kiki). 99 percent of the subjects associated the curved blob with bouba and the sharp-edged with kiki (10). From this it may be inferred that there is some connection between the visual curviness/jaggedness and the auditory softness/sharpness, and, thus, that there are inherent connections between sensory modalities that we normally separate (for example, between visual and auditory modalities).

If the nature of reality is indeed multi-variable, it follows that humans normally suppress some of the qualities of perceived objects during the processing of the neural signals in the brain. Synesthetes, on the contrary, do not suppress most modalities due to a genetic predisposition. However, it is interesting that some synesthetes report that they can diminish their synesthesia by focusing on something other than the object or concept that evokes their secondary perceptions. Thus, Sean Day, who normally sees colors when he hears music, claims that if he closes his eyes or consciously concentrates on something other than the sound, the colors go away (4). Would it be possible for a non-synesthete to do something diametrically opposite, that is, to develop a conscious perception of the "abnormal" sensory variables of a given object? In this respect, it is interesting that some blind people report perceiving color and light via tactile experiences (11), while color-blind synesthetes report seeing vivid synesthetic colors (10).

The holistic nature of sensory perception (albeit unconscious) is important not only because it allows for a more complete understanding of the world, but also because it presupposes the existence of a certain common basis of reference that makes communication and understanding between humans easier. Synesthesia suggests that every person's conscious perception of reality is different from everyone else's (and that is also probably true - to a lesser extent - for non-synesthetes). On the other hand, some researchers have found common tendencies even in the seemingly endless variety of synesthetic perceptions (4), which may in turn suggest that our experience of reality is not as subjective as other researchers (among them Grossenbacher) would have us believe. But even if the results (the perceived colors, smells, tastes) of the neurological processing of sensory input are different (if each of us suppresses and, therefore, experiences different modalities), in the earlier stages of the process, when cross-activation takes place, our experience of the world is still much the same in its multi-variability. This is true even if one assumes that objects do not in reality possess multi-variability, but, rather, that different areas of the brain are randomly and subjectively cross-activated (that is, if the color blue really does not have a sound associated with it, but, rather, the signal "blue" randomly cross-activates the parts of the brain responsible for visual and sound perception). There is still some basis for a common experience.

Another important assumption that can be made based on synesthesia research is that emotions have a profound effect on our perception of reality. Synesthetes often report that their secondary perceptions become more vivid if they are emotionally involved in the experience. One of the more famous synesthetes - the Russian composer Scriabin, for whom music evoked colors - claimed that normally he had a "faint feeling" of color, which grew stronger and ended in a visual image of color as his emotional involvement in the music escalated. He also noted that not all music could produce synesthetic experiences in him. Some composers, like Beethoven, were too "intellectual" for that, while more "emotional" modern composers, like Tchaikovsky, inevitably evoked vivid colors. (12)

If we accept the notion that we are all "closet synesthetes," then future research on synesthesia and emotion may lead to a greater understanding of the neurological basis for our liking and disliking certain smells, tastes, colors, etc. It may also add to our comprehension of why purely intellectual pursuits have emotional value for humans, leading some to quasi-ecstatic states, while remaining a boring chore for others. The link between synesthetic experiences and emotions may also uncover the neurological basis for the psychological effects of colors (for example, the calming effect of blue) and music.

As one can see, the areas of research for which synesthesia is important are numerous and crucial for neurobiology and the philosophy of science. The continued scientific study of this condition is therefore based not on mere curiosity, but on a desire to unlock some of the mysteries of the human brain. When the theories that scientists arrive at become popularized, perhaps people will recognize that even "simple" and "natural" processes like seeing a color or hearing a sound are more complex than they think. Moreover, even such seemingly bizarre statements as Kandinsky's would be attributed not to artistic whims, and not even to a subjective and rare neurological condition, but rather to a neurological process of perception common to everybody. After all, how many of us would really associate bright yellow with bass notes and dark blue with C sharp? It turns out not only that we may really hear yellow, but that doing so may be as natural to a human as slandering every groundbreaking scientific theory or suspecting every observation that transcends the boundaries of "normal" experience (which, in turn, may, as synesthesia research shows, be self-imposed and subjective).

References

1)Kandinsky, Wassily. Concerning the Spiritual in Art. New York: George Wittenborn, Inc., 1947.

2)"Do You See What They See?" on "Discover" magazine home page, by Brad Lemley

3)"An Ear for Color" on "Washington Post" home page, by Allison Bartlett

4)"Professor Making Sense Out of Senses" on Journal-News.com, by David Brown

5)"I Can Taste My Words" on "BBC News" page, by Jane Elliott

6)"Everyday Fantasia: The World of Synesthesia" in "Monitor on Psychology" on American Psychological Association home page, by Siri Carpenter

7)"Synesthesia: Phenomenology and Neuropsychology" on "Psyche" page, by Richard Cytowic

8)"Is There a Normal Phase of Synaesthesia in Development?" on "Psyche" page, by Simon Baron-Cohen

9)"Cortical Feedback Improves Discrimination Between Figure and Background by V1, V2 and V3 Neurons" on "Nature" journal home page, by Hupe et. al.

10)"Hearing Colors, Tasting Shapes" on Scientific American.com, by Vilayanur Ramachandran and Edward Hubbard

11)"Synesthesia - A Real Phenomenon? Or Real Phenomena?" on "Psyche" page, by Luciano da F. Costa

12)"Synesthesia and Artistic Experimentation" on "Psyche" page, by Cretien van Campen


Dark Energy: The Mystery of This Millennium
Name: Melissa Te
Date: 2003-11-05 14:33:08
Link to this Comment: 7126


<mytitle>

Biology 103
2003 First Paper
On Serendip

Billions of years ago, the universe was nothing but an infinitesimally small particle. Then, in less time than the blink of an eye, the universe expanded and increased in size by a factor of 10^50. Expansion eventually began to slow down, allowing galaxies, star clusters, and so on, to form. Theoretically, expansion should still be slowing down; but to the contrary, expansion is in fact accelerating (10). Some scientists theorize that an unknown force, called Dark Energy, may be the cause of this accelerated expansion, while others disagree.

For some time, exploding stars, or supernovas, have served as a "cosmic measuring stick" (4); that is, scientists use them to calculate the age of the universe. In 1998, two groups of astronomers surveyed supernovas in very distant galaxies. These supernovas were much dimmer than expected, and calculations showed that the stars were over ten billion light years away, much farther than they should have been had the universe been expanding at a slowing, or even a constant, rate, as previously theorized (5). This discovery demonstrated that the cosmos is expanding not at a slowing or constant rate, but at an accelerating one (4). Since then, scientists have been trying to uncover what accounts for this accelerated expansion.

Scientists have calculated the density of the cosmos, and they have also calculated the total mass of all visible galaxies. However, the galaxies make up less than one-third of the density needed to satisfy the current calculations of the early universe (2). Simple logic tells us that there must be something else in the universe, with some kind of mass, that accounts for over two-thirds of the density of the cosmos. The new theory incorporates a different force, called Dark Energy. At first, scientists did not know how Dark Energy works or what it is physically made of. Proposed explanations include a cosmic field associated with inflation, a low-energy field called "quintessence," and the cosmological constant, or a negative pressure, as suggested by Albert Einstein (7).

In July of 2003, scientists confirmed that Dark Energy exists, but they still cannot truly explain it (6). They do know that Dark Energy is different from every other known kind of energy. Some say it is a negative gravity (1), while others say that it does not act opposite to gravity so much as like a negative pressure (5). Scientists do know for sure that Dark Energy pushes space apart, causing the rate of expansion of the universe to increase (3). Physically, Dark Energy is invisible and of an unknown form, yet it accounts for 65 to 75 percent of the makeup of the universe. However, scientists have no way of measuring Dark Energy with current technology, as it affects the universe only over very large distances, unlike gravity, which acts over both large and small distances (2).

Einstein's Theory of Relativity allowed for the existence of a force such as Dark Energy. He spoke of a "cosmological constant" that left open the possibility that even empty space has energy. Also, according to his theory, this energy must have some kind of mass, since energy equals mass multiplied by the speed of light squared, or E=mc² (3). While most of Einstein's theory makes sense, some say the amount of Dark Energy decreases with time rather than remaining constant. It is quite possible that Dark Energy is controlling the expansion of the cosmos, so understanding its nature is vital to predicting the fate of the universe. To do so, scientists must develop more advanced technology to measure density, pressure, changes over time, and so on, in galaxies billions of light years away (3).
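As an aside, the mass-energy relation invoked above can be written out explicitly; this is the standard textbook identity, not anything specific to the sources cited here:

```latex
E = m c^{2}
\quad\Longrightarrow\quad
m = \frac{E}{c^{2}},
\qquad
\rho_{\text{mass}} = \frac{\varepsilon}{c^{2}}
```

In words: any energy density ε that empty space carries behaves, gravitationally, like a mass density ρ, which is why a "dark energy" can influence the expansion of the universe at all.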

There are also those scientists who do not believe in the theory of Dark Energy. Some physicists claim that they can explain the expansion of the universe without having to factor Dark Energy into the equations. These scientists say that gravity is, in fact, the force causing the cosmos to expand. They have added a term to Einstein's equations in his theory of relativity to support their ideas, which will not affect the early universe, but will instead affect the universe after billions of years (1).

Of course, there are many other theories involving and excluding Dark Energy. For example, one additional theory holds that gravity slowed the expansion of the universe after the "Big Bang" until the universe was half its current age; then an opposing force, Dark Energy, overcame gravity and began pushing the galaxies apart at an accelerating rate. Others say that Dark Energy conceals other dimensions that scientists have yet to find; we know of a first, second, and third dimension, but where are the fourth, fifth, sixth, and so on, dimensions hiding (4)? Some even say that in order for Dark Energy to be real, space must be flat as opposed to curved (8).

Scientists say that Dark Energy could also be responsible for the end of our universe. Other theories of the end include the "Big Crunch" and the "Big Chill," which turn on either an abundance or a lack of gravity. However, a newer theory, called the "Big Rip," says that if dark energy's repulsive strength continues to grow, it will eventually rip apart stars, solar systems, galaxies, and even atoms. This theory was proposed by a group of scientists at Dartmouth College, who estimate that in this scenario the universe has roughly twenty billion years left of existence (9).

In short, scientists are very uncertain as to the nature of Dark Energy, and every group that studies the force has formulated its own theories. Which theory is correct? Is Dark Energy just an excuse for our miscalculations of the density of the cosmos? Could empty space really just be empty space? Will the universe continue to expand indefinitely, leaving the Earth and the human species alone for eternity? Perhaps scientists will never be able to answer these questions. What they do know for sure, as scientist Michael Turner of the University of Chicago says, is that "This is very weird stuff" (4).

References

1)Accelerating Universe Theory Dispels Dark Energy
2)Universe Mostly Made of Dark Energy
3)Beyond Einstein
4)Dark Energy Quickens Universe Expansion
5)Dark Energy: Astronomers Still Clueless
6)Dark Energy Confirmed
7)Dark Energy Fills the Cosmos
8)Direct Evidence Found for Dark Energy
9)Dark Energy May Rip Apart Universe
10)Decoding the Mystery of Dark Energy


Sibling rivalry (the slightly-less-amazing adventu
Name: Brittany P
Date: 2003-11-08 19:43:08
Link to this Comment: 7156


<mytitle>

Biology 103
2003 Second Paper
On Serendip


Why yes, it's...

The slightly-less-amazing adventures of Professor Sanderson's Sociobiology discussion group!

Today's topic: Sibling Rivalry


**

Professor Armand Sanderson's Sociobiology lecture was not quite as popular as his brother Julian's Paleobio class. Partly this was because sociobio did not lend itself to psycho-Permian field trips; partly this was because he, unlike his Cosmopolitan brother, looked more like a lumberjack than a professor. In fact, the highest attendance he'd ever recorded occurred on the day he brought Julian in as a guest. This annoyed him.

"Good morning," he addressed the sea of faces sourly. "As I'm sure you all know, today we have my brother Julian in as a guest." Julian waved cheerily. Fifty-seven eyelids batted. "He's here for today's discussion on the biological origins and implications of sibling rivalry among humans. I expect you all to participate." Despite his gruffness, he received only minimal acknowledgement.

"Julian," he continued, noting with ire the sudden leap in his students' attention, "is here to provide a living example of the concepts we're about to discuss. He's also good with animal behavior, so he'll be starting you off today with some of the biological bases of sibling competition." He narrowed his eyes. "I expect you to pay attention to what he's saying."

The threat was habitual and, in this case (he again noted with ire), completely unnecessary. The class had no trouble obeying. As Julian stepped forward, all talk immediately ceased---though some of the students' comprehension ceased along with it.

"Good morning, everybody!" Julian chirped. "Like Armand said, today we're here to discuss sibling rivalry. We don't have a lot of time, so I'll just jump right in. Now, how many of you here have ever fought with your brothers and sisters?" All but a few hands went up. Julian nodded, smiling. "Right. So you would say it's a common problem?" The class murmured agreement. "Well, you're absolutely right. This may come as a surprise to you, but humans aren't the only species who get ticked off by their siblings. In fact, sibling rivalry is ubiquitous in nature. ((5))"

Leaning back, Julian lifted himself to sit on the edge of Armand's desk. Armand, typically, scowled. "For a good example, look at baby pigs," the younger professor explained. "They push each other out of the way so they can get at their mom's anterior teats---that's where the best milk is. ((2)) And baby kestrels will physically fight one another over the food their parents bring back. ((3)) It gets pretty violent sometimes---just like home, right?" A few students chuckled. "Yeah. That's not even the half of it. For example, baby sharks will---this is gross---eat one another in the womb. ((5))" His face crinkled with disgust. Behind him, the elder professor breathed an imperceptible sigh. He had never appreciated his brother's tendency to melodramatize biology.

A hand went up in the back. Julian, abandoning his grimace, motioned the student to speak. "Um," she said uncertainly, "that's gross, but don't they do it for a reason? I mean, I read somewhere that this fits into Darwinism... does it?"

The professor scratched the back of his neck. "Tricky one," he replied. "Me, I say it does... what you're referring to is the theory that sibling competition---if it proves fatal, as in sharks---is actually natural selection because it weeds out the weaker genes from the stronger. The bigger, healthier baby shark will survive to breed. ((1)) However..."

"However," Armand broke in, with a grumble from behind his desk, "that argument falls apart when you consider that siblings share half their genes ((1)). Why would you kill off someone who's going to pass on so much of your own genetic material?" He crossed his arms. "Even twins fight, though they're basically the same."

Off to the right, a pair of hands went up simultaneously. They belonged to the class's single set of twins, only one of whom---a darkly dressed but cheery looking goth---was actually enrolled. The other, a bouncy prep, was visiting for less academic reasons. "I take issue with that," the goth said when Armand waved her forward. "Twins may be genetically the same, but that's where it ends." Her sister propped her fists on her hips and added, "Yeah. My sister and I are identical twins, but we're totally different. If we had kids, they wouldn't be exactly alike. I mean, we'd raise them differently and stuff, because of our personalities." The goth nodded approvingly, then picked up, "Who you are isn't just genetic. Who says survival rates have to be dependent solely on genes?"

Armand frowned, looking surprised and a little ruffled. "True," he admitted. "But it's still the genes that determine who your children are. Darwinism still applies. If both of you reproduce, your offspring will have---from your side---the same genetic chances of survival."

A soft, disagreeing cough issued from the front of his desk. "But that depends on how important genetics are. Are they that crucial?" Julian asked quietly. For all his usual placidness, a distant debater's spark had appeared in his eyes. "Not very. Maybe how you're raised is more important. Kids, even twins, compete because they're not the same, despite genetics. No matter what their DNA, they'll still have differing chances of survival." He spread his hands. "Say those baby pigs are all identical twins. They're still individuals, so they still have to compete." Craning back, he flashed a brief grin at his brother. "So Darwinism still applies. It's just not genetic. Behaviors can be passed on, too."

The elder man let out a huff. "Fine," he agreed grudgingly, in a tone which by no means indicated surrender. "Anyway, we're getting off topic. What we're actually supposed to be getting to is why sibling competition occurs." He glanced at his brother. "Julian?"

"Eh, right, right." Julian crossed his legs and leaned back onto the desk. "Although I think we've already sort of addressed it... why do siblings compete? It's more a matter of what they compete over. And the answer can be summed up in one word: *Resources*. ((6))" He slung one arm across his knees, gesturing with the other. "Take baby birds. They fight each other for the biggest share of the food their parents bring home. The bigger chick generally gets more food and has a better shot at survival and, later, reproduction. ((6))" He looked amused, then thumbed a hand back at his brother. "For example, if Armand and I were chicks, he'd be the bigger chick, so he'd probably be the one who'd end up reproducing. Hah, hah."

Armand's eye was twitching. "That is wrong in so many ways I'm not even going to address it. Anyway," he cleared his throat disgruntledly, "now that we know a little about sibling competition in general, we can get on to our specific topic: what's the relation between animal sibling competition and human sibling competition?" He smirked a little as he added, "This is Sociobio, after all."

"But aren't humans animals?" a girl piped up from the recesses of the classroom.

He groaned. "Yes, yes, we'll get to that. Stay on the topic for now."

"Resources!" Julian chirped helpfully.

"Resources," Armand agreed tiredly.

The same girl broke in again, voice sounding slightly peeved. "But we can't talk about resources. Humans don't fight one another for worms like birds do. Our parents treat us equally."

"Ah, but do they?" Julian asked. He glanced back at Armand, who took the cue and continued.

"Look at it in terms of resources," he explained. "Human siblings do compete. They just compete for different resources than birds. Parental time, affection, love, and approval, to name a few. ((5))" He pursed his lips to hide a grin. "For example, Julian and I used to fight over our parents' time..."

"Yeah," Julian reaffirmed, chuckling. "You always wanted them to take us to those idiotic football games."

"And you always wanted to go shopping!" Armand snapped. Reddening, he turned back towards the class. "Eh... don't mind that exchange. The point is that it's these commodities---time, affection---that make for healthier human children. Well-loved children are always more likely to turn out all right. So siblings fight for that resource, just as they do in nature. ((5))."

"There's even a tendency towards birth order discrepancies," added Julian mildly. "You know how in animals the elder sibling is usually bigger and stronger? Well, in humans the older one tends to be more aggressive, perfectionist, blah blah... the younger sibling is easier-going. ((1))" He smiled sunnily. The class, swooning, smiled back. Behind the desk, Armand put his chin in one thick hand and frowned. He was saved from having to defend himself by the twins, who once again raised their hands simultaneously.

"Excuse me, but that's not always true," the prep-twin pointed out. "I know plenty of people whose younger sibling is the more aggressive one." Her sister nodded, myriad black bracelets jingling. She added, "Also, I don't think it's a given that human siblings always fight. My sister is my best friend." The prep sniffed approval, and threw an arm around her sister's shoulders to further illustrate the point.

Armand---surprisingly---responded to the counterpoint with a wide smile. The class lurched. "Finally," Armand ground. "I was wondering when someone would bring that up."

"Um...." Asked the prep twin, "Bring what up?"

"The fact that siblings don't always fight!" the elder teacher rumbled. "I thought that would have been the first thing you'd pointed out. This class is slow today." He tossed a dark look at his brother, who still perched, cross-legged and oblivious, at the edge of his desk. "Are humans animals? Yes. Do they act like animals? That's a more difficult question."

Julian let one leg drop and swung into a more stable position. "You have to remember that everything we're telling you is the result of certain scientists' studies. Just because they seem accurate for one case---say, baby pigs---doesn't mean they're accurate for other cases. Especially humans. We're different."

"Oohhhh." A new voice, high and quiet, drifted from somewhere in the middle of the room. "Because of consciousness," the student said. "And morality."

"Yes," Armand confirmed. "So the base question is: do humans still follow nature in the way they interact with their siblings? Competition, resources, and cooperation alike?" He scanned the class for hands.

After an awkward pause, one hand rose tremblingly, and the same high quiet voice began, "I think so." It paused, seemingly unconfident, then continued, "Look at many world societies. First-born children usually inherit the most parental power. They're treasured. Maybe it's because parents feel they're stronger, so they'll survive longer. ((1))"

Before Armand could comment, a different hand shot up. It belonged to an equine girl in the front row, who began talking without waiting for acknowledgement. "Not only that," she whinnied nasally, "but those same societies are famous for their sibling feuds. The story of one sibling opposing the other is ingrained into our culture." Her gaze flashed around the room. "Have any of you ever seen 'The Lion in Winter'?"

The elder professor nodded patiently. "Yes. You have a point about the deep-rootedness of the motif. But it's better displayed elsewhere than in Katharine Hepburn movies."

"Fairy tales, maybe?" Julian offered. "Cinderella's evil stepsisters? Belle's jealous sisters? Both were angry because she was the biological newcomer---yet also the prettiest, the most likely to win the heart of some prince and... um, well, reproduce. That jealousy fueled rivalry which got both princesses in trouble."

"But---" the prep twin blurted, a second before snapping her hand into the air, "You can't just say it's a motif like that. If you're doing fairy tales, look at Hansel and Gretel. They actually protected one another. No rivalry there." She blushed, obviously uncomfortable with directly disagreeing with Julian.

He shrugged obliviously. "Point taken," he admitted. "This is a hard question."

Armand looked towards the nasally girl. "And just because it's ingrained in our culture doesn't mean it reflects reality. If so, then unicorns and dragons would exist. What you want is biological evidence that human sibling rivalry is---well, biological. For example, the fact that siblings are different..." He inclined a shoulder at Julian. "... may show that they're using differing adaptations to get 'resources' like love and attention from their families ((1), (5))."

Julian had caught the movement, and returned it with a broad hand sweep. "To put this into more specific context," he extrapolated, smiling mischievously, "Look at Armand and me. He adapted all this brawny star-football-player-ness to demand attention and praise from our parents. And he adapted this standoffish Ivy League IQ that ensured they'd put him into graduate school." Armand's scowl deepened as his brother's grin broadened. "So I adapted the other way. I'm the cute one, the conciliatory one, that mum and dad liked to spoil. We both get what we want---he by showing off, me by looking cute." Feminine sighs and murmuring agreement from around the room enthusiastically confirmed his self-description.

"No, you learned how to whine," Armand retorted. He caught Julian's eyes. "Can we keep this impersonal, please?"

"Aw," Julian pouted. He held up a finger. "In one second." He finished hurriedly, "So you see, Armand and I are just like competing species. We find specific environmental 'niches' and we exploit them. In this case, the 'niches' are just sibling roles in our family structure. ((1)) Sibling rivalry, then, may mimic species competition on a broader scale..." he trailed off at another warning look from his brother.

A disappointed sigh emanated from the class, many of whom found their two professors' rivalry as interesting as (if not more interesting than) the topic of rivalry itself.

The gothic twin, however, simply sighed and dutifully raised her hand. "Sorry to be devil's advocate here, but all of that didn't really answer my sister's question," she complained. "Some siblings don't compete. They're friends. And some of them are really protective of one another. Is that not biological?"

"Biological? Not sure," Armand grumbled, rubbing his temples. "Julian, is there animal precedent for that?"

Julian blinked. "Actually, yeah. Excuse the randomness, but in the Taiwanese aphid, each sibling pair has one sterile sibling and one fertile sibling. The stronger sterile sibling protects the weaker fertile sibling from predators, allowing it to reproduce. ((1)) And that's just one of many cooperating sibling pairs in nature."

The gothic twin blinked right back at him. "So you're saying that human siblings protect one another so that they can pass on the family genes?"

"Goodness, no," Julian returned, shaking his head. "I thought we agreed earlier that genes weren't the only important factor in natural selection. And that humans are different because of their morals. I was just saying that sibling cooperation isn't unheard of in nature as a whole." He, too, lifted a hand to rub at his temples, and for a moment the relationship between the two professors was blindingly apparent.

The twin pressed, "So then sibling cooperation in humans isn't biological?"

Armand sighed. "That is what we don't know, and it's basically the heart of today's discussion. Siblings---as annoying as they are---when they bond, are bonded together by love. So it basically comes down to whether or not sibling love is a biological emotion, or a sociological construct." He drummed his fingers across his desk. "If it is biological, then it's at odds with sibling competition. If it's not, then it's an imposed value that's overcoming a biological one. Which is just as interesting."

Both twins blinked; the prep, who was visiting the class for the first time, scowled in a desperate attempt at comprehension.

Julian, who had less experience with jargon and somewhat understood their puzzlement, quickly interjected in an attempt to dispel the conversation's heavy-handedness. "Uh, what he means is, do I love Armand because I was born to, or because he's just so darn loveable? And if it's because I have to, then why does nature tell me to fight him as well? And if it's not, is he so loveable that it transcends biology?" The tireless grin reappeared on his face. "Well, of course he's loveable, I mean, look at him, like a big teddybear linebacker---" he broke off when he saw his brother's face.

Armand's eye was twitching. "Sibling love must be biological, because I have no idea why I love you. Shut up."

"Gotcha," Julian yipped, and did.

"Anyway," Armand continued wearily, "this is not a question we can answer in one class period. There's biological evidence in animals for both sibling cooperation and sibling rivalry. In some species---humans included---there are even both simultaneously."

Julian was fidgeting. Armand sighed. "Already?" he drawled. Julian fixed him with a doe-eyed placating look. "Oh all right," his brother consented. "Make it quick."

"Ah, just a quick tidbit of information on these coexisting behaviors," he said hurriedly. "There's actually a scientist who did a study on this---name of William Hamilton---who said that siblings only compete when the benefits are greater than twice the costs of doing so, because siblings share half their genes. ((1))" He glanced at Armand. "I know it's obviously too simple a model to apply to human sibling relationships, what with our morals and individual consciousness, but it's something to think about. Ok, I'm done." He lowered his head like a guilty puppy. Half the class cooed; the other half shot nasty looks at Armand, who simply rubbed his temples and sighed.
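A brief aside on the arithmetic behind Julian's "twice the costs" figure: assuming he is paraphrasing Hamilton's rule in its standard textbook form (the inequality below is that general statement, not something taken from the cited article), the factor of two follows directly from the relatedness of full siblings:

```latex
r \, b > c,
\qquad r_{\text{full siblings}} = \tfrac{1}{2}
\quad\Longrightarrow\quad
b > 2c
```

That is, an act that helps a sibling is favored by selection only when the sibling's benefit b exceeds twice the actor's cost c; when the benefit falls below that threshold, selfish competition pays instead.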

"Continuing," the elder professor resumed, "As we've said, both cooperation and competition are practiced among humans. Both appear to have some biological basis. Is that basis the prime motivation for the behavior? In other words, is love biological? Who knows."

The prep twin nodded sagely. "That sounds like a wrap-up to me. We're out of time, aren't we?"

Armand nodded gratefully. "We can continue tomorrow on whether or not love is indeed biological." He gazed around the room. "Despite my brother's idiocy---"

"---I take issue with that!"

"---this was a good discussion. I'll see you all tomorrow. Class dismissed!"

***


References

Works Cited

1)Sibling Competition Holy snickers, this article is AMAZING. Read it. The first half, anyway.

2)Suckling pigs and sibling competition More math than I ever needed to see on the subject of nipples.

3)Sibling Rivalry What it says.

4)Siblicide Seabirds and other animals killing off their brothers and sisters. Fun stuff, eh?

5)Sibling rivalry in humans On the psychological causes and lifelong effects of human sibling rivalry.

6)Sibling rivalry and begging behaviors Again, just what it sounds like.

**

...read up on your fairy tales, children...


9)Hansel and Gretel!

9)Cinderella!

9)Beauty and the Beast!

9)Darwin!

(...for the record, I didn't use any of these sources in writing the report. We all know the stories. These links are just here for your enjoyment... and in particular, with the last two, a good laugh at the amount of annotations...)


Efficiency Above All: A Biological Look at Suicid
Name: Nomi Kaim
Date: 2003-11-09 16:46:02
Link to this Comment: 7159


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Efficiency Above All: A Biological Look at Suicide

"And let me ask you this; the dead,
where aren't they?"
– Franz Wright, New Yorker Magazine, Oct. 6, 2003


"Dear Mom and Dad," the letter begins benignly, "Thank you for all of your commitment. But I am not a suitable daughter, and you will all be better off without me. Please realize I have done this for your own good." Nothing more. And beside it, Mr. and Mrs. A find their daughter, dead by her own hand.

So begin the episodes of anguished soul-searching, of horrific "if-onlys" experienced by the family members of countless suicides. Anyone who has faced what Mr. and Mrs. A now grapple with knows that the girl is wrong: they will not be better off, not feel happier, without her. Yet each year, thousands of suicide victims express similar convictions: I am killing myself, they reassure us, for your own good. This thinking – this appeal for selflessness that our society cannot condone – where does it come from? Why, in truth, do people kill themselves?

The problem of suicide ravages the minds of its survivors – of philosophers – and, more recently, of psychologists. We simply cannot understand it. Why suicide? While many non-biological scientists are inclined to define suicide as a conscious act – thereby excluding, perhaps, all non-human self-inflicted deaths (1), (2) – let us stick with the more basic definition of suicide as self-murder, with or without cognitive "knowledge" or "intent" (***). And, as the concerned psychologists plunge on in their direction, let us examine this problem from a different standpoint, that of biology. In order to make sense of the biology of suicide, however, we must first understand the more general, omnipresent phenomenon: death.

All life has a catch: it ends. No living thing can escape death. Not only do people, animals and plants die, the components of living systems also die independently from their hosts, and continually (3). It begins in the fetus, whose extra, formative cells – like the webbing between the fingers and toes – die at a certain stage in development (4), (5). But cells also die by the trillions in the stretch between birth and death. In fact, every cell in the human body (except select nerve cells) will reproduce and die at least once during the human lifetime (4). Some cells, such as skin cells, die daily; we live beneath a protective layer entirely composed of dead skin cells (6). Truly, death exists all around us, upon us, within us. There is no escape.

But why – why death? Well, this mysterious force that we fear above all else works to keep life in balance. If living things did not die, but new things were born, then life would accumulate until it ran out of space and resources and could not survive. And if new life were not born – if reproduction did not exist – then life could not grow and change and increase in diversity. Life would be stagnant, and this WOULD NOT BE LIFE as we call it. For this reason, both reproduction and death are integral to the maintenance of life (7); life could not exist without death! Moreover, the natural selection inherent in the process of evolution requires poorly-adapted organisms to die off and turn over resources to the better-adapted. Yet even disregarding natural selection, death is necessary to allow for birth, which allows for increasing variation, and, thus, life.

It's all very well evolutionarily to die of old age or when your functions cease to be needed. But what about murder? Why must that take place? Though many of us accept death-by-old-age as "natural," we view murder as horrific, barbaric, most decidedly unnatural. And yet it isn't! If incurably sick organisms were left to die slow, natural deaths, the landscape of life on our planet would deteriorate just as an organism would languish and die if all of its old, slow-functioning cells were left intact, consuming resources at minimal output. As a body cannot be healthy when composed of old, sick, slowly dying cells, so a community cannot be healthy if made of weak, sick individuals (6). When a member of the whole starts to require more resources to survive than it produces in return, that's it – it's out of the game. Thus murder exists for those cases of slow undoing in which it would simply be impractical, uneconomical, to await death.

So perhaps we can condone murder, then, if we must – for the greater good. Perhaps it is all right that killer T-cells, a kind of lymphocyte, kill body cells infected with viruses (6); perhaps, even, we can admit to some rationale behind performing the death penalty or removing a person from life support. Whether to conserve resources, to curtail irredeemable suffering or to prevent future damage, the prevailing idea driving these murders is the same: It's not worth letting this one live any longer. Better to be done with him now.

Understanding the ubiquity of, indeed, the indispensability of death – old-age and murder alike – belies any perception of death as "unnatural" or "pathological." Seeing death as a part of the repeating, expanding cycle of life allows many mature people to reach some acceptance of this reality. But it is still not so with suicides. Crying out with a terror as intense as though the phenomenon were entirely new to them, most people label the self-murder of a human being a unique and wholly useless source of grief. But is suicide really unnatural? Is self-destruction unique to the human organism? Is it functionally useless? The answer to these questions is: No.

Like all of death, self-destruction is ubiquitous, occurring on many levels – cell (3), organ, human, animal, plant – and, like all of death, self-destruction is a vital component in the survival of life on earth. (4) The purpose of all death, regardless of its form (and including suicide), is to promote the survival of life elsewhere. Most directly, cells and organisms unsuited to cope with some aspect of their environments die so that similar cells and organisms, who are competing with them for space and resources (i.e. food), can get what they need to survive. It is a matter of investing energy and resources in the members who will contribute the most to the overall community – whether the community is the collection of cells in a living tissue or organism, the organisms in a clan or society, or even the whole of life on earth. An exercise in Utilitarianism, in sacrificing the lost sheep for the herd. And so antelopes killed by lions yield their places to other antelopes with the skills to escape the lions. Old people, and old cells, die once their systems have atrophied a certain amount so they will no longer consume the time, energy, and resources needed by the more efficient bodies of their children, or daughter cells. Children who die of cancer sacrifice the resources afforded them to other children with the strength to survive (or altogether escape) the blow of cancer. Nerve cells in a developing fetus that cannot hook up properly to one another will die, leaving the job to those remaining neurons that can (6). (It is "survival of the fittest," as Herbert Spencer said, both for organisms and for cells!!) In all of these instances of self-sacrifice, death by self-destruction forms no exception. Lymphocytes, white warrior blood cells in animals' immune systems, self-destruct if they cannot kill invading pathogens so that the resources needed to create and sustain these lymphocytes go only to those fighters who can vanquish their enemies (6).
On the macro-level, female octopi starve themselves to death after the birth of their offspring so their children will not have to share resources with them (1). Some lemmings, though their case is hotly disputed, probably walk off of precipices when their community gets too densely populated, so as to thin it out to a healthy number (2). In suicide, as in all death, the movement toward the greater good prevails.

Is suicide necessary? Could it be effectively replaced by the somewhat less humanly-objectionable phenomenon of murder? To answer this, imagine a community of ants, one of the most complex governmental systems visible within the two-square-foot window of the human eye. Within this little six-legged community are many hard-working members well-suited to their job in Antville. But Antville also possesses a significant contingent – perhaps 20% of its members – who wander aimlessly hither and thither, entirely maladapted to life as an ant, unsuited to do anything but gawk. Now, your job as the divine power stretching over Antville is to find and murder the deficient ants. If you do not, they will bump into the well-adapted ants, throw them off course, eat their food, and eventually cause the death – and downfall – of all of Antville! Quick, quick, find the deficients and eliminate them! The future of Antville depends on it!

Granted, this analogy is not quite accurate. Unlike a lymphocyte, you are not programmed specifically for spotting and killing misfits. And yet ... with so many poorly-suited ants, and more being born all the time, wouldn't it be easier if they could finish themselves off? What if the ants were endowed with inner machinery that caused them to self-destruct if they were not well-matched to their environments? Then you would get a break, and you'd be able to focus on protecting your ant community from invaders. The lymphocytes would get a break, and they'd be able to focus on fighting off viral and bacterial infections and demolishing (hopefully) invasive cancer cells (5), (6).

And so it is that many cells, rather than depending on other killer-cells, commit suicide on command. But this command, rather than coming from some overarching divine power, originates within. Cells contain the command for suicidal actions – the spouting of digestive juices from lysosomes, the rupture of membranes – within their all-important little control centers, the DNA of their nuclei. An encoded, preordained message for self-inflicted death (6). Many cells will self-destruct only under certain, specific circumstances, e.g., if glucose or water runs out. The suicide command lodged within the genetic material takes effect only in response to certain molecular triggers (like lack of glucose) (8), (4). But other cells, like the skin or plant leaves, are born to die of their own "hand" (8).

Suicide is NOT absolutely NECESSARY for life to exist – after all, cells might kill each other off, instead, as might other organisms (murder is always a possibility) – but suicide is highly practical due to its extreme EFFICIENCY. Suicide is more efficient, more economical, than both murder and death by gradual malfunction and breakdown (See CHART below). Just as the panoptic prisons of the 19th century took advantage of prisoners' innate abilities to psychically imprison themselves, thereby freeing up the wardens (9), biological systems profit from cells' abilities – indeed, predispositions – to self-destruct and liberate the cells that would otherwise be belabored with their murder. (In truth, cellular murder and suicide are not necessarily mutually exclusive. Cells in the process of self-destruction may be consumed mid-suicide by macrophages, other cells inclined to "eat" those surrounding them. Even cells that complete the process of self-murder are generally finished off by macrophages (4), (10).)

Does biology also profit from higher organisms' abilities to kill themselves? Does suicide among conscious beings – specifically, people – also originate from this drive of all things biological to conserve energy, time, and resources? Do the "natural laws" that govern suicide on micro-levels also apply on the macro-level? I shall not attempt to answer these questions. Ethics, at the crux of all human endeavors, stands in my way. For me to call suicide victims poorly-adapted and burdens to their environment would be insensitive at best, maybe cruel. And I can think of no other way to approach this delicate topic.

To fully unravel the enigma of human suicide in the light of biology will require great tact and even greater courage, courage to transgress fuzzy ethical boundaries (or the creativity to sidestep them!) in the hopes of encountering new insights. We have much to learn. Inquiry into this area should take the shape of comparison studies between cells, non-conscious organisms (such as plants), animals, and human beings. Whatever sub-cellular interactions take place within the suicidal cell should then be screened for, on the macro (inter-cellular, hormonal) level, inside the bodies of suicidal organisms. Rather than focusing on consciousness – that sticky topic with no biological code and no parallels across the spectrum of organisms – scientists ought to pay attention to physiology. What physiological phenomena occur within the cell before it self-destructs, and do comparable phenomena occur in organisms? In people, even? A practical problem to overcome involves predicting when a cell or person is going to commit suicide; you cannot make accurate observations after death, as biological systems cease functioning then. But in uncovering the command for suicide within a cell's nuclear DNA (6), biologists are on the right path to determining which cells will kill themselves when. Then, when we can finally extend that knowledge to human beings, we will be able to understand suicide in a non-emotional, non-ambiguous, mechanical way. We may be able to predict suicides, assess their functions, and even, if we deem it necessary, prevent them.

The as-yet unbridgeable chasm between cellular self-destruction and human suicide is consciousness. Though we may never fully understand it, we cannot, despite my suggestions for rigorous objectivity, dismiss this unique quality. It is consciousness that makes us different (***). If nothing else, however, we may concede that it is interesting that the last conscious thought conveyed by many suicide victims – the selfless notion that "You will be better off without me" – precisely matches the non-conscious, evolutionarily-developed "intent" of cells that self-destruct to benefit their neighbors.


CHART: SOURCES OF INEFFICIENCY IN DEATH

Death By Old Age:
Long Time Required – YES
Other Agent Required – NO
Total Sources of Inefficiency – ONE

Death By Homicide:
Long Time Required – NO
Other Agent Required – YES
Total Sources of Inefficiency – ONE

Death by Suicide:
Long Time Required – NO
Other Agent Required – NO
Total Sources of Inefficiency – NONE
This simple chart compares sources of inefficiency in death by old age (gradual deterioration), murder and suicide. Note that suicide is the most efficient in terms of energy and resources being returned to the greater community. (Bear in mind also that old age, homicide and suicide are not always mutually exclusive.)

(***) An interesting afterthought: if we remove the conscious aspect of suicide, is there any real way to distinguish between self-destruction and death by old age? A cell that commits suicide dies quickly (with no external intervention); a cell that dies of old age dies slowly (with no external intervention). Both deaths occur to eliminate a now-useless cell. Wherein lies the time cutoff? And if we reduce human consciousness to mere brain behavior, is self-murder (initiated by the brain) any different from death by age and malfunction (initiated by, say, the heart or the liver)? In both cases, some part of the person's body dictates that the time has come for his life to end. Without the phenomenon of consciousness, only two kinds of death exist – death with, and death without, external intervention ("homicide"). Consciousness really DOES make all the difference ... !


References


1) Do Octopuses Commit Suicide?, subjective philosophical ruminations on suicide, octopi, and other animals

2) Jamison, Kay R. Night Falls Fast: Understanding Suicide. New York: Knopf, 1999.

3) Encyclopedia Britannica Article – Apoptosis, a definition of the mechanism of cellular suicide called apoptosis

4) Suicidal Cells, An Appetite For Self-Destruction – Animals, a good description of self-destruction of animal cells

5) Researchers Solve Killer Protein's "Crime", a discussion of cancer as a failure in normal programmed cell death

6) Clark, William R. Sex and the Origins of Death. New York: Oxford University Press, 1996.

6-b) Sex and the Origins of Death – A Book Review, a one-page review of the above book, from which I draw much of the information in this paper

7) Sex and Death: The Awful Existential Significance of Cellular Suicide, subjective ponderings on the interrelatedness of sexual reproduction and death, with some information on cancer as well

8) Suicidal Cells, An Appetite For Self-Destruction – Plants, a good description of self-destruction of cells, this time in plants

9) Foucault, Michel. Discipline and Punish: The Birth of the Prison. New York: Random House, 1977.

10) Weizmann Institute of Science: Death of a Cell, a discussion of one series of studies of cellular apoptosis

NOT CITED IN TEXT

11) Stopping "Cellular Suicide" Could Boost Production in Biotech Labs (http://www.sciencedaily.com/releases/1997/07/970722090258.htm), a look at some of the cons of cellular suicide in terms of technology

12) Researchers Find New Way to Trigger Self-Destruction of Certain Cancer Cells, a scheme for putting cellular suicide to use for humans!


Cloned Meat: It's What's for Dinner
Name: Megan Will
Date: 2003-11-09 17:09:12
Link to this Comment: 7160


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Cloned Meat: It's What's for Dinner

"[Cloning] first involves destroying the nucleus of an egg cell from the species to be cloned. A nucleus is then removed from a cell of an animal of the same species and injected into the egg cell. The egg, with its new nucleus, develops into an animal with the same genetic makeup as the donor." (1)


Sounds yummy, huh? You may soon be dining on Grade A, prime cut cloned beef. Or pork. Or chicken for that matter. Is the thought alone enough to make you want to become vegan? The Food and Drug Administration has issued preliminary statements about the sale of cloned meat and dairy products becoming a reality. These statements are based on a recent report from the National Academy of Sciences. "Eating meat or drinking milk from cloned animals is probably safe," experts from the National Academies of Science concluded after reviewing what little research exists on the topic. (2) But is there truly enough research on this topic to draw conclusions that could affect millions of people's health? Will we even know if we are eating cloned meat or products? And will this cloned meat be used in a way beneficial for society, or simply for a money making purpose?

Obviously, the FDA's main concern with the proposed consumption and sales of cloned meat and dairy products is how it will affect the people eating it. A possible negative effect the cloned products could have is allergenic consequences. A committee from the Academies has stated that the likelihood of these products having an allergenic effect is low.(2) Yet the committee also has cautionary words about the validity of their statements, claiming that the only way to actually find out the reactions to the products is to run multiple tests of consumption of the product; tests which have not occurred in great numbers. It is likely that it will be the offspring of the cloned animals that hit the shelves for consumption, not the clones themselves. As the cloning process is still very expensive, the actual clones will be used for breeding, not slaughtering. However, many of the cloned animals are not near the age for breeding, lactating, or slaughtering. (3) This is one of the reasons for the lack of data available. Another concern is whether or not the proteins ingested by the cattle will make their way into the milk that an unsuspecting human may be drinking. (2) The majority of these concerns are unanswered at this point, due to lack of substantial experimentation and available data. The FDA's preliminary report is based on the hypothesis that successful clones of healthy animals should produce healthy meat and milk. (3) The United States is not the only country interested in beginning to sell cloned meat and milk. Japan is actively researching the process and its risks, as is Canada.

What does the selling of cloned meat mean for the agricultural and food industries? Or the consumer, for that matter? The ideal situation would be that all meat on the market would be raised to a certain level of expectancy. If the prime steer or calf could be cloned, then all meat on the market should have the top quality cut. The overall quality of meat and milk should increase. However, let's not forget about the financial part of this situation. More likely, the cloned meat would be much pricier than the regular meat. A spokesperson from the Athens Corporation agreed: "Wanner said today's cattle and pork markets ring up $700 million annually, but with the introduction of pricier cloned meat, that market will rise to almost $2 billion -- and Wanner said the Athens-based company and UGA expect 'to have a significant piece of the pie.'" (5) As for the agricultural industry, if cloned animals and the offspring of these clones are allowed to be bred, slaughtered, and sold, it could change the entire face of the industry. "'We are deeply disturbed by the idea of mass cloning by the industrial agriculture industry,' said Wayne Pacelle, senior vice president of the Humane Society. 'It will accelerate the drive toward factory farming, which is already becoming dominant.'" (3)

One of the main reasons that the FDA is withholding its final say in this matter is that it is waiting for public feedback. "The consumer has a fear of the unknown. The only way to confront that from a science perspective is to do the studies." (6) In a survey conducted by the Washington Post on November 5, 2003, 56% of those polled said that they would not eat meat from a cloned animal. 32.8% said that they would eat the meat, while 11.2% said that they did not eat any meat at all. (6) CNN conducted a similar poll in August of 2002. Comparing the results, there is an increase in those who voted that they would not eat the meat. In August 2002, only 49% voted that they would not eat the cloned meat, 33% said that they would eat the meat, and 19% said that they were not sure. (2)

But once the FDA approves the cloned meat, will we even know that we are eating it? The FDA stated that if the products are deemed safe, then they will not be required to have a special marking on them. (7) However, if health problems only occur after long periods of consumption of the cloned meat, shouldn't consumers know what they are eating? The Center for Food Safety thinks that labels should be required. "Certainly I think there should be labels," said Joseph Mendelson, legal director of the Center for Food Safety. "I think overwhelmingly consumers would want that information and I think there's reason to give it to them." (7) Mendelson also added that many Americans do not even know that they are currently eating genetically modified foods.

The use of cloned animals in the production of a greater quantity and quality of meat could be beneficial to society in so many ways. All food prices could go down so that low-income families could afford milk and meat. Meat could be produced to be shipped to third world countries, or those in war. Dying herds of animals in Africa and the jungles of South America could be jumped started. However, based on the price of the cloning process, and the payback that many farmers who endorse this process are expecting, more than likely meat from cloned animals will become some sort of weird, expensive delicacy.


References

1) World Book Encyclopedia; the entry for cloning

2) Safety Report from CNN, article about the safety of cloned products

3) Sept. 15, 2002 Washington Post article

4) FDA report about cloning, minimal report on the safety of cloned products

5) Online Athens article

6) Nov. 4 Washington Post article, article about the FDA's decision

7) Article about the safety of eating cloned products


Chronic Fatigue Syndrome: What It Is and My Own Pe
Name: Elizabeth
Date: 2003-11-09 17:28:25
Link to this Comment: 7162


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Everyone, especially college students (and their professors), gets a little worn out sometimes. Even weeks before vacations begin, students start counting down the days until they get to finally sleep in and forget about the stresses of life for awhile. Chronic Fatigue Syndrome, however, is vastly different. It is a debilitating disorder that can prohibit the sufferer from accomplishing even the most basic, everyday tasks.
The symptoms of Chronic Fatigue Syndrome are varied. The most obvious are constant tiredness and being easily exhausted. Other symptoms include frequent headaches, joint and muscle pain, chills without a high fever, depression, difficulty with concentration, and tender lymph glands. Because many of these symptoms are common to other illnesses, Chronic Fatigue Syndrome is all the more difficult to categorize and diagnose (1).
While Chronic Fatigue Syndrome has only recently gained publicity, it isn't a new problem. What is new is its name. Researchers chose the name because it is believed that the illness is not one single disease but a combination of many factors (1).
It is believed that at least two thirds of people suffering from Chronic Fatigue Syndrome are women, primarily Caucasian women of a middle-class socioeconomic background. Most people with Chronic Fatigue Syndrome relate its onset to a particular infection, most often a respiratory or gastrointestinal illness: influenza, bronchitis, sore throats, colds or diarrhea, mononucleosis, hepatitis, or jaundice. In my case, I was diagnosed after having Strep Throat three times over the course of one winter. Most sufferers recover completely from these infections, as I did, but are left feeling very weak, tired, and depressed long after the other symptoms of the infections have disappeared (2).
A common factor in Chronic Fatigue Syndrome is allergy. Chronic Fatigue Syndrome patients have twice the number of allergic skin reactions as people without the illness (2). I suffered from allergies as a child, and at one point had psoriasis, a skin condition. Such experiences are not uncommon amongst people with Chronic Fatigue Syndrome.
Various studies have been conducted concerning the immune systems of patients with Chronic Fatigue Syndrome, and differences have been found between sufferers of the illness and healthy individuals. Several studies have shown that certain aspects of the immune system in Chronic Fatigue Syndrome sufferers behave abnormally. For example, the body produces two chemicals called Interleukin-2 and Gamma Interferon, for the purpose of battling against cancer and infectious agents. It was discovered in patients with Chronic Fatigue Syndrome that these chemicals were not produced in normal amounts. In addition, cells referred to as "natural killer cells", which also battle against infections, were found in vastly reduced numbers (2).
To combat these problems, doctors have experimented with giving patients injections of Interferon. However, this procedure is known to cause fever, fatigue, exhaustion, muscle pain, and headaches, which would in effect contribute to and even worsen the symptoms of the Chronic Fatigue Syndrome itself.
In my experience, there has been a stigma attached to having Chronic Fatigue Syndrome. Oftentimes people write sufferers of the syndrome off as merely being lazy. Such reactions are especially hard to deal with while already suffering from the pains of the illness. Sadly, I've even come across this in college. I've been in classes with students who have come down with mononucleosis, and when they are absent, professors are very sympathetic. However, I've found that in some cases if I miss a class, and then explain to the professor about having Chronic Fatigue Syndrome, they are less apt to accept it as a reasonable excuse. The symptoms of mononucleosis and Chronic Fatigue Syndrome are very similar, yet this goes unrecognized by many people. This is an unfortunate reality of having Chronic Fatigue Syndrome, and is, sadly, not uncommon.
While Chronic Fatigue Syndrome has been around for a long time, significant research has only started to be conducted. This serves as a beacon of hope for Chronic Fatigue Syndrome sufferers. While many people are uneducated about the syndrome, the more research that goes on and the stronger the conclusions drawn about the illness, the more publicity the illness will receive, and hopefully sufferers of the illness will no longer be looked down upon. As recently as twenty years ago, people suffering from Chronic Fatigue Syndrome were oftentimes referred to psychiatrists, because so many medical professionals believed the illness to be purely psychological. Fortunately, the medical community has now recognized that the illness is quite legitimate, and hopefully the quest to find out more information about the syndrome will gain even more momentum in years to come.

References

1) Straus, Stephen. Chronic Fatigue Syndrome. Washington, D.C.: NIH Publishing, 1991.

2) Hawkins, Joseph. Medical Questions. New York, N.Y.: Brookings Publishing Co., 1994.


Men giving birth?
Name: Vanessa He
Date: 2003-11-09 23:10:01
Link to this Comment: 7167


<mytitle>

Biology 103
2003 Second Paper
On Serendip


A leading British fertility expert, Lord Winston, says it should be possible for a man to carry a baby to term and then deliver it by a Caesarean section. In Winston's view, modern medical technology will soon allow homosexual male couples to bear children, or allow a heterosexual male to carry a child if his wife is unable to for medical reasons.
"Male pregnancy would certainly be possible and would be the same as when a woman has an ectopic pregnancy -- outside the uterus -- although to sustain it, you'd have to give the man lots of female hormones," Winston told the Times. He will outline the concept in his new book, The IVF Revolution. IVF stands for in vitro fertilization.
Winston acknowledges that there could be a few problems with the technique. Among other things, the man could experience internal bleeding -- and he might grow breasts. "I don't think there would be a rush of people wanting to implement this technology," he said. (1)
Researchers are now busy perfecting a reliable birth-control drug for men. A five-year study, conducted by the ANZAC Research Institute in Sydney, involved 55 men using hormonal injections and implants as birth control. None of the men's partners conceived and there were no side effects compared to other trials, which have been terminated due to unforeseen problems. The contraceptive works by inhibiting sperm production through injections of progestin every three months. Since this hormone also reduces the sex drive, testosterone had to be implanted under the men's skin every four months to maintain their libido. After a 12-month period, participants would stop the treatment to recover their fertility.
"This is the first time a reversible male contraceptive that will suppress sperm production reliably and reversibly has been fully tested by couples," Professor David Handelsman, the study's director, was quoted by Reuters as saying.
Melissa Dear, a spokesperson for the Family Planning Association, told CNN that she thought it was unlikely that the final product would be marketed in the form of an injection. "It's too awkward a method," she said. "This study has brought the reality of the male contraceptive pill one step closer, but we need to look at combining both hormones in a tablet form." She added that although the Family Planning Association welcomed the news, she anticipated that it would be five to 10 years before a male contraceptive was available commercially. (2)
Hormones would be administered to make the patient receptive to the pregnancy. In vitro fertilization techniques would induce an ectopic pregnancy by implanting an embryo and placenta into the abdominal cavity, just under the peritoneum (the surrounding lining). Once implantation is complete, the intake of hormones ceases, because the pregnancy itself, as expected, takes over. The embryo secretes sufficient hormones to maintain its own growth and development. The delivery will require open surgery to remove the baby and the placenta. Removal of the placenta is the real danger because it forms such intimate connections with surrounding vessels that massive hemorrhage is likely. Implantation may also involve other structures in the abdomen, including the bowel, and it is possible that parts of other organs may need to be removed. Several physicians well-accustomed to advanced and dangerous forms of ectopic pregnancy should be on hand to handle any complications. (3)


The first time I heard of these reports, I was amazed by the concept of men giving birth. But really thinking about it, I realized how the idea could take away part of a woman's identity. For so long, women have struggled to present themselves as strong beings capable of anything, and have used the ability to give birth as part of their argument. If men take that away from women, then women lose part of the argument, and perhaps the battle.
The perception of males could change as well. There could be men who would want to raise a child on their own, just as some women do after artificial insemination. This scientific development could lead to a reevaluation of gender roles: whether there should be any at all, or whether new criteria are needed.
Nowadays, it seems as if there has already been a role reversal. Fathers can be seen taking on the job of caring for the children while the mothers go off to work. Some women might feel inclined to have the men take on the carrying of the child; at the same time, a couple might want to share the responsibility if they wish to bear more than one child. This could drastically change our future demographics, since it has been calculated that by the year 2020 there will be fewer children. Male pregnancy could reverse this projected decline in the number of young children in the near future.
The projected decline in the number of children could itself be attributed in part to the advancement of male contraceptives. With both partners able to take preventive measures, the rate of protection doubles and the chance of unintended pregnancy falls. The effectiveness of these products would encourage their use, allowing the nation at large to feel safe relying on such drugs.

With the introduction of male pregnancy, there would be an expected growth in the percentage of young children in the world. Infertile couples would be given the gift of conceiving, homosexual couples would no longer need surrogate mothers, and males without partners could have children of their own as well. Many things could change; until then, one can only imagine how.

(1) http://wwww.gsreport.com/articles/art000085.html
(2) http://www.cnn.com/2003/HEALTH/10/06/male.pill/index.html
(3) http://www.malepregnancy.com/science


The Diverse Landscapes of Life
Name: Laura Wolf
Date: 2003-11-09 23:16:25
Link to this Comment: 7168

<mytitle>

Biology 103
2003 Second Paper
On Serendip

Living organisms have been found to exist in many diverse environments on this planet; places where perhaps no human had thought to look before. Sometimes life is found because of the wild imaginations of a few curious people – other times it is stumbled upon nearly by accident. This paper will explore two seemingly unlikely landscapes of life, and will highlight the successes of discovering new living organisms in terms of expanding the array of possibility and our perception of the question "What is Life?"

One environment receiving a lot of attention from scientists is the bottom of the ocean. Earlier in history it was thought that no creature could survive under the immense pressure and total darkness of the deep ocean. The landscape remained untouched by humans, because without that sense of possibility for life, the technology was not created to explore the area. Finally, in 1972, studies conducted near the Galapagos Islands reported vents, or hot water plumes. Now that something unexpected had been found, curiosity, possibility and new questions arose, and the search began to accelerate along with the technology. The deep-sea submersible Alvin was sent exploring, and a whole array of bottom dwellers was found: giant worms, clams and mussels (1).

Once a community of living organisms has been found in a foreign environment, explanations will start rolling out. These hypotheses generally attempt to compare the system of life to our own systems – grappling for similarities between the resources of the new landscape and those we are already familiar with. For instance, in forests and jungles (environments which are very understandable to us) there are some animals that can climb or fly to the tops of the trees where the fruit is. Other animals must stay on the ground, and so they live off of fruit which has fallen out of the tree. When a few organisms were found in the depths of the sea, it was first conjectured that they ate food that floated down to them from the "lighted regions of the ocean" (1), which seems very similar to the configuration of the familiar woodland food-system. This story was adequate until it was discovered that entire "cities" of creatures were thriving down at the bottom of the ocean – biologists had to come up with a new story.

There are hydrothermal vents called black smokers which let off heat and chemicals from the bottom of the ocean. Entire communities of creatures live around these vents, and it was soon discovered that chemoautotrophic bacteria use chemicals from these vents as energy, rather than sunlight which was previously thought to be essential to all living organisms and life systems. The bacteria use chemical energy from the breaking-down of hydrogen sulfide, and they are at the bottom of the food chain in this environment. Many species have been discovered in this warm underwater system – including giant clams, crabs, and pink brotulid fish (2). Not only were these new species found, but entire categories had to be formed for certain creatures that fit into no pre-existing group. For instance, "a reddish worm known as vestimentiferan, which builds and lives in a tube up to 7.5 m (25 ft.) long, is so different from any other known animal that it has been classified in a phylum of its own" (2). By exploring this new landscape of life, the make-up of what we call "life" has been enhanced. The addition of new species and new categories of species means a wider spectrum of stories and explanations of life.

Even with all these discoveries, "less than 1% of the sea floor where hydrothermal vents are suspected has been investigated" (1). "One puzzle for scientists to figure out is why the chemistry of hydrothermal vents changes, not only among locations, but over temporal scales as well. Measurements of vent water taken from the time of a sea floor eruption indicate a change in the mineral composition being emitted. Scientists have also found that individual vents or entire vent fields can change anywhere from days to thousands of years" (1). So much remains to be understood about these systems, and there is an incredible probability that new creatures or new systems of life will be discovered in the depths of the seas. We simply have not been looking for very long; on the timeline of human history, these past few decades barely represent a noticeable mark, yet they embody all we know of the bottom dwellers of the ocean.

Another set of landscapes existing in the realm of unfamiliarity is caves. Caves seem dark, cold and lifeless to us, but to many species they provide an environment of perfect conditions. Organisms here are separated into four categories: troglobites, troglophiles, trogloxenes, and habitual trogloxenes. Troglobites are the only type that live entirely in the dark zone of a cave and cannot survive anywhere else. Troglophiles could actually live on the surface if the environment were comparable to that of a cave. Trogloxenes "occur in caves but don't complete their entire life cycles within a cave" (3), and habitual trogloxenes are found in caves only during certain periods of their life cycle.

For such a dark, unfamiliar landscape it is surprising to me how much is known about cave organisms and their life cycles. Many creatures live in puddles made by water dripping from the ceiling of a solution cave. A solution cave is actually created by water coursing through carbonate and sulfate rocks including limestone, dolomite, marble and gypsum (3). Stalactites and stalagmites can be found in these caves along with many unique species of animals. For instance, glowworms (otherwise known as fungus gnats) can be found on the ceilings and walls of some caves. When still in the larval stage, this creature glows because of a chemical called luciferin "which can be 'burned' with oxygen to produce water, carbon dioxide, and light---the exact opposite of photosynthesis" (4).

There are creatures that survive only on the limited food sources inside the cave; however, some animals depend on outside sources being brought into the cave (especially in certain caves where plant life is impossible). To survive in this environment, "sensory systems are adapted...Also since food is scarce in most of these caves, since plant life can't grow in the absence of light, there are metabolic changes to account for this deficit" (3). Sometimes organic matter flows into the cave when water flows in, or flows up from the ground. Other nutrients come from the guano of birds that nest in caves, such as the green-masked balaclava-bird. This "supports a thriving community of fungus and insect life, which in turn are eaten by larger organisms" (4). Bats also roost in caves, providing organic material rich in nitrates to many permanent cave-dwelling species. Therefore it seems that some connection with the outside world is necessary to many cave life-systems.

The ocean floor and the caverns of the earth have one major thing in common – they are not practical environments for a human to live in. They remain mysterious although it seems that a lot of progress has been made in the exploration of these two environments, particularly caves. This is surprising because there are many different types of caves and generally they are hard to enter and explore. Perhaps we know more about caves because they incited curiosity before the bottom of the ocean ever did; we can see the entrances to caves, and we imagined what creatures might live inside them. Also, the technology did not need to come as far to begin exploring caves (not including sea caves and ice caves).

In conclusion, finding every living organism on this planet is probably impossible. Discovering every living species is probably impossible. But beginning to explore all available landscapes and accepting the possibility of finding life is an important undertaking. To find life one needs curiosity, technology, and continuous, collective observations over time. This is true in unfamiliar settings as well as familiar ones – when our class explored the courtyards of the science building, treating them as foreign landscapes called Planet Nearer and Planet Farther, we were very successful with discovering living organisms and categorizing what we saw. The question "What is Life?" is better approached with a wider arena of understanding in terms of landscapes and habitats. The fact that we are discovering new organisms in unlikely places is a credit to the curiosity and the hypothesizing ability of human beings.

References

1)Information about black smokers, an in-depth site about underwater vents.
2)More information about black smokers.
3)Caves, a thorough look at caves and the different categories of cave-dwelling creatures.
4)Cave creatures, a guide to specific types of cave creatures in the Waitomo caves.


Sleep Paralysis
Name: Rochelle M
Date: 2003-11-09 23:22:02
Link to this Comment: 7169


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Rochelle Merilien
Class: Biology 103
Professor: Paul Grobstein
Due Date: 11/10/03

Sleep Paralysis


You are lying in bed taking a much-needed nap. You have had a long day and this little refresher is just what you need. You are slowly waking up and becoming aware of what is going on around you. You can hear someone in the kitchen cooking, and through the open window by your bed you can hear the sounds of the neighborhood kids jumping rope and playing hand games. You can even hear Old Mrs. Jones yelling at Little Johnny for running all over her flowers. You have been sleeping for about an hour and you feel that it is about time to get up. So you open your eyes, or at least you think you do. For some reason, they are not open. So you think to yourself, "That is odd, I thought I mentally told my eyes to open?" So you try again, and this time you hear your voice in your head say, "Eyes open;" but again nothing happens. Now you think maybe you are really out of it, and that you must be extremely tired and just need to rub your eyes a little to get them moving. So next you try to move your arm, only it is stuck. Then you realize that your entire body is stuck. You think that this situation has to be unreal. You are awake; you have to be. You can obviously think to yourself, and you can hear everything that is going on inside and outside, but why are you not moving? You try to open your mouth and call for help, but you cannot do that either. You are completely paralyzed! Then you start to think that this is some sort of nightmare, and it is, except it is very much real. You are experiencing sleep paralysis.

Sleep paralysis is a condition that occurs either at the onset of sleep or upon awakening. The medical terms for the two forms of sleep paralysis are hypnogogic and hypnopompic (1). When a person falls asleep, the body secretes hormones that relax certain muscles within the body, causing it to go into paralysis. This prevents the body from acting out a person's dreams, which could result in an injury. Sleep paralysis generally runs in families or occurs in those who suffer from narcolepsy (2), but there is currently no explanation for why some people get it while others do not. While researchers say that sleep paralysis is not physically harmful, because the body is supposed to come out of that state within a few minutes of waking, they do not consider the toll the experience can take on a person emotionally. The strain and stress this condition causes can affect someone mentally and emotionally, even later in life.

The reason I decided to research and write about this topic is that throughout my senior year in high school, I suffered from sleep paralysis and experienced situations much like the one described in the first paragraph. My senior year was packed with extracurricular activities: I worked part-time, performed in three school plays and was also active in my religious responsibilities. It was April when I had my first experience with sleep paralysis. I honestly thought that I was losing my mind. It was a Saturday afternoon and I was taking a nap, having fallen asleep watching TV. One of the first things I became aware of, after my initial shock at being unable to move any of my limbs, was that the TV had been turned off. I could also hear my mother cooking in the kitchen, and all I could think was, "This cannot be real. It has to be a dream; no one goes to sleep with the ability to move only to wake up paralyzed!" I was somehow able to calm myself down and fall back asleep. When I woke up again, I was able to move, and the first thing I did was tell my mom. Being from the Caribbean, she told me that when things like that happened back home, people would say it was demons. I honestly did not know how to respond to that, but I told her it was one of the scariest experiences of my life and that I hoped it was a one-time occurrence. Imagine my dilemma when it occurred two weeks later, and then again a few weeks after that. It would always occur on Saturdays between 2 and 4 o'clock in the afternoon. The difference between these occurrences and the first was that I could never calm myself back to sleep. I would wake up immobile, and immediately the fear would set in: the fear that I was no longer in control of my body. It was like being told I could not control my life. I would lie there wanting to scream, willing some body part to move or for someone to come into my room and wake me up for some stupid reason. Then, a minute later, I would regain my mobility. I would be left shaking and breathing heavily as if I had just been suffocating.

While some of the triggers for sleep paralysis are fatigue, anxiety, and continuous changes in daily routine, I find it interesting that I have not suffered an episode during my attendance at Bryn Mawr, a place where I feel I have experienced much more anxiety and fatigue. For the month and a half that I suffered from frequent sleep paralysis episodes, I dreaded taking naps on Saturday afternoons. I would torment myself thinking that there was always a chance I might fall asleep, wake up, and not be able to come out of my frozen state. As the months went by I was able to take naps on Saturdays again, and I thought that the episodes had finally stopped. I was able to take all of my emotional feelings about sleep paralysis and lock them away in the back of my mind, only to be released for anecdotal discussions. I felt great and thought I would never have to deal with it again, until, during one of our school breaks, I had another episode. It was then that I knew that it wasn't really over, that there was always a possibility of it coming back.

Sleep paralysis is real. Although it does not affect everyone, it does occur; I and others I know who suffer from it are living proof of that. There are those who pass it off as mere hallucination and do not believe in it because it is not commonly discussed or well known. In researching sleep paralysis, I feel that even those who do know about it and have written about it do not validate the emotional injury that suffering from this condition brings upon a person. Many of the web pages I visited state that sleep paralysis is not harmful to a person physically, which is true, but they rarely make any mention of the agitation, despair, and fear that a person experiences when going through it. Only one page I visited actually offered suggestions as to what a person can do to calm down when waking up in paralysis. It is very easy to say, "Take deep breaths and concentrate on trying to move one small body part" when one is awake, but it is quite different to try to do that from within the paralysis.

(1) Hypnogogic refers to the period when the body is in paralysis before a person falls asleep and hypnopompic refers to the period when the body is in paralysis when a person is about to wake up.
(2) Narcolepsy is a neurological condition where a person has uncontrollable attacks of deep sleep.

References

1)Stanford University Sleep Paralysis Information Page, Dr. William C. Dement's page on sleep paralysis
2)skeptic homepage, The Skeptic's Dictionary
3)Sleep Paralysis information, Sleep paralysis is normal


Why is Diabetes an Epidemic in the African America
Name: Ramatu Kal
Date: 2003-11-10 02:25:09
Link to this Comment: 7170

Ramatu Kallon
11/10/03
Biology 103
Professor Grobstein



"The facts are clear: The diabetes epidemic sweeping the U.S. is hitting the African American community particularly hard, according to doctors." (2) Diabetes is defined as, "A disease that affects the body's ability to produce or respond to insulin, a hormone that allows blood glucose (blood sugar) to enter the cells of the body and be used for energy." (1) There are two types of diabetes: type 1 diabetes and type 2 diabetes. Type 1 diabetes, which usually begins during childhood or adolescence, "Is a condition characterized by high blood glucose levels caused by total lack of insulin. This occurs when the body's immune system attacks the insulin producing beta cells in the pancreas and destroys them.." (2) Type 2 Diabetes, most common form of the disease, "Usually occurring in middle age adults after the age of forty-five, is a condition characterized by high blood glucose levels caused by either lack of insulin or the body's inability to use insulin efficiently." (2) National health surveys over the past 35 years show that the number of African American's that have been diagnosed with diabetes is drastically increasing. In fact, it has been reported, "Out of 16 million Americans with diabetes, twenty-three million are African Americans." (3) There are clearly many implications on why diabetes is so rampant in the African American community, those of which will be discussed in this report. In this report, I will exam aspects of the "African American Culture," in order to determine whether those aspects have anything to do with the reasons why diabetes is higher in the African American community, more so than others.

"Have you ever heard in the Black culture someone say to another "I'm going home to grease?" or "Mama can sure burn." Do they mean that literally? Is there a lot of grease in soul food? Do African Americans like their food well done or almost burnt? Do greens and beans require pork to satisfy as soul food? Is this a legacy from slavery that remains with us 135 years later?" (4) These rhetorical questions are solutions to why diabetes is most prevalent in the African Americans community. "Fifty percent of African American women suffer from obesity. African American adults have substantially higher rates of obesity than white Americans." (3) Overweight is a major risk factor for diabetes 2 in the African American community. Excess amounts of fats and sugars are killing the African Community. In the bestseller Satan", by Dr. Jawanza Kunjufu, she argues that "Diabetes is the third leading killer after heart disease and cancer among African Americans." (4) She goes on to say that, "Historically black people have played diabetes off and commonly referred to this deadly disease by saying Mama has a sugar problem." (4) In order for the expansion of diabetes to lessen in the black community, people have to be comfortable enough to name the disease and realize that "if mama has a sugar problem," then mama needs to stop eating five pounds of sugar! Many African Americans tend characterize diabetes as being a "sugar disease", but there are so many other factors besides sugar, to take into account when talking about diabetes. For one, a high amount of fat intake can be a huge risk factor for diabetes 2. My parents both have diabetes 2 (which is common in African Americans) and their parents and grand parents suffered from diabetes as well. The manifestation of diabetes in my family history is not surprising to me, primarily, because my mother cooks with massive amounts of oil and sugar. 
My mother generally cooks with palm oil, which is notorious for clogging the arteries, which can put people at greater risks for high bloods pressure, diabetes and obesity. According to ADA (American Diabetes Association), "Many African Americans who have diabetes know they have it, but continue their same diet." (4) That statistic holds true for my parents, whom know they have diabetes, yet continue to eat with the same amount of oils, sugars and fats as before.

Poor dieting is another diabetes risk among all ethnicities, particularly African Americans, who reportedly have a diet higher in sugar and fat and tend to neglect vegetables and fruits. Last summer, I worked at a camp that was predominantly African American. At every meal, the campers were given a main entree with vegetables and fruits as side dishes. What disturbed me most about the meals was that the campers never ate their vegetables, nor were they encouraged to eat them. When children see their parents and peers consuming more sugar and fat and fewer fruits and vegetables, they are often going to do the same. Thus, the cycle of diabetes continues through the family lineage.
"The common finding that diabetes runs in families indicates there is a strong genetic component to type 1 and type 2 diabetes." (3) Some researchers believe that inheritance of diabetes, specifically type 2, is more apparent in the African American community than in any other race. African American children have recently been reported as being more susceptible to type 2 diabetes than whites. Researchers also believe that "African Americans inherited a 'thrifty gene' from their African ancestors." (3) This "thrifty gene" enabled Africans, during the "feast and famine" cycle, to use food energy more efficiently when food was scarce. (3) Today, however, the "thrifty gene" that was meant for survival makes African Americans more susceptible to type 2 diabetes.

So far in this report, I have given some of the causes of diabetes in the African American community. One piece is missing from this puzzle: the effects of diabetes. "Compared to white Americans, African Americans experience higher rates of diabetes complications such as eye disease, kidney failure and amputations." (3) Other factors that influence these complications are high blood pressure, cigarette smoking and lack of exercise. It is unfortunate that so many diabetics, particularly African Americans, do not exercise. In the NHANES survey, "fifty percent of African American men and sixty-seven percent of African American women reported that they participated in little to no leisure time physical activity." (3)
There is obviously still a sense of apathy among many Americans, particularly African Americans, when it comes to caring for diabetes. In this report, I have discussed several aspects of the "African American culture" that can contribute to the high diabetes risk in the African American community: excess amounts of oils and sugars in food, improper dieting and apathy toward treating the disease. Diabetes is a major disease in all ethnicities, particularly African Americans, and can be deadly if not treated properly. If the cycle of diabetes is to lessen in the African American community, people have to take the approach of eating right and exercising, or else diabetes will continue to run rampant throughout the community.


WWW Sources

1) a rich resource from the diabetes community outreach project

2) a rich resource from the department of health and diabetes

3) a rich resource on diabetes

4) a rich resource on diabetes


Why is Diabetes an Epidemic in the African America
Name: Ramatu Kal
Date: 2003-11-10 02:25:16
Link to this Comment: 7171

Ramatu Kallon
11/10/03
Biology 103
Professor Grobstein



"The facts are clear: The diabetes epidemic sweeping the U.S. is hitting the African American community particularly hard, according to doctors." (2) Diabetes is defined as, "A disease that affects the body's ability to produce or respond to insulin, a hormone that allows blood glucose (blood sugar) to enter the cells of the body and be used for energy." (1) There are two types of diabetes: type 1 diabetes and type 2 diabetes. Type 1 diabetes, which usually begins during childhood or adolescence, "Is a condition characterized by high blood glucose levels caused by total lack of insulin. This occurs when the body's immune system attacks the insulin producing beta cells in the pancreas and destroys them.." (2) Type 2 Diabetes, most common form of the disease, "Usually occurring in middle age adults after the age of forty-five, is a condition characterized by high blood glucose levels caused by either lack of insulin or the body's inability to use insulin efficiently." (2) National health surveys over the past 35 years show that the number of African American's that have been diagnosed with diabetes is drastically increasing. In fact, it has been reported, "Out of 16 million Americans with diabetes, twenty-three million are African Americans." (3) There are clearly many implications on why diabetes is so rampant in the African American community, those of which will be discussed in this report. In this report, I will exam aspects of the "African American Culture," in order to determine whether those aspects have anything to do with the reasons why diabetes is higher in the African American community, more so than others.

"Have you ever heard in the Black culture someone say to another "I'm going home to grease?" or "Mama can sure burn." Do they mean that literally? Is there a lot of grease in soul food? Do African Americans like their food well done or almost burnt? Do greens and beans require pork to satisfy as soul food? Is this a legacy from slavery that remains with us 135 years later?" (4) These rhetorical questions are solutions to why diabetes is most prevalent in the African Americans community. "Fifty percent of African American women suffer from obesity. African American adults have substantially higher rates of obesity than white Americans." (3) Overweight is a major risk factor for diabetes 2 in the African American community. Excess amounts of fats and sugars are killing the African Community. In the bestseller Satan", by Dr. Jawanza Kunjufu, she argues that "Diabetes is the third leading killer after heart disease and cancer among African Americans." (4) She goes on to say that, "Historically black people have played diabetes off and commonly referred to this deadly disease by saying Mama has a sugar problem." (4) In order for the expansion of diabetes to lessen in the black community, people have to be comfortable enough to name the disease and realize that "if mama has a sugar problem," then mama needs to stop eating five pounds of sugar! Many African Americans tend characterize diabetes as being a "sugar disease", but there are so many other factors besides sugar, to take into account when talking about diabetes. For one, a high amount of fat intake can be a huge risk factor for diabetes 2. My parents both have diabetes 2 (which is common in African Americans) and their parents and grand parents suffered from diabetes as well. The manifestation of diabetes in my family history is not surprising to me, primarily, because my mother cooks with massive amounts of oil and sugar. 
My mother generally cooks with palm oil, which is notorious for clogging the arteries, which can put people at greater risks for high bloods pressure, diabetes and obesity. According to ADA (American Diabetes Association), "Many African Americans who have diabetes know they have it, but continue their same diet." (4) That statistic holds true for my parents, whom know they have diabetes, yet continue to eat with the same amount of oils, sugars and fats as before.

Poor dieting is another risk of diabetes amongst all ethnicities, particularly the African Americans. African Americans reportedly have a higher sugar and fat diet and tend to neglect eating vegetables and fruits. Last summer, I worked at a camp, which was predominantly African American. At every meal, the campers were given a main entree with vegetables and fruits as side dishes. What disturbed me the most about the meal was that the campers never ate their vegetables nor were they encouraged to eat them. When many children see their parent and peers consuming more sugar and fat and less fruits and vegetables, they are often going to do the same. Thus, the cycle of diabetes continues through the family lineage.
"The common finding that diabetes runs in the families indicates there is a strong genetic component to type 1 and type 2 diabetes." (3) Some researchers believe that inheritance of diabetes, specifically diabetes 2 is more apparent in the African Americans community, than any other race. African American children have most recently been reported as being more susceptible to diabetes 2 than whites. Researchers also believe that "African Americans inherited a "thrifty gene" from their African ancestors." (3) This "thrifty gene" enabled "Africans during the "feast and famine" cycle, to use food energy more sufficiently once food was scarce." (3) However, today the "thrifty gene" that was meant for survival makes African Americans more susceptible to diabetes type 2.

So far in this report, I have given some of the causes of diabetes in the African American Community. There is one piece missing to this puzzle and that is the affects of diabetes. "Compared to white Americans, African Americans experience higher rates of diabetes complications such as eye disease, kidney failure and amputations." (3) Some other factors that influence these complications are high blood pressure, cigarettes smoking and lack of exercise. It is unfortunate that so many diabetics, particularly African Americans do not exercise. In the NHANES survey, "Fifty percent of African American men and sixty-seven percent of African American women reported that they participated in little to no leisure time physical activity." (3)
There is obviously still a sense of apathy in many Americans, particularly African Americans, when it comes to caring for diabetes. In this report, I have discussed several aspects of African American culture that can contribute to the high diabetes risk in the African American community. The risks discussed include excess amounts of oils and sugars in food, improper dieting and apathy towards treating the disease. Diabetes is a major disease in all ethnicities, particularly in African Americans, and can be deadly if not treated properly. If the cycle of diabetes in the African American community is to lessen, people have to eat right and exercise, or diabetes will continue to run rampant throughout the community.


WWW Sources

1) , a rich resource from the diabetes community outreach project

2) , a rich resource from the department of health and diabetes

3) , a rich resource on diabetes

4) , a rich resource on diabetes


Motion Sickness
Name: Bessy Guev
Date: 2003-11-10 10:08:22
Link to this Comment: 7174


<mytitle>

Biology 103
2003 Second Paper
On Serendip


Ever felt carsick, airsick or seasick? Motion sickness is the most common medical problem associated with travel. As a child I was always told that "it was in my head," that if I wanted to, I could make it go away. I was made to believe that motion sickness was a psychological problem. To a certain extent it is true that it is in my head, but it is not a psychological defect; rather, it is a disorder that occurs when conflicting sensory information is sent to the brain. This mild and self-treatable disorder can affect anyone, but recent studies seem to imply that motion sickness may affect certain groups of people more than others. This paper will discuss the causes of motion sickness and will question the genetic and racial implications as contributing factors.


The anatomy of balance


Balance is maintained by a complex interaction of sensory parts of our body. The first are the inner ears, which monitor the directions of motion (such as side to side, back to front, up and down, and turning). Some people may feel dizzy without spinning or turning; this dizziness is sometimes caused by an inner ear problem. Changes in the fluids of the semicircular canals of the inner ear are one of the contributing factors to motion sickness. (1). Second, the eyes monitor where the body is in space and also the direction in which the motion is taking place. Third, the skin pressure receptors (joints and spine) send messages to the brain to report which part of the body is down and touching the ground. Lastly, the muscle and joint sensory receptors are in charge of informing the brain which parts of the body are in motion. Through the interaction of all these parts, the central nervous system (the brain and the spinal cord) receives and processes the information sent by the four systems mentioned above to coordinate balance. (2) If any of the four sensory systems is not in accord with the rest, the resulting conflict leads to symptoms of nausea, dizziness, and sometimes vomiting. These symptoms are all pertinent to motion sickness. For example, suppose you are riding a Greyhound bus home for Thanksgiving and decide to read a cookbook. While you are reading recipes, your eyes sense that your body is stationary; they cannot detect that you are moving because you are inside the bus reading a book. However, your skin receptors and your inner ear fluids sense that your body is in motion, since you are riding a moving bus. Consequently, your brain receives mixed messages, making you susceptible to motion sickness, or becoming "carsick".


Risk Factors


There are risk factors that may increase your chances of getting motion sickness. These include long or turbulent car, boat, plane, or train rides; amusement park rides; anxiety or fear; smoke or fumes; poor ventilation; and having a minor illness, a hangover, overeating, or overtiredness in the twenty-four hours before travel. Moreover, the following risk factors seem to have the most significance: A) Age; children have a greater tendency than adults to get motion sickness. In this situation, the cliché "you will grow out of it" is true. B) Family members who get motion sickness. This factor may imply that there exists a genetic link to the disorder. How could genes affect the equilibrium of our sensory systems? There is no clear hypothesis to explain the relation between DNA and motion sickness, but there are medical conditions linked to genetics that affect our sensory systems. Studies have shown that neurological diseases and allergies are genetically linked; both conditions affect the sensory systems. (3)


In the course of my research, one article stated "....some families suffer from motion sickness more than others, there is also a racial difference which was shown in a medical trial...the Asian-American children suffered the most sickness..." (4) How could motion sickness affect one race more than another? Or do Asian-American children travel more than African American children? I think not. This statement was immediately troublesome for me. Motion sickness (as stated above) is caused by a biological conflict influenced by one's moving surroundings. Race is a word used to describe people of different nations and should therefore not be used to imply that motion sickness is part of one's self-identity. It is true that some individuals have been naturally prone to motion sickness since childhood (including myself), but this should not be because one is Latino or African American. In fact, I dismiss the notion that race has anything to do with having motion sickness. Perhaps it would have been better to investigate the environment and conditions of where people live; there may be risk factors related to one's geographical location.


1)What Causes Motion Sickness
2)Dizziness and Motion Sickness
3)What's Motion Sickness
4)What Causes Motion Sickness


On the Road to a Unified Science of Culture: Bewar
Name: Su-Lyn Poo
Date: 2003-11-10 12:56:14
Link to this Comment: 7177

<mytitle>

Biology 103
2003 Second Paper
On Serendip

      Culture has developed far beyond the requirements for survival, such that our forays into art, music and pure mathematics are 'useless' from the biological point of view. In "The Selfish Gene", Dawkins (1987)5 introduced the concept of the meme, analogous to but separate from the gene, to explain this puzzling phenomenon. The resultant field, memetics, has been a recent battleground between various disciplines. While a natural science approach to culture remains the stage for the debut of a much hoped-for unified science, interdisciplinary work has yet to transcend traditional academic lines. Ignorance, prejudice and territoriality pose serious hurdles to the synthesis of science, which must, very simply, begin with the scientist.

      Memes are units of cultural transmission propagated by imitation and may include ideas such as natural selection and fairy tales, behaviors such as shaking hands and sitting upright, and styles such as baggy pants and slang. Like genetic evolution, memetic evolution fits the classic 'survival of the fittest' scenario: the process of replication produces variation that is acted upon by selection. However, memes exist for their own sake, not for the sake of man or the sake of genes. In this sense, they are 'selfish', and the separation means that human culture can no longer be explained in terms of biological advantage (Dawkins 1987)5.

      Memetics sprang from Dawkins' meme concept as a natural science approach to culture, and many grand visions have been penned for this, the final frontier of the unified science. Wilson exhorts the synthetic scientific method, which he terms consilience. He imagines connecting causal explanations across all levels of organization and between all branches of learning as the "Ariadne's thread" that is needed to traverse "the labyrinth of empirical knowledge" (Wilson 1998: 73)10 . Similarly, Plotkin (2002)9 thinks of complete intertheoretic reduction as the unattainable ideal, but believes that the possibility of some reduction by explanatory causal mechanisms extending across some levels is sufficient. He emphasizes that unified science requires all science to be done, and so does not sideline the work of social scientists. More importantly, both scientists believe a unified science of culture is possible because humans are products of nature and natural processes.
      Although a relatively new field, thus far held at bay by conceptual disagreements, the ranks from which the meme debate pulls its opponents are admirably wide. Cultural evolutionists Boyd and Richerson (2000a)2 propose the study of cultural change as a population process, as most cultural information is transmitted not through genes but through teaching and imitation, which also affect which memes are acquired. Conte (2000)4, a psychologist, points to social cognitive processes as both means of acquisition and source of selection, in which the autonomous memetic agent is liable to social influence but decides, based on internal criteria and motives, whether to accept or reject it. Evolutionary biologists Laland and Brown (2002)8 suggest applying tests of genetic evolution, such as searching for character displacement, convergence and shifts towards new equilibria after sudden disturbances, to determine where memetic evolution occurs.
      The criticisms of memetics similarly represent the lenses of different fields. Plotkin, an evolutionary psychologist, objects to imitation as the only mechanism of transmission, given that much of culture revolves around shared understandings, values and beliefs, which can only be acquired through memory and abstraction (2000)9 . It has also been argued that natural selection acting on random variation is not the only process shaping human culture. Memes are often transformed during transmission as a result of purposeful human decision-making (Dennett 1995)6, improvement and synthesis (Boyd & Richerson 2000b)3.
      Though these suggestions hail from different disciplines, they stand individually rooted rather than bridging separate fields. It is because of this lack of collaboration that memetics has managed to limp along while the group of social scientists traditionally charged with the study of culture remains absent from the table. Bloch, a social anthropologist "well-disposed" towards memetics (Bloch 2000)1, tactfully points out that anthropologists have known, since Tylor and Boas in the late nineteenth century, that information can replicate, persist and transform by non-genetic means. Sadly, memeticists have proven themselves quite ignorant of the existing literature expanding on the points they are now only proposing.
      Anthropologists also express important objections to the meme. The transmission of culture by imitation runs against the understanding that culture is actively made and remade during communication, but more fundamentally, anthropologists take exception to the idea that culture is made of distinguishable 'bits' that replicate independently (Bloch 2000)1. That pushes the gene analogy too far. The crucial question that anthropologists raise, then, is whether memes even exist in the first place.

      Hull, however, is not impressed by the paralyzing debates over conceptual issues. He argues that critics of memetics have set too high a standard for scientific knowledge, and do not realize that terms do not start out clear and uncomplicated (2000)7. Simplified models and crude methods can be very useful in refining theory and experiments (Hull 2000; Laland & Brown 2002)7;8, though this is an approach that social scientists remain hostile towards. Laland and Brown (2002)8 address the issue that memes have ill-defined boundaries by pointing out that genes and species do too, yet this has not prevented very interesting and very important research from being conducted. Furthermore, the operational criteria for applying concepts will emerge only from doing memetics (Hull 2000)7.
      Bloch thus points out that the difficulty in achieving a unified science of culture arises, essentially, from putting specialists in different aspects of a unitary phenomenon in the same room. Apart from separate styles and traditions, cooperation fails as a result of "the crudest misunderstandings of either the nature of the social and the cultural by the natural scientists or of the biological and psychological by social scientists" (Bloch 2000: 190)1. Prejudice, suspicion and territoriality remain as barriers to true interdisciplinary work.
      The problem with unified science lies not in any internal logic. The idea itself is sound and deserves serious consideration. However, its proponents are prone to hold the rest of the academic world in limbo. While the natural sciences have indeed gained many concrete footholds, the social sciences have not been stagnant - debates rage on about methodology and conceptual frameworks. The natural sciences are now in a position to contribute foundational knowledge to the cause, but they must do so with an awareness of the rich dynamics that have shaped the social sciences and the issues they tackle. Unified science does not mean exporting scientific methodology wholesale. It requires that both the natural and social sciences compromise, surrendering previously held conceptions of each other and some of their own methodological autonomy. New methods will arise through collaboration.
      Similarly, anthropologists must not be blindly prejudiced against science as a result of the time when they were once strange bedfellows in the form of social Darwinism and eugenics. That enterprise was based on seriously flawed scientific understanding and while science today is far from perfect, it is poor judgment to hold on to demons in the past. The ignorance of memeticists towards anthropology is all the more reason to jump in, not to return the favor. Only from the inside can anthropologists understand what memeticists are trying to achieve and the means to do so, and only from the inside can they deliver anthropological theory accordingly and exert some influence over the way in which the field will develop. The scientific approach to culture will not just go away if ignored. More importantly, by withholding any contribution to an interdisciplinary study of culture in a deluded attempt to control the territory, anthropologists are essentially excluding themselves from what will inevitably be a very productive pursuit.
      The science belongs to no-one; every voice is an input. Traditional claims of academic territories no longer hold as new methodologies become available and boundaries shift. The way is paved for a unified science that embraces perhaps the most complex question of our existence. While many recognize the value of and need for such interdisciplinary research, few appear to realize that it is not enough for teams to be assembled like mosaics from the ranks of biologists, psychologists, anthropologists and so on. Instead, the new challenge is for these individuals to themselves espouse the unified science. Ignorance can no longer be excused by specialism, nor prejudice by unfamiliarity. These are no longer the criteria by which scholars will be created and research dictated. Science at the edge pushes for synthesis to be achieved even before data collection has begun.


References

1) Bloch, M. 2000. A well-disposed social anthropologist's problems with memes. In R. Aunger (Ed.), Darwinizing Culture: the status of memetics as a science. Oxford; New York: Oxford University Press.

2) Boyd, R. and P. J. Richerson. 2000a. Memes: Universal acid or a better mousetrap? In R. Aunger (Ed.), Darwinizing Culture: the status of memetics as a science. Oxford; New York: Oxford University Press.

3) Boyd, R. and P. J. Richerson. 2000b. Meme theory oversimplifies how culture changes. Scientific American, 283(4): 64-72.

4) Conte, R. 2000. Memes through (social) minds. In R. Aunger (Ed.), Darwinizing Culture: the status of memetics as a science. Oxford; New York: Oxford University Press.

5) Dawkins, R. 1989. The selfish gene. (2nd ed.) Oxford; New York: Oxford University Press.

6) Dennett, D. 1995. Darwin's dangerous idea. London: Penguin.

7) Hull, D. 2000. Taking memetics seriously: memetics will be what we make it. In R. Aunger (Ed.), Darwinizing Culture: the status of memetics as a science. Oxford; New York: Oxford University Press.

8) Laland, K. and G. Brown. 2002. The golden meme: memes offer a way to analyse human culture with scientific rigour. Why are social scientists so averse to the idea? New Scientist 175(2354): 40-43.

9) Plotkin, H. 2002. The imagined world made real: towards a natural science of culture. London; New York: Allen Lane.

10) Wilson, E. O. 1998. Consilience: the unity of knowledge. New York: Knopf: distributed by Random House.

Suggested Web Resources

"What is a meme?"
Several scientists answer the question.

Memes: the new replicators.
The final chapter from Richard Dawkins' "The Selfish Gene" (1976, 1st ed.). Oxford; New York: Oxford University Press.

Memetics publications on the web
A good list for further readings available online.

Alt.memetics
Memetics user group and forum on Google.


Do You Choose to be Homosexual?
Name: Alice Gold
Date: 2003-11-10 13:18:18
Link to this Comment: 7178


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Is it possible for one to choose his or her sexual orientation? Is one's sexual orientation something that can be changed, or is it a fixed attraction? These are a few questions, among many others that have been raised by researchers and religious organizations, as well as everyday people. Particularly, over the last decade there have been various debates over whether sexual orientation is based on genetic factors or whether it is a choice.

Most researchers find that homosexuality, like many other psychological conditions, is due to a combination of social, biological, and psychological factors (1). Psychiatrist Jeffrey Satinover believes influences including the postnatal environment have an impact on one's sexual orientation. Examples within this postnatal environment include cultural behavior as well as the behavior of one's parents and siblings (1). This is one indication that one's sexual orientation is determined at a young age, and is a lifestyle that is not chosen. This observation is supported by a statement issued by the American Psychological Association, whose spokesperson states that "...However, many scientists share the view that sexual orientation is shaped for most people at an early age through complex interactions of biological, psychological, and social factors" (1).

Richard Green, a psychiatrist at the University of California, Los Angeles, conducted a study that compared effeminate and "masculine" boys (3). In this study, Green found that children who grow up to become homosexual often engage in "gender-inappropriate play" in their early childhood. "Feminine" boys generally played four times as much with dolls, and about a third as much with trucks, as "normal" or "masculine" boys (3). At the end of his study, Green concluded that 75% of the effeminate boys grew up to be gay adults. He also found similar results among adult lesbians (3). Based on this study, one can further conclude that homosexuality is not a taught behavior, nor is it a behavior copied from other children in a family.

According to a study done by Simon LeVay, a former Associate Professor at the Salk Institute for Biological Studies, and current Adjunct Professor of biology at the University of California, sexual orientation is based substantially on biological makeup. LeVay found that the brains of a group of gay men differed from those of straight men (2). Specifically, the nucleus of the hypothalamus, which triggers male-typical sex behavior, revealed a small but significant difference in the clusters of neurons of homosexual men as opposed to heterosexual men. The nucleus was also found to look more like that of a woman, at approximately half the size of a heterosexual male's (4). In addition, LeVay discovered that the corpus callosum, an arched bridge of nervous tissue that connects the two cerebral hemispheres of the brain and allows communication between the left and right sides, is significantly larger in gay men than in straight men (2). Three years after LeVay's observations, molecular biologist Dean Hamer, of the National Institutes of Health in Washington D.C., found evidence that a specific gene carried on the maternal line had an influence on sexual orientation in men. These observations, in addition to many others, strongly suggest that sexual orientation is deeply rooted in biology, and is not simply a matter of choice.

On topics surrounding sexual orientation, there is generally a conservative view, which includes conservative Christian organizations, and a liberal view, which is composed mostly of religious liberals, gays, lesbians, mental health professionals, and human sexuality researchers (3). Conservative organizations believe that homosexuality is a chosen lifestyle, or something that one does. They believe that this choice is caused by poor parenting, and possibly demonic possession. Conservatives also think that one's sexual orientation is determined in his or her teenage years, and that homosexuality is an addiction similar to drugs and alcohol (3). At the opposite end of the spectrum, the views of most liberals coincide with the findings of many scientists and researchers. Liberals believe that homosexuality is not a chosen lifestyle, but is something that one is. They feel that it is genetically predetermined, in addition to some unknown environmental factor in early childhood. Furthermore, liberal thinkers believe that one's sexual orientation is determined by pre-school age, and is fixed and therefore unchangeable (3).

Based on research and observations over the last decade or so, it can be concluded that sexual orientation is determined more through genetic makeup than through any other suggested factor. One's sexual orientation cannot simply change overnight or within a couple of years. In many respects, it is much like singing and dancing. To elaborate, the biographies of many of today's stars note that a vast majority of them started singing or dancing between the ages of two and five. The same concept holds true for one's sexual orientation; it is developed early in one's childhood.

References


1)Is Sexual Orientation Fixed at Birth?


2)Is Being Gay Natural and Do We Have a Choice?


3)Homosexuality: Chosen Lifestyle or Fixed Orientation?

4)Homosexuality: Genetics & the Bible


Addicted to Coffee?
Name: Adina Halp
Date: 2003-11-10 14:07:34
Link to this Comment: 7179


<mytitle>

Biology 103
2003 Second Paper
On Serendip

As a sophomore in college, I know how important it is to get that first cup of coffee in the morning. That first cup of coffee, second cup, and third cup seem vital to the well-being of Bryn Mawr students all over campus. They help us to stay awake through our classes, hours of study, and even time spent socializing. But is caffeine really addictive? Ask any Bryn Mawr student, and chances are that she will answer with an emphatic "Yes!" Ask any scientist or doctor the same question and the answer is likely to be just as emphatic, but what that answer will be is much less predictable.

It is universally recognized that caffeine is a stimulant, a substance that causes the body to act differently from the ways that it would naturally act by inducing the "fight or flight" reactions that prepare the body for emergencies (1). However, it is still debated whether or not this stimulant is addictive. When deciding whether a substance is addictive, most professionals who make diagnoses in the United States and in many other countries turn to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, better known as the DSM-IV. This manual is published by the American Psychiatric Association and lists, among other things, the symptoms of all mental health disorders. According to the DSM-IV, the symptoms of substance dependence (in other words, substance addiction) are substance abuse, continuation of use despite related problems, increase in tolerance, and withdrawal symptoms (2).

The debate lies largely in the interpretation of these symptoms. What exactly constitutes a withdrawal symptom? Can having a headache for a few days even compare to the horrors experienced by heroin addicts who suddenly stop drug intake? Are there problems related to caffeine intake? What exactly is meant by an increase in tolerance?

It seems to me that all of these symptoms apply to people who have a regular intake of caffeine (e.g. coffee, tea, and cola drinkers), but to a much lesser extent than to people who are addicted to other substances such as prescription and illegal drugs. When a person who has been regularly consuming caffeine suddenly stops caffeine intake, that person is likely to experience symptoms such as irritability, inability to concentrate, constipation, and lethargy (1). A study in which people in a mental hospital were given either caffeinated or decaffeinated coffee produced results indicating that caffeine is in fact addictive: those who were given decaffeinated coffee increased their coffee intake to consume the same amount of caffeine as they had with fully caffeinated coffee (3). These results show people exhibiting symptoms of substance dependency.

As for whether there is an increase in caffeine tolerance, Georges Koob, Ph.D., reasons that caffeine intake does not stimulate a craving for more caffeine because the tolerance is so complete that the same results are felt after relatively large amounts of caffeine as are felt after relatively small amounts, and this "discourages a chronic destructive pattern of abuse." This is also the case with the hallucinogen LSD (3). Some experts also say that even small doses of coffee can cause negative effects in the body, such as tiredness in the afternoon when the caffeine wears off (1), and it is generally recognized that high doses of caffeine (about the equivalent of eight cups of coffee or more) cause anxiety, excitability, restlessness, dizziness, irritability, loss of concentration, gastrointestinal aches, headaches, and trouble sleeping (4).

Many other experts claim that caffeine is not addictive. Again, the interpretation of tolerance seems to be where many of the discrepancies lie. Herbert Muncie of the University of Maryland's family medicine department perceives tolerance as meaning that "you begin to need more and more, or a higher dosage of the chemical to experience the effects." He reasons that because this is not true of normal coffee intake, caffeine must not be addictive (5). Especially controversial was a study conducted by Astrid Nehlig, Ph.D. of the National Health and Medical Research Institute in Strasbourg, France, in which animals consuming the equivalent of one to three cups of coffee a day did not exhibit withdrawal symptoms when caffeine intake was suddenly stopped (3).

Central to Nehlig's argument is that in her experiment, small doses of caffeine did not trigger functional activity in the shell of the nucleus accumbens (6), the part of the brain that is involved in reward and punishment (7) and is heavily involved in addiction to other drugs such as amphetamines, cocaine, morphine, and nicotine (6). Nehlig did find, however, that the equivalent of seven or more cups of caffeinated coffee consumed by these animals did trigger activation in the nucleus accumbens (6). This last finding suggests that it is possible to become addicted to caffeine. Technically, according to this study, caffeine in large doses is addictive. I am cautious, however, about blindly accepting the results of Nehlig's study. Caffeine might not act exactly the same way on animals as it does on humans. No matter how similar humans are to some other animals, there is no guarantee that our bodies would react the same way in any given situation.

Using the DSM-IV's definition of substance dependence, it seems to me that even small doses of caffeine are in fact mildly addictive, even without the participation of the nucleus accumbens. Nowhere in the definition is the nucleus accumbens mentioned. Addiction can also be different from substance dependence. Perhaps caffeine is psychologically addictive in the same way that gambling is addictive.
Some experts are skeptical as to whether the negative effects felt by people who regularly consume caffeine can be considered to be "adverse". As Charles O'Brien, M.D. points out, linking caffeine addiction with addiction to serious illegal drugs could make serious drugs appear less dangerous than they really are (8). Although this might be an ethical issue, one should not refrain from calling caffeine addictive on the basis that it could influence children's decision to take illegal drugs. This is a matter that should concern parents and educators, not doctors and scientists.

In conclusion, it seems to me that small doses of caffeine are mildly addictive. Large doses are definitely addictive. However, I see little harm in becoming mildly addicted to this drug. The negative effects linked to this drug do not, in all cases, hinder the normal functions of the body and mind. The reputable Mayo Clinic even recommends drinking "a cup of coffee, tea, or a can of soda pop that contains caffeine" to people who have trouble staying awake at work (9). It should be up to the individual to determine how "adverse" these effects really are. If they are causing harm, one should make up one's own mind to stop or reduce caffeine intake. As for the long-term effects of caffeine use, research has linked caffeine to osteoporosis, birth defects, miscarriages, infertility, cancers, high blood pressure, premenstrual syndrome, ulcers and heartburn, fibrocystic breast disease, and heart disease (4); however, these studies are ongoing and beyond the scope of this report.

And now if you'll excuse me, I have a lot of homework and it looks like it's going to be a long night. I'm going to get myself a nice, steaming cup of coffee.


References

1)Today's Question, on DrWeil.com, a question and answer site about health and wellness.

2)AllPsych Online, a virtual psychology classroom.

3)Researcher Brews Debate About Whether Caffeine is Addictive, an article on the American Psychiatric Association Website.

4)Go Ask Alice!: Caffeine's effects on health, on the Go Ask Alice site, Columbia University's Health Question and Answer Internet Service.

5)Fix on this, coffee lovers: Caffeine may not be addictive, an article in the Diamondback, the University of Maryland's Independent Newspaper.

6)ScienceDaily News Release: Debate Brews over Caffeine Addiction – Study Also Confirms Caffeine Improves Alertness And Energy, on ScienceDaily, an online magazine.

7)nucleus accumbens, a short description of the nucleus accumbens on the Department of Integrated Science and Technology section of the James Madison University website.

8)Caffeine Myths and Facts, on koffeekorner.com, a coffee appreciation website

9)Sleepy at work? How you can stay awake, tips for staying awake at work on MayoClinic.com.


Pain and Panic: The Demons behind Biological Fear
Name: Lindsay Up
Date: 2003-11-10 16:05:26
Link to this Comment: 7182


<mytitle>

Biology 103
2003 Second Paper
On Serendip

"A variety of terms are used to describe fear. The Bible uses words like fear, afraid, terror, dread, anxious, tremble, shake, and quake over 850 times to portray this core human emotion. Healthcare professionals use terms like fear, anxiety, panic attack, and phobia to illuminate the spectrum of our fears." (2)

Our emotions are said to be the most subjective of all our biological components. It seems that we have a difficult time grasping them, and an even more difficult time controlling them. Fear seems to be one of the most challenging of our human emotions when it comes to trying to subdue it ourselves. When we see a creepy bug, or are caught off guard by an extremely loud noise, we jump before even thinking about it. It seems like a normal reaction, and then after the initial surprise we can assure ourselves that we are still alive, everything is fine. But what about people who have abnormal reactions to fears? People who develop a phobia that is not so easy to subdue?

These questions can be partly answered by looking at what happens in the brain when we are afraid. In an experience of danger the amygdala, a small structure located deep in the brain behind each ear, is alerted. In response to the frightening stimulus, the amygdala sends signals to the circulatory system: blood pressure goes up, heart rate speeds up, and muscles tense. Doesn't this response sound a lot like what we can see on the Discovery Channel? When a lion attacks, we can immediately see the antelope go into "defense mode." So basically, our initial reaction to fear is no different from the basic instincts of animals: an evolutionary response. (1)

But wait—animals do not, or CANNOT, get afraid of the same things that humans can. And I am fairly certain an antelope cannot be diagnosed with an anxiety disorder. Animals, for instance, do not live in fear that they might fail a test or lose their job. These human fears that are not simply instinctive reactions involve another part of the brain, the cortex. Humans can use cognitive reasoning to assess whether or not we should feel afraid. Charles Darwin posed the question, "Does the reaction to fear precede the thought?" (3) The answer is yes. Studies have shown that the pathways from the cortex to the amygdala are weaker than those that lead from the amygdala to the cortex. For us, this means that we have the physical reaction, our heart races, before we can think about it. (1) In other words, when it comes to fear, emotion takes precedence over rationalization, no matter how much we may not like it.

Knowing how humans have complicated the experience of fear, I would expect that the emotion's presence would vary greatly from culture to culture, and from century to century. Tribespeople of the African plains would probably rely more on the instinctual side of their fears when hunting large animals or defending themselves. In America today, we have more fear of a stock market crash than of a charging wildebeest. However, in our society many people, especially men, consider it a sign of weakness to admit they are afraid. Someone might have a prolonged fear of losing their job; they won't seek psychiatric help about it, but their physician might find they have the physical symptoms of an anxiety disorder, such as high blood pressure or a racing heart. (1)

It is estimated that about one-fifth of people have panic or anxiety disorders, while about one-tenth have some kind of phobia. (3) A phobia is defined not just as a recurring fear, but as one that has a serious impact on a person's life and daily activities. For example, a phobia of heights might prohibit someone indefinitely from crossing a bridge or flying, making it nearly impossible to travel. Anxiety disorders and phobias are both very closely related to the phenomenon of fear, but develop in different ways. Our tendency to experience anxiety is probably genetic. Studies show that two out of three people with a panic disorder are not the only ones in their family to have it. (1) But this raises the question: is a tendency toward fear and anxiety actually genetic, or simply passed from parent to child by suggestion? When doctors look at phobias, they are dealing with a specific form of fear or anxiety rather than a tendency to panic. So phobias are more associated with individual environmental experiences, fears that live on in our memory. (4) We are all familiar with the cultural image of the Vietnam War veteran who panics every time he hears a loud noise. This is an example of one kind of panic disorder, Post-traumatic Stress Disorder (PTSD). PTSD is prolonged fear resulting from a trauma, the most common example probably being a car accident. (1) So our individual experience of fear is somewhat influenced by our genes, but also by the many events that take place in our own lives.

Fear seems to work in mysterious ways. We often speak about "facing our fears" in order to get over them. To me this sounds a bit idealistic and cheesy, but it has been shown that exposure to a fear will reduce it, while avoiding it will more likely intensify it. In fact, this form of therapy is often used in treating phobia patients. People with phobias also often undergo PET scans to image how the individual's brain responds to the fear- or anxiety-triggering stimulus. (3) Perhaps PET scans help scientists answer questions about phobias, but I still have questions about fear in general. When I am on the edge of my seat in a scary movie, and I just know the killer is going to strike at any moment, does that make me jump a little higher when he finally does appear? At least for me, the expectation of being scared seems to trigger a stronger response. Also, do people in cultures where there is greater danger or lower life expectancy live in more fear and anxiety than Americans? Many scientists say that fear is not a cultural phenomenon, and that the percentage of people who suffer from fear and anxiety generally deviates by no more than three percent from country to country. But I still think that if I moved to a place like the Bahamas, my fear and anxiety levels would go way down. It seems that every culture associates fear with different things. For some, fear is the opposite of bravery. For others, it is the involuntary reaction that occurs when bombs can be heard dropping at night, a few towns over. I think that while our evolutionary reactions to fear function essentially in the same manner, the way that we live with and deal with our own fears varies greatly from person to person.

References

1) Exploring your Brain: Fear and Anxiety, a radio interview with three doctors about fear and anxiety and their related disorders.

2) C.A.R.E., a non-profit Christian organization seeking to help people with stress and anxiety.

3) Fear Itself, an article about fear and anxiety disorders.

4) Scientific American: Ask the Experts: Biology: Is our tendency to experience fear and anxiety genetic?


What Do Breast Implants Say About Popular Culture?
Name: Katherine
Date: 2003-11-10 16:56:24
Link to this Comment: 7183


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Breast augmentation is rapidly becoming a common procedure among women in the United States. Shows detailing the surgery on TV stations such as MTV and VH1 feature mothers and their daughters getting implants together and teenage girls thrilled with their new 34-D chests. What most of these shows don't mention are the possible risks and painful recovery that come with the procedure. That breast implants are becoming more and more an accepted part of popular culture raises several questions. Are implants as safe and easy as they seem? Are women getting implants because they expect them to radically change their lives? More importantly, does our culture really believe that breast implants somehow improve a woman's quality of life?
There are two kinds of breast implants. Silicone implants, currently under review for re-approval by the FDA, consist of a silicone pouch filled with a silicone gel. Saline implants, currently the only implants available unless a woman is part of a medical trial, are simply a silicone pouch that is filled with saline solution once it is implanted in the woman (1). The risks associated with both kinds of implant include implant rupture, capsular contraction (where the scar tissue around the implant tightens), calcium deposits in the tissue surrounding the implant, infection, hematoma, delayed wound healing, possible atrophy of breast tissue, and increased difficulty for medical professionals when reading mammograms (2). Rupture and capsular contraction are fairly common with both kinds of implant, and require that the patient undergo surgery to correct the problem. In fact, about 20% of women who sought breast implants for augmentation, and about half of those who had the surgery for reconstructive purposes, have reported having "re-operations" (3). Despite all of this, many women say that they were not given adequate information about the risks associated with breast implants before undergoing the surgery (4).
The recent push for approval of silicone implants is particularly problematic. Doctors and patients often prefer silicone implants because they more closely mimic the look and feel of breast tissue (1). Although there is little evidence supporting the claims made against silicone breast implants in the 1980's (which said that they contributed to autoimmune and connective tissue disorders), it can be said that silicone implants cause more problems than saline implants. When a saline implant ruptures, it deflates almost immediately, creating visible evidence of the problem (1). Silicone implants may show symptoms of rupturing, but many women have a "silent rupture" in which the scar tissue around the implant holds in the silicone gel. Since these women have no symptoms, the only way to identify the rupture is through MRI (5). What makes this particularly alarming is that the long-term effect of having silicone gel sitting in direct contact with scar and breast tissue is unknown, which is one of the reasons that the chairman of the FDA advisory panel, which voted in favor of approving silicone implants, asked that the FDA ignore the panel's advice (6). Long-term safety of silicone implants has simply not been demonstrated by any studies presented to the FDA, yet many in the plastic surgery community continue to push for their approval.
Aside from the physical risks of breast augmentation, there is the psychological aspect. Studies have shown that the connection some once claimed existed between breast implants and suicide is not a valid one. What is now considered more likely is that certain subgroups of women who seek breast implants are often in demographic groups that put them at a higher risk for suicide, but that this has nothing to do with the surgery itself (7). One doctor even goes so far as to claim that breast implants actually reduce the risk of suicide among women in these groups by improving their self-image (7).
This theory brings to light what I consider to be the most troubling aspect of breast augmentation. It appears as though many people, including the doctors performing the surgery, see implants as a quick fix for women's body image. There has been a 600% increase in the number of breast augmentation surgeries performed in the last decade, with 32,000 performed in 1992 and 225,000 performed in 2002 (8). Between 2001 and 2002 the number of women ages 18 and younger getting the surgery increased by 19% (9). That the procedure is becoming more common among women in general suggests a disturbing trend; the idea seems to be that because you can do something about your small breasts, you should do something about your small breasts. The increase in the number of younger women getting the procedure implies that society in general is becoming more accepting of the idea that a teenager who is unhappy with her body (which is fairly normal) needs to seek surgery to correct the problem, instead of first developing a healthier relationship with her body as it is before deciding on such a drastic procedure, the dangers of which she probably does not fully grasp.
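As a quick sanity check (my own illustrative calculation, not part of the cited source), the ~600% growth figure is consistent with the raw surgery counts quoted above:

```python
# Checking the quoted growth in breast augmentation surgeries:
# 32,000 procedures in 1992 vs. 225,000 in 2002 (figures from the text).
surgeries_1992 = 32_000
surgeries_2002 = 225_000

percent_increase = (surgeries_2002 - surgeries_1992) / surgeries_1992 * 100
print(f"Increase 1992-2002: {percent_increase:.0f}%")  # about 603%, i.e. the ~600% quoted
```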
I am not "against" breast augmentation or plastic surgery in general. Plastic surgery provides a valuable service to those who have been scarred in accidents, and to women who have lost their breasts to cancer. Even in cases where the procedure is non-restorative, breast augmentation can be beneficial for women who fully understand the procedure and acknowledge that it is more than likely not going to be a quick fix to their problems. The trend in popular culture, however, leans towards encouraging women to "explore the possibility of sculpting a new you" (10), as though surgery will somehow change and improve a woman's life. By encouraging this mentality, popular culture is able to avoid facing the problematic beauty ideals it constructs for all of its members, not just women. The idea that there is no reason to reevaluate the standards set by society because everyone can go under the knife to conform to those standards is a frightening one.


References

1)Saline Vs. Silicone Implants, Discusses differences in aesthetic qualities of implant types, as well as risk variances.

2)Potential Breast Implant Complications, Informs the reader of the various risks associated with breast implants and breast augmentation surgery.

3)Silicone Breast Implants Would Be Carefully Monitored, Lists some of the restrictions that would be imposed on silicone implants upon reapproval

4)Silicone Breast Implants Do Not Cause Chronic Disease, But Other Complications Are of Concern, Discusses the myths and truths about breast implant complications.

5)Silicone Breast Implants – Most Common Risks

6)FDA Advisor: Ignore Breast Implant Vote, Details the concerns that the FDA advisory board chair has about silicone implants.

7)Suicide Risk May Be Lower Than Expected, from the American Society For Aesthetic Plastic Surgery.

8)Silicone Breast Implants Redux

9)Silicone Breast Implants Could Make a Comeback After FDA Hearings, Talks about the rise in the number of breast augmentation surgeries.

10)Breast Enhancement For the Modern Woman, Article written by a plastic surgeon.


How We Measure Up: Height and Psychology
Name: Julia Wise
Date: 2003-11-10 17:08:49
Link to this Comment: 7184

<mytitle>

Biology 103
2003 Second Paper
On Serendip

Your height won't influence what you earn as much as your race or gender, but it may well be significant. In Britain and America, the tallest quarter of the population earns 10% more than the shortest quarter. A white American man averages a 1.8% higher income than his counterpart an inch shorter (1). Economics is not the only area in which taller people win: out of the US's 42 presidents, only eight have been below average height for the time. Most have been significantly taller than the average for white adult males of their eras (2). Tall men are also more likely to be married and have children (3).
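As a rough illustration of how the two figures above relate (the six-inch gap is my own assumption, not from the cited studies), the 1.8% per-inch premium compounds to roughly the same order as the 10% quartile gap:

```python
# Illustrative only: compound the 1.8%-per-inch income premium (white
# American men, per the text) over an assumed six-inch height difference.
per_inch_premium = 0.018
inches_taller = 6  # hypothetical gap, e.g. 5'7" vs. 6'1"

relative_income = (1 + per_inch_premium) ** inches_taller
print(f"~{(relative_income - 1) * 100:.1f}% higher income")  # ~11.3%
```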

Outside of normal height differences, people with growth deficiency are much more aware of the role height plays in their lives. A study done through a growth clinic showed that children with growth deficiency are more likely to have social problems, including lower social competence, increased behavior problems, and low self-esteem. Another study found lower rates of employment and marriage when children with growth deficiency grew up (4).

One theory of why tall people are more successful is that there is a stigma attached to shortness, and thus short people are seen as easier to dominate (2). Another theory is that, evolutionarily, tall people had an advantage in activities like hunting and thus became associated with positive traits (5). Perhaps we still retain this association unconsciously. A third theory is that taller people have a better self-image, and this increased confidence makes them more successful (2).

A factor that may influence both earnings and height is one's family background. Shorter men tend to come from bigger families with parents who have less education than those of taller men. This shorter height may be a result of poor childhood nutrition, and parents with less education are more likely to have children who also receive less education and therefore earn less. Family background is not the only influence, though, as shorter men still earn less than taller men from the same background (2).

Effects that appear to stem from one's adult height, though, may have a different cause entirely. Participants in one study were asked to report their heights at ages 7, 11, 16, and 23. The height that affected one's adult earnings, it turned out, was not the adult height but the height at age 16. (The others did not correspond.) While adult height was found to correspond to earnings in other studies, this seems to be because of the correlation between adolescent height and adult height (2).

I have observed one phenomenon of height's effect on psychology in my own life. When a group of people have been asked to line up by height, there's always some debate. I usually end up next to someone who I consider shorter than me - but she considers herself taller. One of us is clearly wrong, since we can't both be taller, but it really doesn't matter which of us is right. The interesting part is that both of us perceive ourselves as being taller. My theory is that because height and confidence are linked, how people see themselves affects how they see their height. Since I am about 5'4", on the lower half of the height scale for women, I suspect most girls around my height would like to be taller. When we have to evaluate ourselves, our self-images cause us to overestimate our own heights.

I think, then, that the biggest part of height's role in our lives is not measurable in feet and inches, but in our own minds. The fact that our adolescent heights instead of our adult heights influence our earnings means that employers are not doling out pay based on simple physical appearance, but something that has been with us for years. Our own social skills and attitudes towards ourselves would seem to be what matter here. Maybe it's true - the scrawny kids who got picked on in gym class really were changed by the experience. Maybe they weren't as sure of themselves as the taller kids, and it affected how they did in school later, how well they worked with other people, how much they were valued as workers. Maybe it changed their success in love or presidential elections. It would seem the converse is also true, that one's self-image can change one's perception of height. So in a way, it's in our heads - the important thing is not how tall we are, but how that changes our own mindset.

References

1) National Longitudinal Surveys, an abstract of a study from The Economist

2) The Effect of Adolescent Experience on Labor Market Outcomes: The Case of Height, an extensive U Penn study

3) Tall Men Do Get the Girl, an article from Psychology Today

4) Concerns about Growth Hormone Experiments in Children

5) The Height of Love


Treatments for Depression
Name: Flicka Mic
Date: 2003-11-10 17:38:15
Link to this Comment: 7185


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Clinical depression is a disease that involves feelings of sadness lasting for longer than two weeks and is often accompanied by a loss of interest in life, hopelessness, and decreased energy. (3) Depression affects 340 million people in the world today. One in every 4 women and one in every 10 men develop depression during their lifetime. About half the cases of depression go untreated, and about 10 to 15 percent of all depressed people commit suicide. (4) There are many different types of depression, including major depression, Bipolar Disorder, Dysthymia, and Seasonal Affective Disorder (SAD), and the severity of depression ranges from mild to major. (3) There are various ways to treat depression, but what most people do not know is that depression is one of the most treatable mental illnesses.

There are a variety of drugs called antidepressants which help to increase certain neurotransmitters in the brain. There are also various types of counseling, psychotherapy, self-help techniques, and alternative therapies to help a person overcome depression. In many cases, doctors combine different forms of therapies and treatments to produce the best result in depression cases. (1) The most widely used treatment today is antidepressants. Antidepressants are usually divided into three categories: Selective Serotonin Reuptake Inhibitors (SSRIs), Tricyclic Antidepressants (TCAs), and Monoamine Oxidase Inhibitors (MAOIs). (1) SSRIs raise the level of serotonin in the brain because low levels of this neurotransmitter have been connected to depression. TCAs increase the level of norepinephrine in the brain. MAOIs increase the levels of epinephrine, norepinephrine, and serotonin in the brain. (1)

Another form of treatment for depression is psychotherapy. In some cases, psychotherapy can be just as effective as medication and antidepressants. There are many types of psychotherapy such as psychiatry, counseling, or cognitive behavior therapy. (2) Counseling is used to help a person understand his or her feelings and to help them deal with these feelings. Counseling can be different depending on each person's needs. There is individual counseling, group therapy, or therapy with a partner or relation. Counselors ask patients where these feelings came from, what brought them on, and together they explore options on how to overcome the feelings of sadness. Ultimately, counseling is a way for people to take a more positive, satisfying view on their own lives.(2)

There is also cognitive behavior therapy, which recognizes that a patient's self-criticism and self-disparaging thoughts are what lead them to depression. Cognitive therapy seeks to correct these negative thoughts and replace them with positive ones in order to overcome feelings of hopelessness and dejection. (5) Scientific studies of cognitive behavior therapy show promising results. However, recent research shows that pharmacotherapy works better than cognitive behavior therapy in moderately to severely depressed patients. (5) Critics of pharmacotherapy claim that antidepressants remove the feelings of depression in a person but fail to address the underlying problems behind those feelings. However, supporters of pharmacotherapy claim that the negative thoughts of a depressed person are the result of major depression and not the cause. (5)

There are also over-the-counter remedies available to depressed people, such as St. John's Wort and S-adenosylmethionine (SAM-e). St. John's Wort is an herbal remedy which has been reported to be effective in many depression cases in Europe. It is only used to treat mild to moderate cases of depression and cannot be used to treat any form of major depression. (1) It has a few side effects and cannot be taken with other forms of prescription antidepressants. SAM-e is a natural component of living cells whose supplement formula is derived from yeast. It is said to have very few side effects, but its effectiveness is controversial. (3)

Electroconvulsive therapy is an older form of therapy, developed in the 1930s to treat a large variety of mental disorders. It sends tiny electric charges into the brain to stimulate the brain's neurotransmitters. It is less frequently used now because of the permanent side effects, which include sleep problems, memory loss, confusion, and sometimes brain damage. (1) However, for patients with severe depression or who do not respond to other forms of medication, electroconvulsive therapy is still an option. Phototherapy is another alternative for patients with Seasonal Affective Disorder (SAD). Since these patients lack exposure to natural sunlight, phototherapy substitutes bright artificial light entering the patient's eyes. A light box is set up with full-spectrum bulbs producing between 2,000 and 10,000 lux, and the patient sits within 3 feet of the box for 30 to 60 minutes every morning. (3)

Another, less intense treatment for depression is activity and exercise. Doctors believe that regular aerobic exercise stimulates the body's natural release of endorphins, chemicals that make you feel happy and self-satisfied. Walking, running, bicycling, swimming, or any other activity done for 20 minutes at least 3 times a week has been shown to improve a person's confidence. Sometimes, becoming physically fit can have beneficial effects on a person's emotional well-being. (3)

While each therapy works differently for each person, most doctors suggest a combination of therapies to overcome depression. Most patients take a combination of antidepressant medication and psychotherapy. A research study was done by doctors from the University of Nevada School of Medicine and Reno Veterans and a psychologist from the Cleveland Clinic Foundation. (6) They studied the effects of different treatments on depression and came to this conclusion: "Psychotherapy, notably cognitive–behavioral intervention or interpersonal psychotherapy, should be considered the treatment of first choice for depression primarily because of superior long-term outcome and fewer medical risks than drugs or combined treatment..." (6) The authors also said, "If antidepressants are used, psychotherapy should be included because of the higher risk for relapse with medication alone." (6)

In conclusion, there are many ways to treat depression depending on the form or severity of the disease. However, the combination of medication and psychotherapy seems to be the most widely recommended treatment. Other alternatives such as St. John's Wort and phototherapy work for mild cases of depression, while electroconvulsive therapy can be effective in cases of major or severe depression. However, doctors must be careful in prescribing the correct medication for the patient because of the many dangerous side effects associated with them. Maybe one day there will be a cure for depression, just as we are searching for a cure for cancer or AIDS, and then people will not have to deal with this disease that causes them to lose 10 percent of their productive years. (4)

References

1)Depression Treatment and Help

2)50+Health-Home/Treatments for Depression

3)Other Treatments for Depression

4)Depression- Net, Info on Depression

5)Major Depressive Disorder: Treatment

6)Depression Treatment


Fighting More Than the Blues: A Look into Depress
Name: Paula Arbo
Date: 2003-11-10 22:31:19
Link to this Comment: 7186


<mytitle>

Biology 103
2003 Second Paper
On Serendip


Fighting More Than the Blues: A Look into Depressive Disorders

This paper will focus on depressive disorders, and it will describe what they are, how they manifest themselves, what causes them and/or what makes certain individuals susceptible to the disorder as compared to others. This piece will also describe the most common treatment practices, and the effectiveness of these treatments. It will conclude by offering some testimonials from individuals who suffer from depressive disorders as well as some additional commentary about depressive disorders and their implications/challenges.

What is depression?
A depressive disorder is an illness that involves the body, mood, and thoughts. It affects the way a person eats and sleeps, the way one feels about oneself, and the way one thinks about things. It is not a sign of personal weakness or a condition that can be willed or wished away. A depressive disorder is exactly that—a disorder; therefore, people with a depressive illness cannot simply will themselves to get better or pull themselves together. A depressive disorder requires treatment. (1)
1) Depression, National Institute of Mental Health.

Are there different types of depressive disorders?
Depressive disorders take on different forms. There are three common types of depressive disorder: major depression, dysthymia, and bipolar disorder. Major depression is characterized by a combination of symptoms that interfere with an individual's ability to work, study, sleep, and eat. Symptoms include but are not limited to the following: persistent sad, anxious, or empty mood; feelings of hopelessness, guilt, helplessness, or worthlessness; decreased energy; fatigue; appetite and/or weight loss, or overeating and weight gain; and thoughts of death or suicide, among others. A major depressive episode may occur only once, but it may also recur several times in a lifetime. Dysthymia, on the other hand, is a less severe type of depression, characterized by long-term, chronic symptoms that prevent one from functioning well or feeling good, but that are not disabling. People who suffer from dysthymia can experience major depressive episodes at some point in their lives. Another type of depressive disorder is bipolar disorder, which is characterized by cycling mood changes—severe highs (mania) and lows (depression). Mood changes are usually gradual in people who suffer from bipolar disorder, but they can also be dramatic and rapid. Mania is said to affect thinking, judgment, and social behavior. People who suffer from depressive disorders do not experience every symptom; some experience many symptoms, others only a few. Their intensity and severity vary from person to person. (2)
2) Depression: An Overview, 1-2.

What are the causes of depressive disorders?
It cannot be stated with much certainty what causes depression, or what makes one individual more susceptible to depression than another; however, some evidence indicates that depressed people have imbalances in the brain's neurotransmitters, the chemicals that allow communication between nerve cells. Serotonin and norepinephrine are two neurotransmitters whose low levels are thought to play an important role. At the same time, the importance of environmental factors cannot be dismissed, and when environmental factors are combined with a biochemical or genetic disposition, life stressors may cause the disease to manifest itself. Substance abuse and side effects from prescription medication may lead to depressive disorders as well. (3)
3) Dealing with the Depths of Depression, 3-4.

How is depression treated?
One of the major approaches for treating depression is the use of anti-depressant medications. The effects of anti-depressants on the brain are not yet fully understood, but it is believed that they restore the brain's chemical balance. Anti-depressant medications can control depressive symptoms in four to eight weeks. At the same time, different drugs work in different ways for different people; hence, it is difficult to predict how people will respond to anti-depressant medications and their side effects. (4) People with milder forms of depression may respond favorably to psychotherapy; however, it is common for people with moderate to severe depression to benefit from both the use of anti-depressants and psychotherapy. Lastly, electroconvulsive therapy (ECT) is used by people who suffer from severe depression, whose depression is life threatening, or who cannot take anti-depressant medications. ECT is most effective where anti-depressants cannot provide sufficient relief of symptoms. In order for ECT to be effective, several sessions are necessary, usually three per week. (5)

Testimonials
The purpose of this section is to give voice to sufferers of depression: an opportunity to express what depression feels like and the challenges that treatments, or their failure, pose to their lives, goals, and personal sustainability; a space to express frustration but also hope.

Every minute feels like a week when I'm waiting to see if something will work.
It's like the worst migraine of your life, and it seems like it will never go away.
Nothing disturbs me more than when someone tries to describe something as complicated as a mental state with something as simplistic as serotonin levels.
I still have people tell me that I should cheer up because I have nothing to be depressed about, as if I had a choice in the matter.
So many people have worked hard to make me well, so I'm trying my damnedest to cope with this. (6)
4) Melancholy Nation (Schrof and Schultz), 1-8

Final Comments
Depressive disorders and the people who suffer from them must be better understood before treatment can be effective. It is also society's responsibility to better educate its citizens about this disorder, not only to dispense more accurate information, but also to dispel myths and misinformation and, ultimately, to help lessen the stigma that goes along with suffering from depression and being treated for it. Depression is not about dusting yourself off and trying again; it is not a character flaw; and it cannot be "cured" by popping a Prozac, a Tegretol, or a Wellbutrin. It is in recognizing that depression, like many other disorders, is not so concrete, is not something that can simply be pointed to, identified, and dealt with, that we, as individuals who suffer from depression and as people who know people who do, can begin to educate and inform others about the realities of this disorder.


Not just your kid's problem: Adult ADHD
Name: Romina Gom
Date: 2003-11-10 22:48:11
Link to this Comment: 7187


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Attention Deficit/Hyperactivity Disorder (ADHD). Everyone has heard of it. A few years ago every newspaper and weekly magazine had a feature about the disorder. The disorder was mostly associated with school-aged children because that was when most of the symptoms surfaced. Today ADHD is the most common behavior disorder diagnosed in children and teens. ADHD refers to a group of symptoms that begin in early childhood and can continue into adulthood, causing difficulties at home, at school, at work, and within the community if not recognized and treated (1). But what most people never hear is that ADHD also affects adults and, if left untreated, can have serious effects.

ADHD is a condition that makes it difficult for children and adults to pay attention, control their activity level, and limit their behavior in age-appropriate ways (2). Its core symptoms are inattention, impulsiveness, and hyperactivity. Inattention is the most common. In addition to having difficulty paying attention, people with this symptom often are unable to consistently focus, remember, and organize. They may be careless and have a hard time starting and completing tasks that are boring, repetitive, or challenging. With impulsivity, people who frequently act before thinking may not make sound judgments or solve problems well. They may also have trouble developing and maintaining personal relationships. An adult may not keep the same job for long or spend money wisely. A hyperactive child may squirm, fidget, and climb or run when it is not appropriate. These children often have difficulty playing with others. They may talk a great deal and not be able to sit still for even a short time. Teenagers and adults who are hyperactive don't usually have the more obvious physical behaviors seen in children. Rather, they often feel restless and fidgety, and are not able to enjoy reading or other quiet activities.

There are a couple of reasons why it is more difficult to diagnose an adult with ADHD than it is to identify a child with the same problem. One is that there is no single definitive test for ADHD; instead there is a series of evaluations that must be done to rule out other problems (2). The American Psychiatric Association describes the symptoms and criteria for diagnosing mental disorders in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). Information used to diagnose the condition includes: an interview with the person; a medical history, including the person's social, emotional, educational, and behavioral history; a physical exam; and behavior rating scales or checklists for ADHD to evaluate the person's symptoms.

In the past these questions were geared toward children. Now researchers have realized that as a child with ADHD matures, the symptoms evolve so that they may be harder to detect. Adults with ADHD usually develop coping mechanisms that compensate for their problems, so specific tools have been developed to aid in diagnosing adults. For example, physical hyperactivity may evolve into excessive talking or foot tapping. Impulsivity can be expressed as having a very short temper, quitting jobs or ending personal relationships suddenly, and being prone to emotional outbursts. Inattention leads to poor time management, difficulty finishing tasks, and a tendency to miss deadlines and other important details at work, at home, and in social settings. All these symptoms can create enormous stress for adults with ADHD and their families. Unfortunately, this evolution of symptoms from childhood to adulthood isn't reflected well in the DSM-IV criteria. As a result, some experts believe the apparent remission of ADHD symptoms in many adolescents and adults is due primarily to the limitations of current diagnostic criteria, and the criteria may need to be modified for use in adults. With adults, special care must be taken in the diagnostic process to distinguish between ADHD and other psychological disorders and/or other life stressors (2).

It is now known that 60% of children with ADHD continue to struggle with the disorder into their adult life. In past years ADHD was very hard to diagnose among adults; it is estimated that 2.5% of the adult population have ADHD (3), yet only an estimated 1.5% of these adults are being treated for it. There is no cure for ADHD, but there is treatment, ranging from therapy to medication.

Adults with untreated ADHD have been found to be more likely to have a substance abuse problem, which makes some people wary of taking a controlled substance such as Ritalin, which can become addictive when used incorrectly. However, when taken properly, medication can prove very helpful, allowing a person to concentrate on activities and be more productive. Therapy is especially helpful for adults in relationships because it allows them to bridge gaps in communication.


References

1)WebMD, Topics and Overview of ADHD

2)ADHD in Adults

3)Ruhrold, Richard, PhD Adult ADHD


The Science of Our Laws
Name: La Toiya L
Date: 2003-11-11 03:38:58
Link to this Comment: 7194


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Science and the Judicial System are two concepts that at face value seem to be very distinct and unique in their own nature, but at their cores share interesting parallels. They each propose a different way of understanding how we comprehend and organize order and structure within institutions, yet they do so with similar strategies. In this paper I'll address my understanding of both, what characteristics they share and how these similarities prove them to be inextricably connected by what we call life and its connection to the human experience.

Although science is largely composed of observation, experiments, and their results, it is often controversial because perspective and experience play a key role in how data is interpreted. And because perspective and experience undoubtedly vary from person to person for many reasons, how is it possible to assign concrete truths to such a varied conceptualization? Scientists fuse logic and philosophy.

Traditional science often fails to provide theories and explanations for phenomena that hold truth and validity both in a scientific context and in the context of the human mind. I feel that science often caters to only a "black and white" way of formulating answers, failing to recognize the gray areas. Oftentimes people try to find the most common and accepted ways to support their theories, and in doing so they adapt to the standard and more traditional ways of viewing the world. This leaves less room for creativity and exploration of the mind when trying to formulate "truth". As Roger Newton puts it, "A body of assertions is true if it forms a coherent whole and works both in the external world and in our minds" (1).

Much like science, the justice system in this country is very much based on experience. Although the understanding of laws is largely built on formal education, logic, and reasoning, there is more to law than these solid and concrete aspects. Experience plays a key role: before obtaining any form of judicial authority, one must practice and "get a feel" for what the position entails. Through these experiences one acquires the personal, first-hand knowledge that is necessary before venturing out into his or her field. The judicial system poses a problem similar to that of traditional science. I believe the laws in our justice system are far too clear-cut. There are many gray areas when it comes to crimes committed, political decision-making, and societal issues. I feel our constitution, on which our laws are based, is too limited, and this poses a problem because many of the pressing issues in our society, such as abortion and gun control, lie on the borderline between right and wrong. It is hard to come to a resolution because of the strict and limited language of our laws, and also because there is more to these problems than laws; they involve emotions, perceptions, culture, and perspectives, none of which are taken into consideration in legislation.

The Pro-Life versus Pro-Choice debate is controversial and complex because there are so many ways to examine the issue, all of which have valid points depending on the light under which you view it. Abortion is both a societal issue and a political issue. It involves high sensitivity because of its direct connection to our emotions and personal values. Politics and laws also play a major role in this debate because so many of them have been passed concerning this issue. The government on many levels is dealing with the issue of abortion: the courts, federalism, judicial review, and the separation of powers are all involved. In 1973 the Supreme Court declared abortion a constitutional right (2). Scientists have declared the fetus a living thing, and it is clearly illegal by law to kill another human being, yet it is perfectly legal to have an abortion. When this issue is examined thoroughly, one can see how controversies arise and stay in debate. So this case really depends on how one looks at it. This poses a problem because agreement and middle ground are almost impossible to reach, since people, specifically those with strong opinions on the issue, can see the credibility only of their own values and positions. Thus, the choice is highly dependent on personal perspective, morals, and experience. Although constitutional law governs the issue of abortion, science clearly plays a role of equal importance and authority.

Gun control is deeply rooted in controversy and is the epitome of a gray area in dealing with right and wrong. There are two conflicting sides: those in favor of gun regulation and those against it. It is an issue for our nation as a whole, but it stems from the division of this country's mixed cultures. Those who have grown up in a culture where hunting is a family and cultural tradition are strongly against gun control, while people who did not grow up with hunting as a sport do not see the same value. This conflict is rooted not only in values but also in politics. The respective sums of experiences on both sides are the reasoning behind their positions on the issue. Science and the judicial system both produce gray areas when trying to understand and rationalize, and both are inextricably connected to life. Holmes convinced people through his work and writings that the law should develop along with the society it serves. If this is true, then law should always be changing, because society is constantly changing with time and experience. "The life of the law has not been logic: it has been experience" (Oliver Wendell Holmes). We systematically try to put life in a box to create order; order ensures comfort, but that comfort often gets in the way of open-mindedness. The human mind by itself is a convoluted, vast universe. We as scholars, scientists, and humankind need to understand that by assigning concrete truths, rights or wrongs, we are limiting the extent of our intellectual capacities.

References


1) Roger Newton,

2)Supreme court rules abortion a right,

3)Oliver Wendell 1,

4)Oliver Wendell 2,

5)Oliver Wendell 3,


test
Name: Paul Grobstein
Date: 2003-11-11 15:03:33
Link to this Comment: 7199


<mytitle>

Biology 103
2003 Second Paper
On Serendip

YOUR TEXT. REMEMBER TO SEPARATE PARAGRAPHS WITH A BLANK LINE (OR WITH

, BUT NOT BOTH). REFERENCES (IF ANY) GO IN A NUMBERED LIST AT THE END (SEE BELOW). TO CITE A REFERENCE IN THE TEXT, USE THE FOLLOWING AT EACH NEEDED LOCATION: (YOUR REFERENCE NUMBER).

References

SUCCESSIVE REFERENCES, LIKE PARAGRAPHS, SHOULD BE SEPARATED BY BLANK LINES (OR WITH

, BUT NOT BOTH)

FOR WEB REFERENCES USE THE FOLLOWING, REPEATING AS NECESSARY

REFERENCE NUMBER)NAME OF YOUR FIRST WEB REFERENCE SITE, COMMENTS ABOUT IT

FOR NON-WEB REFERENCES USE THE FOLLOWING, REPEATING AS NECESSARY

REFERENCE NUMBER) STANDARD PRINT REFERENCE FORMAT



Yawning: It Isn't About Oxygen Anymore
Name: Abigail Fr
Date: 2003-11-11 19:55:07
Link to this Comment: 7213


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Have you ever wondered why yawns are contagious? Have you ever been in class, seen someone across the room yawn, and found yourself following along? Have you ever been reading a book and, upon coming across a yawning character, been moved to stretch out your own face muscles? Most likely these things have happened to almost everyone more times than they can remember. I cannot tell you how many times I have yawned in the process of researching and writing this paper. A friend of mine began to yawn uncontrollably as I discussed my ideas with her.

Yawning is a phenomenon that occurs for most people many times a day, yet it is not one that has been studied extensively by researchers. This is unfortunate, because the more I read about yawning and thought about the number of situations in which it occurs, the more eager I became to better understand what is behind humans' tendency to yawn.

At first, one might see yawning as a silly phenomenon to spend time studying because, well, it is just what happens when we are tired; but it is more complicated than that. We yawn when we are tired, but also when we wake up, when we are bored, and even simply because we see others doing it. When one delves into the unknown of what causes a yawn, he or she will become intrigued by how mysterious the occurrence is and surprised by how little we know about it. The following will discuss the many theories that have been put forward regarding the phenomenon and its contagious qualities and explore the implications of and problems with these various theories.

There exist both theories as to why we yawn and theories as to why yawns are contagious. Let us first look into why we yawn. The theory that has long been thought to explain yawning, and the one that has often been used in medical textbooks, is that we yawn due to low oxygen levels in the lungs (1). When we are in a resting state, we make use of a very small percentage of our lungs' capacity and are only using the air sacs, or alveoli, in the bottom of our lungs (1). The alveoli partially collapse when the air sacs stop receiving fresh air, cueing our brains to induce a yawn (1). This theory has been largely cast aside, however, because our lungs do not necessarily detect oxygen levels (8).

An interesting contrast to the low-oxygen theory is that some observations suggest that fetuses in the womb yawn. Doctors have observed fetal yawning in utero at twenty weeks gestation and noted a 'fetal yawning movement' (7). Mouths opened widely in a manner resembling a yawn, with qualities quite different from those of a brief moment of swallowing, and remained open for around two minutes (7). These observations do not support the oxygen theory because fetuses in utero do not yet have ventilated lungs (8). Other doctors have responded to these observations in the New England Journal of Medicine, saying that "there is too much of a range of variation in the observations and that there is a discrepancy in the use of the anatomical criterion of retraction of the tongue to characterize the fetal yawn, whereas in yawning adults, the tongue is extended" (7).

One interesting study on the cause of yawning hypothesized that, "contagious yawning occurs as a result of a theory of mind, the ability to infer or empathize with what others want, know, or intend to do. Seeing or hearing about another person yawn may tap a primitive neurological substrate responsible for self-awareness and empathic modeling which produces a corresponding response in oneself" (2). Researchers tested this hypothesis by observing individuals that exhibited schizotypal personality traits. They felt that those traits would inhibit a person's ability to process information about the self and would therefore lower their tendency to yawn contagiously (2). Their lowered ability to identify with another's state of mind would prevent them from 'catching the yawns' as a result of empathizing with someone seen yawning. The researchers' findings were consistent with their hypothesis and could aid in explaining why schizophrenics rarely catch the yawns (3). Another experiment conducted at New York State University's Department of Psychology declared similar findings stating, "We have also shown that individuals who score higher on schizotypal personality traits are less likely to show contagious yawning because of a fundamental impairment of self-processing" (9).

There are evolutionary theories for yawning. Robert Provine, professor of psychology at the University of Maryland Baltimore County, suggests that yawning is about "transitions in the body's biology" (5). This theory might support observations that suggest fetal yawning. Perhaps it aids in maintaining the balance of amniotic fluid. Provine goes on to say that yawning can occur not only when transitioning from a state of alertness to a state of sleepiness, but also from a transition from sleepiness to alertness (5). He makes the point that, "at track and field events, sometimes you'll find participants in the race of their life will be standing around on the sidelines or in the starting block and they may be yawning. Or before a concert, a musician may yawn to prepare for an increasingly energized state" (5). The evolutionary theory behind this is that yawning is a result of synchronizing behavior based on these changing states of alertness (5). The changes in your body that "are brought about by yawning are synchronized in everyone that's doing it" (5). The associate professor of physiology at the Lake Erie College of Osteopathic Medicine suggested a similar theory, stating that, "the contagious nature of yawning is most likely a means of communication within groups of animals, possibly as a means to synchronize behavior; therefore in humans it is most likely vestigial and an evolutionarily ancient mechanism that has lost its significance" (8). Just as our teeth have gotten smaller as we have evolved, so has the significance and meaning of the yawn.

Continuing along the lines of evolution, one might consider the yawn in terms of a link to our "furrier days" (6). At that point in human evolution, we would yawn to show our teeth, which is why zoologists speculate animals yawn. According to this theory, when someone near us yawns, our "subconscious Neanderthal responds to the 'aggressive challenge' with an I-have-big-teeth-too yawn" (6). As the writer of the article that discusses this theory agrees and teases, the 'furry days theory' seems fairly far-fetched. It makes sense that animals show their teeth to intimidate other animals, but should we call that a yawn or equate it with the human act of yawning? It is hard to imagine that we are subconsciously putting forth an "aggressive challenge." Consider throwing the theory of evolution out the window. We cannot be certain that humans have evolved from monkeys. If, in fact, we have not, how might we explain the yawning phenomenon? I would like to suggest the two theories that seem to me to bear the most significant and convincing evidence for why we yawn and why yawns are contagious.

There have been so many theories, and just as one of them starts to become convincing, a different discussion presents information suggesting that the theory does not make sense. Despite these discrepancies, we have to begin to make sense of the act somewhere. Yawning is undeniably a contagious phenomenon, and I am convinced it is more than the result of evolutionary adjustments that have left it an act with little meaning or significance for our function as humans. The idea of yawning being about transitions in the body's biology is my "first pick" as an explanation for yawning and the one into which I am inclined to look further. Through some discussions with my friends, it became clear that they not only yawn when they are tired, but many of them also yawn when they wake up in the morning. Further, I have been surprised to find myself yawning on numerous occasions when I was not feeling any hint of sleepiness; perhaps it occurred at moments when something was happening for which I needed to be more alert than normal, and my body therefore responded with a yawn.

The theory of mind suggestion is my "second pick" and the experiment described earlier carried some very intriguing implications. It was interesting to think about the phenomenon in terms of personality traits that might lessen a person's chance to experience the same phenomenon. The fact that a schizophrenic patient does not 'catch the yawns' as easily as someone without the disease because of their difficulty in identifying with another's state of mind is convincing information in support of the theory of mind concept. Humans are very receptive organisms that respond quickly to the feelings and emotions of the individuals around them. We often find ourselves being very connected to those that we are close to in terms of interpreting moods and finding ourselves saying the same things at the same time or finishing sentences. These human tendencies convince me that a theory of mind that explains yawning is a very likely one. Our capabilities to connect with others on personal, intellectual, and subconscious levels are qualities of the human experience that are very difficult to explain, but that exist nonetheless. These characteristics are what make us such complex organisms.


References

1) NBC News Health , Theory on why we yawn.

2) Good study on schizotypal patients

3)Nature: Science Update, Links self-awareness and yawning

4)National Library of Medicine, The Neuropharmacology of Yawning

5)NBC News, Yawning and its Contagious Tendencies

6)Island Scene Online, Speaks on why a yawn can be more contagious than the flu

7)Fetal yawning in utero, Addresses observations of fetal yawning at 20 weeks

8)Scientific American, Addresses why we yawn and why yawning is contagious

9)Article in press at www.elsevier.com, Discusses the impact of schizotypal personality traits


The Kohen Gene
Name: Talia Libe
Date: 2003-11-11 22:59:46
Link to this Comment: 7218


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Talia Liben
November 10, 2003
Biology 103
Prof. Grobstein

The Kohen Gene

In a world where Jews have assimilated so much into other cultures, is it possible to trace the lineage of an elite group of Jewish men all the way back to a man who lived three thousand five hundred years ago? According to Karl Skorecki, a scientist at the Israel Institute of Technology in Haifa, and Michael Hammer, a geneticist from the University of Arizona at Tucson, the possibility is alive (1).

In Jewish tradition, as written in the Hebrew Bible, the Children of Israel were split into three groups. The Kohanim (the singular is simply Kohen) were the priests. The first Kohen was Moses' brother, Aaron, and all Kohanim since then are said to be descendants of Aaron. The second group was the Levis, of which Moses himself was a part, and the third group was composed of the remaining eleven tribes (of which ten are said to be "lost"), simply called the Israelites.

Since the Kohanim were the priests among the Jewish people, their duties were the holiest and most important. They were in charge of the sacrifices brought to the Temple, and thus had the most intimate relationship with God, aside from the prophets such as Moses. After the destruction of the Second Temple in 70 C.E., and thus the cessation of sacrificial offerings, the role of the priests became ceremonial. However, despite the fact that their strict duties do not apply today, all Kohanim, according to Jewish tradition, must still obey many commandments that pertain directly to them. The hope is that one day a new Temple will be built, and their service will be required once again (1).

According to Jewish tradition, the role of each individual (Kohen, Levi, or Israelite) is passed down patrilineally from father to son. In traditional and orthodox Judaism, a woman is known as "the daughter of a Levi" (if her father is a Levi) until she marries, and then she is "the wife of a Levi." So, the concept of a "Kohen gene" can only pertain to Jewish men who have not converted into the faith (1).

A gene is a sequence of DNA that is used by cells to create protein; it contains all of the information needed to make a protein, including where to begin and end. The functions of a cell are then carried out by the proteins. When someone speaks about a gene for eye color, he is talking about the gene that codes for the protein that produces the pigment of the eye (2). A gene is a "functional and physical unit of heredity passed from parent to offspring" (3).

Since genes are hereditary, the question among certain scientists, and many Jews, became an interesting one: whether it is possible to have a gene that has marked Kohanim throughout the centuries. In January of 1997, after working on the project for over four years, Hammer and Skorecki found that the priestly lineage can indeed be traced all the way back to Aaron, the first High Priest of Israel.

Jews are also split by another category, aside from tribal divisions. Those born in, or whose ancestors came from, Europe are called Ashkenazi Jews. Those from Spanish and Middle Eastern countries are called Sephardi Jews. The researchers discovered that the gene was found in Jewish Kohanim of both Sephardi and Ashkenazi lineage (4).

The two scientists tested genetic samples from the inside of the cheeks of unrelated Jewish men. These men came from three different places: Israel, North America, and Britain (3). There were 188 men in the original testing who believed they were descendants of the line of Kohanim. The majority of these men had genetic markers that differed from those of the men who did not believe themselves to be Kohanim. The Y-chromosome YAP, DYS19B haplotype is passed down from father to son, and it is the genetic marker that was found in 98.5% of the Kohanim (4).
There are several implications of these findings. Besides being biological evidence of events and traditions that have remained alive for 3,500 years, Michael Hammer says, "it's a beautiful example of how father to son transmission of two things, one genetic, one cultural, gives you the same picture" (5). It also shows that the wives of these Kohanim remained extremely faithful to their husbands: more than 90% of the Kohanim ultimately tested share the same genetic markers, and Dr. David Goldstein said that "even a low rate of infidelity would have dramatically lowered the percentage" (6).
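
Goldstein's point can be made concrete with a little arithmetic: if the marker passes strictly from father to son, any per-generation rate of breaks in the paternal line compounds geometrically over the roughly 140 generations in 3,500 years. The sketch below is illustrative only; the break rates and the 25-year generation time are assumptions of mine, not figures from the study.

```python
# Illustrative sketch (my assumptions, not figures from the study):
# a per-generation rate r of breaks in a strictly paternal line
# compounds geometrically, so retention falls as (1 - r)^generations.

def marker_retention(rate_per_generation: float, generations: int) -> float:
    """Fraction of a strictly paternal line still carrying the marker."""
    return (1.0 - rate_per_generation) ** generations

YEARS = 3500
GENERATION_TIME = 25                      # assumed years per generation
GENERATIONS = YEARS // GENERATION_TIME    # about 140 generations

for r in (0.005, 0.01, 0.02):
    kept = marker_retention(r, GENERATIONS)
    print(f"break rate {r:.1%} per generation -> about {kept:.0%} retain the marker")
```

Under these assumptions, even a 1% per-generation rate would leave only about a quarter of the line carrying the marker after 3,500 years, far below the more than 90% actually observed, which is the force of Goldstein's remark.
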
The prevalence of this gene in Kohanim is convincing evidence of the existence of the Temple and the priests as described in the Bible. An estimated 5% of the Jewish males in the world (as of 1997, when the study was done) have the Y-chromosome gene, and are therefore Kohanim. The fact that the gene was found in men of both Ashkenazi and Sephardi lineage is interesting, because it means that the priesthood is older than the division of the Jewish people into these two groups, a division that transpired during the Middle Ages, over a thousand years ago (4). This also refutes the suggestion that Ashkenazi Jews did not descend from the ancient Hebrews but were part of a Turkish-Asian empire, said to have converted en masse to Judaism before the tenth century (7).

Using the tools available, researchers are now searching for the "ten lost tribes" who were uprooted from what is now the State of Israel by the Assyrians. DNA can now be used to "discover historical links to the Jewish people" (7). In an attempt to learn whether or not they are truly Kohanim, many Jews have tried to get tested for the common Y-chromosome marker.

Is it possible that the preponderance of this genetic marker among self-identified Kohanim is a fluke? The Levis are also a tribal group whose status is passed down from father to son, yet when Levis were tested for a common gene, no genetic marker was found. Is it possible that these scientific findings could cause rifts among nations or religions? Can biological proof of an age-old tradition truly exist without a doubt? And does it really matter?

Works Cited


1) Cohen, Debra Nussbaum, "Kohen Gene Pioneers Fear Misuse," 1997
2) YourGenome.org
3) HyperDictionary.com
4) Jeffrey, Grant, "A Genetic Trace is Found Linking Kohanim Worldwide"
5) Hammer, Professor Michael, New York Times, 1997
6) Goldstein, Dr. David, Oxford University Science News, 1998
7) Kleiman, Rabbi Yaakov, "The DNA Chain of Tradition: The Discovery of the 'Cohen Gene'"
8) Jewish Post, "Scientists Discover Chromosome Similarity of Jewish Priests," 1997
9) AccessExcellence.org


Catch A Yawn
Name: HoKyung Mi
Date: 2003-11-11 23:52:27
Link to this Comment: 7220


<mytitle>

Biology 103
2003 Second Paper
On Serendip

A trick in every girl's handbook: If you want to know if someone is checking you out, yawn and see who, if anyone, yawns back. While we may use the contagious phenomenon of yawning to our advantage, the age-old question still lingers: why, in fact, is yawning contagious? Plausible explanations range from historic origins to muscular requirements. However, one answer that encompasses all other questions about the cause and traits of yawning has yet to be found.

First, let's tackle the question of why we yawn. An evolutionary/psychological theory claims that yawning was once used as a non-verbal form of communication to synchronize group behavior among animals (9). For example, the leader of a pack of wolves would yawn to set a certain mood or signal a change of activity. Humans, also being group-oriented animals, may have retained this form of communication. In the same way that one pumped-up team member can influence the level of aggression and team spirit of an entire team, one yawning client can also affect the mood of a sales-pitch meeting. Another good example of synchronization among humans: if a group is sitting around a campfire and the leader yawns, it will most likely act as a signal to the others that it may be time to call it a night.

Yawning is commonly perceived to be a sign of boredom or tiredness. Dr. Robert Provine, known as the yawn expert at the University of Maryland, performed a study on 17- to 19-year-old students to test this perception. Compared with a group of students who watched music videos for 30 minutes, a group who watched an uninteresting color test-bar pattern for 30 minutes yawned more (10). Dr. Provine also suggested that yawning is like stretching (5): much like stretching, yawning alone can increase blood pressure and heart rate. Perhaps animals yawn instinctively when bored or tired to get their blood pumping, so that they are physically stimulated to move or seek a new activity. But then why is it that we yawn after waking up? If we yawn after waking as a physical prompt to become active, that is one thing; but yawning as a sign of tiredness can be ruled out if we yawn after waking from a restful sleep. Perhaps a study could compare hours of sleep with the frequency of yawning upon waking.
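The comparison proposed here can be sketched in a few lines. The data below are entirely invented for illustration (no such study is cited in the paper); the point is only to show what the analysis would look like: if tiredness drives waking yawns, hours slept and yawns upon waking should be negatively correlated.

```python
# Hypothetical sketch of the proposed study: correlate hours of sleep with
# yawns observed in the period just after waking. All numbers are invented.
hours_slept = [4, 5, 6, 6.5, 7, 7.5, 8, 9]
yawns_on_waking = [6, 5, 5, 4, 3, 3, 2, 2]

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson_r(hours_slept, yawns_on_waking):.2f}")
```

With this made-up data the correlation comes out strongly negative, which is the pattern the tiredness hypothesis would predict; a real study would of course need actual observations and a significance test.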

Above all, the most widely known explanation for yawning is that our lungs need more oxygen. Medical schools teach that animals do not use their entire lung capacity when breathing normally. Because the alveoli, air sacs in the lower portion of the lungs, partially collapse when they do not receive fresh air, the brain is thought to signal the body to yawn or sigh to take in more oxygen (1). Along the same lines, when a person is bored, his or her breathing slows and less oxygen is brought into the lungs.

However, I disagree with this lack-of-O2 theory for several reasons. First, Dr. Provine set up another experiment in 1987 to test the theory that yawning is caused by high CO2/low O2. Results showed that neither the number nor the length of the subjects' yawns was affected by breathing 100% O2 for 30 minutes (1). This study therefore acts as evidence that a lack of O2 is not the cause of yawning. Further undermining the O2 theory is the fact that an 11-week-old fetus can yawn. Since fetuses do not draw oxygen through their lungs, they have no reason to compensate for a lack of oxygen in their alveoli. Additionally, just watching, hearing, or even reading about yawning can cause a person to yawn. Even if a person on TV is in a room with low levels of oxygen, say, the viewer will still yawn regardless of the oxygen level in his or her own room.

This brings us to the next question: why is yawning contagious? Some have agreed with Walter Smitson, professor of psychiatry at the University of Cincinnati Medical Center, that as social creatures we are highly inclined to copy one another (2). If yawning is indeed related to mood, the suggestion of one person being tired, bored, or lacking oxygen can cue, alert, or suggest to others in the group to feel the same. This can also be linked to the previously suggested role of yawning as a synchronizing behavioral act.

The "copycat" theory can be supported by the similar phenomenon humans find with tearing. When a child sees its mother crying it is natural for it to also cry. Even though it could be attributed to a maternal connection to one's mother, it could also be that as a helpless follower the child takes its mother's tears as a behavioral sign that something is wrong and mimics its leader. But does this then somehow imply that women or females are more sensitive to each other's feelings?

Although not a very strong supporting fact, it is often true that seeing, hearing, or even just reading about someone else's tears of pain or sadness can cause the recipient to tear up as well; granted, this may be more applicable to women, and such "copycat" tearing could be the result of various emotional factors. Or is this an indication that women have stronger behavioral synchronization? After all, many more women than men choose to cry together; female support groups, such as divorcee groups, are more common than male support groups.

Then why is it that men yawn more than women (2)? Dr. George Bubenik of the University of Guelph brings the debate back to the theory of supplying oxygen to the body. He proposes, based on the O2 theory, that men yawn more than women because their larger muscle mass requires more oxygen. From a very general standpoint, and with the minimal data provided, Dr. Bubenik's idea could be valid to a certain degree. If a legitimate connection between muscle mass and yawning can be made, it might lend support to the O2 theory.

Overall, there seems to be more research on the causes of yawning than on its contagiousness. So I did some research of my own by asking others their thoughts on the subject. The best suggestion I gathered was a domino-effect theory: some time long ago, one yawn could have been "caught" by several people and spread like a virus to others, who passed it on to still more people, and the cycle simply never stopped. Therefore, someone is always "catching" someone else's yawn somewhere.

In the end, the answer to why animals yawn or what makes yawning contagious may not be as straightforward as we would all like. But discovering and dissecting an array of answers is what makes animals, oxygen, emotions, behaviors, our bodies, and science so interesting. Whether you yawn because you are signaling your boredom to others or because you are lacking oxygen in your alveoli, drop your jaw, yawn the average six-second yawn, and just try checking who's yawning back.

References

1) MSNBC News, "Why we yawn."

2) Reily, Mary. E-briefing, "A Real Yawner: Causes, Concerns and Communications of the Yawn."

3) Hughes, Ron. Science Shorts, "What Makes You Yawn?"

5) Neuroscience for Kids - Yawning.

6) Wong, A. Scientific American, "Why do we yawn when we are tired? And why does it seem to be contagious?"

7) Argiolas A, Melis. "The neuropharmacology of yawning."

8) Provine, R. PubMed, "Yawning: no effect of 3-5% CO2, 100% O2, and exercise."

9) Raphael, Rebecca. ABC News, "Is Yawning Contagious? Understanding Behaviors That Are Out of Our Control."

10) Dobson, Roger. Yawning Explained, "If you yawn, you're a human dynamo."

11) Provine, R.R. Yawning: effects of stimulus interest. Bulletin Psychonomic Sociology, 24:437-438, 1986.


The Joy Of Laughter
Name: Sarah Kim
Date: 2003-11-12 04:16:39
Link to this Comment: 7222


<The Joy Of Laughter>

Biology 103
2003 Second Paper
On Serendip

Laughter is defined by dictionary.com as "the act of expressing certain emotions, especially mirth or delight, by a series of spontaneous, usually unarticulated sounds often accompanied by corresponding facial and bodily movements."(1) A thesaurus offers an immense number of synonyms for the word "laugh," including giggle, cackle, chortle, snort, chuckle, crow, howl, snicker, snigger, convulse, and titter, and the list goes on.(2) There are so many words to describe laughter because it is such an integral part of our lives. The question of why we laugh may first be answered by looking at laughter in a purely physiological sense, a field of study known as gelotology. Then we can look at the effects of laughter, not just physically but mentally and socially as well. After going over this oft-overlooked background, we can delve into the motivations behind our laughter.

The actual flow of physical effects in the brain after hearing a joke is as follows. First, the left side of the cortex analyzes the words and structure of the joke. Then the brain's large frontal lobe, which has a lot to do with social emotional responses, becomes very active. After this, the right hemisphere of the cortex helps with comprehension of the joke. Then stimulation of the motor sections occurs, producing the physical responses of laughter.(3) The production of laughter also involves particular parts of the brain. For example, the cerebral cortex has been found to show a negative electrical wave as a person laughs, and the hypothalamus has been found to be a main contributor to the production of loud, uncontrollable laughter.

Laughter is made up of a combination of gestures and the production of sound. A smile is created when fifteen facial muscles contract, stimulating the zygomatic major muscle, which lifts your upper lip. When the epiglottis half-closes the larynx, the respiratory system is upset, causing air intake to occur irregularly and making you gasp.(3) In extreme circumstances the tear ducts are activated, so that while the mouth is opening and closing and the struggle for oxygen intake continues, the face becomes moist and often red. Laughs can range in sound from virtually silent to noisy guffaws.

Physically, laughter's overall effect is to stimulate the immune system. The experience of laughter lowers serum cortisol levels. Elevated levels of corticosteroids (which are converted into cortisol in the bloodstream) have an immunosuppressive effect, so lowering those levels helps boost the immune system. Laughter also increases the number of activated T lymphocytes, providing lymphocytes that are "awakened" and ready to combat a potential foreign substance. In addition, it increases the number and activity of natural killer cells, a type of immune cell that attacks viral or cancerous cells without needing sensitization to be lethal; they are always ready to recognize and attack an aberrant or infected cell. An intact immune system can function appropriately by mobilizing these natural killer cells to destroy abnormal cells. (4)

Laughter is also a good cardiovascular workout! Researchers estimate that laughing 100 times is equal to 10 minutes on the rowing machine or 15 minutes on an exercise bike. Laughter also gives your diaphragm and abdominal, respiratory, facial, leg and back muscles a workout. Blood pressure is lowered, and there is an increase in vascular blood flow and in oxygenation of the blood, which further assists healing. (3)

John Morreall, a philosopher at the University of South Florida in Tampa, proposes that the first laughter developed as a sign of shared relief at the evading of some danger.(5) This is simply using laughter to release the tension that was built up from the automatic flight-or-fight response. Laughter truly does make the muscles in a person's body relax. This is a sign of trust in the person's companions. Robert Provine, a behavioural neurobiologist at the University of Maryland, Baltimore County, says that laughter must have evolved as a manipulation technique to change the behaviour of others.(3) In an embarrassing or otherwise threatening situation, laughter may serve as a gesture of appeasement, a way of deflecting anger. If the threatening person joins in the laughter, the conflict may be avoided. Laughter is a way to connect people, to bond. Laughter is a social signal; people are 30 times more likely to laugh in a social situation than when alone.

The psychological benefits of laughter are amazing as well. People often store negative emotions, such as anger, sadness and fear, rather than expressing them. Laughter provides a way for these emotions to be harmlessly released. Laughter is cathartic. Humor can also be used as an empowerment tool. Humor gives us a different perspective on our problems and, with an attitude of detachment, we feel a sense of self-protection and control in our environment.(4) Humor is a quality of perception that enables us to experience joy even when faced with adversity. Our sense of humor gives us the ability to find delight, experience joy, and to release tension. This is extremely important not only for a person under a lot of stress, but also for a person's quality of life in general. Increasingly, mental health professionals are suggesting "laughter therapy," which teaches people how to laugh openly at things that aren't usually funny and to cope in difficult situations by using humor. (3)

There are three traditional theories of what makes us laugh: the incongruity theory, the superiority theory, and the relief theory. The incongruity theory is when a person expects one outcome and another happens. The superiority theory is reflected in jokes pointing out another's mistakes or stupidity. The relief theory is just releasing built up tension through laughter.(3) These are the three main theories of why we laugh, but everybody laughs at different things.

Many things can affect what each individual person finds humorous. Age is a big determining factor in what a person finds humorous. For example, infants and children are constantly discovering the world and much of it seems ridiculous or surprising, which strikes them as funny. As they grow older, adolescents tend to find sex, food, authority figures and subjects that are typically frowned upon as more humorous topics. Adults tend to laugh in shared common predicaments and embarrassments; basically they laugh at stressors. Also, the environment you were raised in has a lot to do with what you find funny. Political, social and economic issues surrounding your upbringing will affect what you find funny and what you find offensive. The country where you live has a lot to do with what you find amusing, too. Overall, there are many determining factors in what motivates laughter in a person.

Can laughter really be quantified? Setting aside the problem that laughter often cannot be produced when it is ready to be observed, especially in a laboratory: electrical signals can be traced through brain activity, every muscle contraction recorded, all the hormones monitored, and every physical reaction measured, but is that really laughter? Laughter is an expression of mirth and delight. Scientific research has even found that one of the best predictors of long-term relationship health is the ability to laugh together; researchers at the University of Seattle say couples who have the best chance of staying together long term should be chuckling with each other at least once a day.(6) Indeed, a sense of humor is usually one of the most desirable qualities in a person, not only in a relationship but also in a friendship. Laughter is contagious, and people want to be around somebody who makes them laugh. Laughter produces such positive benefits that it is no wonder an entire field of therapy has developed simply to harness its power. Overall, it is impossible to say exactly what makes people laugh; it is different for every person. And for all the measurements taken, there is still no definitive answer.

References

1)Dictionary.com, definition of "laughter"

2)Thesaurus.com, synonyms of "laugh"

3)Howstuffworks.com, How Laughter Works

4)Jesthealth.com, Humor:An Antidote to Stress

5)Globalideasbank.org, Why Did Laughter Evolve?

6)msn.com, "How To Diagnose A Healthy Relationship"


The Anxiety of Anti-Anxiety Medications
Name: Brianna Tw
Date: 2003-11-12 22:16:02
Link to this Comment: 7242


<mytitle>

Biology 103
2003 Second Paper
On Serendip

The Anxiety of Anti-Anxiety Medications

Nineteen million Americans (approximately one in eight) aged 18-54 suffer from anxiety disorders. (1) When I heard this statistic, I realized how important the discussion of such disorders is to the sciences: one eighth of the most productive portion of the US population suffers from an anxiety disorder. The National Institute of Mental Health (NIMH), a division of the National Institutes of Health, is committed to researching the causes and treatment of such disorders. (2) Progress has been made, by comparing studies of animals to studies of humans, in pinpointing the specific areas of the brain involved. Anxiety is associated with fear: fear of a specific object or situation, generalized fear and worry, recurring fearful memories, and so on. The NIMH has found that a specific portion of the brain, the amygdala, controls the body's automatic response to fear. When the brain is confronted with fear, it takes two courses of action. First, it transmits information to the cerebral cortex (the thinking part of the brain) to inform it of what specifically is endangering the individual. Second, it transmits the same information to the amygdala, so that the body might prepare for action.
Beyond this, not much is known regarding the causes or mechanics of anxiety. Granted, understanding which portions of the brain are affected by or control anxiety is an important step. However, little conclusive evidence has been gathered, and few useful conclusions reached, regarding anxiety.
With this information in mind, I began thinking of my personal experiences with anxiety. On one occasion I went to the emergency room with difficulty breathing and dizziness. It was concluded that I was suffering from an anxiety attack, and I was offered Xanax. I refused the medicine until I could better research what I would be taking. Much later, I attended counseling in an effort to deal with anxiety issues, and once again was offered anti-anxiety medicines, otherwise known as anxiolytics.
Clearly, regardless of the inconclusive evidence regarding the causes of anxiety, the medical professions are quick to administer medicines when faced with a patient suffering from anxiety. My personal encounters are not the only evidence: at Bryn Mawr, through counseling services, I know many students who have received anxiolytics. Of course, there is an evaluation process. Nonetheless, many students are able to receive medication despite the inconclusive evidence about the causes of anxiety. Additionally, the statistic that one in every eight adults suffers from anxiety proves true amongst my peers; in fact, the proportion is significantly greater. Of my fifteen closest friends, both at school and from home, nine have suffered anxiety attacks, two have received medication for anxiety, and two for depression.
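The claim that nine of fifteen far exceeds the one-in-eight baseline can be checked with a quick binomial calculation. This is only a rough sketch: a circle of friends is not a random sample, so the numbers are illustrative rather than a real significance test.

```python
from math import comb

n, k, p = 15, 9, 1 / 8  # friends sampled, those reporting anxiety attacks, national rate

# P(X >= 9) under Binomial(n=15, p=1/8): the chance of seeing nine or more
# cases among fifteen people if they matched the one-in-eight national rate.
p_tail = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
print(f"P(at least {k} of {n}) = {p_tail:.2e}")
```

The tail probability comes out vanishingly small, which is consistent with the author's observation that her peer group runs well above the national figure (though self-selection among friends is the more likely explanation than chance either way).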
I have several concerns with this issue. Many medicines given to treat anxiety have a plethora of negative side effects, everything from insomnia to addiction. While information is available regarding the possible long-term effects of anxiolytics, evidence of their direct connection or effect on anxiety is not available. The question at hand, then, is whether or not it is useful- or even safe- to administer medicines for a disorder of which there is little information or understanding.
To better understand this question, information regarding anti-anxiety drugs is needed. It should be noted that frequently medication is accompanied by psychotherapy. There are many varieties of anti-anxiety medications, but they are all of two types: benzodiazepines and antidepressants. Because there are so many variations of the two, I shall only discuss in detail the two separate types.
Benzodiazepines, which include medicines such as Xanax, Versed, and Restoril, depress the central nervous system (CNS) at the limbic system, which controls automatic functions; at the reticular formation of the brain stem, which controls respiration, heart rate, posture, and state of consciousness; and at the cortex, where most of the brain's neurons are located. (3) These areas of the brain are associated with fear and the brain's response to it. A benzodiazepine interferes with this process; however, I could not find information on how. Because it is not clear exactly what happens in the brain to produce anxiety, understanding how a medicine might interact with that process is impossible.
Benzodiazepines are used to treat many disorders or illnesses besides anxiety. This list includes insomnia, catatonia, alcohol withdrawal symptoms, convulsions, depression, mania, bipolar affective disorder, and PMS. Considering this list, I find the use of such drugs to be very interesting. For one, a benzodiazepine is used to treat insomnia, the inability to sleep, as well as catatonia, which may manifest itself in the inability to respond to stimulus, i.e. a lack of motivation. These are two seemingly opposite illnesses. Additionally, the drug is used to treat PMS, which, though potentially extreme, is a much milder illness than mania. If these drugs can treat such a wide variety of illnesses or disorders, what does this say about the nature of the drug? How can a drug treat two opposing disorders?
Besides these questions, one must consider the adverse effects of benzodiazepines. The most common side effect is generalized sedation: drowsiness, fatigue, confusion, dizziness, and so on. Benzodiazepines may also cause paradoxical agitation, such as insomnia, hallucinations, nightmares, euphoria, and rage. Additionally, most benzodiazepines are highly habit-forming as well as tolerance-building. Thus intake of the drug must increase over time, which may result in toxic doses, and discontinuing use of the drug can result in withdrawal.
Just as with benzodiazepines, there are many variations of antidepressants. For the purposes of this paper, I shall only discuss Selective Serotonin Reuptake Inhibitors (SSRIs), which include medicines such as Paxil and Prozac. The exact way in which SSRIs work is unknown; however, it has been concluded that they cause a "down-regulation" of receptors by blocking the reuptake of serotonin. (4) Antidepressants are used primarily for generalized anxiety disorder, a general feeling of worry or anxiety, and for posttraumatic stress disorder, recurring anxiety at the recollection of a specific traumatic memory. Possible side effects of SSRIs include insomnia, chronic fatigue, mania, and headache, among others. However, most informational websites state that side effects cannot be anticipated for most patients. (5)
This information produces several interesting discussions. It is important, in this discussion, to bear in mind that the health of the patient is the top priority. That having been said, I would like to consider whether it is good for the patient to use drugs such as benzodiazepines and antidepressants for the treatment of anxiety. For one, the causes of anxiety are unknown. Second, the exact mechanisms of these medications are unknown. Putting these two statements together, doctors are using a drug that they do not understand to treat an illness of which there is no clear conclusive evidence. Granted, patients who use these drugs do more often than not experience alleviation of anxiety symptoms. However, the exact way in which their anxiety is being relieved is unknown. How responsible is this of the scientific community?
An additional reason to be wary of these medications is their side effects: they can in fact induce the very symptoms they are meant to treat. For a patient suffering from anxiety, episodes of mania are very dangerous. This is a big risk to take, in my opinion, in the hope that a medicine will in some way alleviate an illness that no one has figured out.
There are, of course, benefits to this method of treatment. Once researchers figure out the ways in which antidepressants and benzodiazepines interact with the brain, they can use this knowledge to decipher how anxiety operates within the brain, assuming the medications directly treat anxiety. It is a process of elimination of sorts, one step in the scientific process of discovery. Thus, perhaps the prescribing of such medications will aid researchers. However, I think the scientific community ought to bear in mind the potential dangers of this method before applying its results directly to patients.


References

1)National Institute of Mental Health
2)National Institute of Mental Health
3)Neuropsychology Medical Resources
4)Neuropsychology Medical Resources
5)PDR Health


Are you sick, or do you just want attention?
Name: Lara Kalli
Date: 2003-11-13 17:37:27
Link to this Comment: 7251


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Most of us, in our youth, were probably asked this question in some form or another at least once by our parents; and most of us would probably admit to having faked being sick at least once in our lives. It is interesting, then, to note that there seems actually to be a pathology associated with this kind of behavior known as Munchausen syndrome.


What, technically, is Munchausen syndrome? According to the Merck Manual, it is "Repeated fabrication of physical illness - usually acute, dramatic, and convincing - by a person who wanders from hospital to hospital for treatment." (1) People suffering from this disorder will even go so far as to inflict physical harm upon themselves in order to get the attention they want. Generally, it is associated with a past history of severe neglect and abuse inflicted upon the subject. It is important at this point to differentiate between Munchausen and two other pathological behaviors for which it might be mistaken: unlike hypochondriacs, Munchausen sufferers are conscious of the fact that they are not genuinely sick (2); unlike malingerers (people who fake or induce the symptoms of illness for some external gain, such as the prescription of painkillers (3)) the behavior of an overwhelming majority of Munchausen sufferers cannot be attributed to conscious motives. (1)


A far more alarming variant of this disorder, known as Munchausen syndrome by proxy, has also been documented. In these cases, the subject fabricates the existence of physical illness in another person, usually the subject's child. The same sorts of behaviors occur - faking or simulating the symptoms of illness, resorting to physical harm in order to induce those symptoms. Even though the parent - the Munchausen sufferer - will always appear to be deeply concerned for the child's welfare, her actions will not infrequently result in the child's being severely deformed or even dying. (2) Both variants of this disorder are highly uncommon.


At present, people with either Munchausen syndrome or Munchausen syndrome by proxy are seldom, if ever, treated with drugs. Standard methods of management and treatment include early recognition of the disorder and years of intensive counseling; many doctors believe that the disorders are not treatable, inferring from the nature of the disorders that giving the subject medical attention would in fact heighten the severity of their pathology. (2) Munchausen syndrome and Munchausen syndrome by proxy are rarely treated successfully. (1)


Current research has not been able to determine any biological basis for Munchausen syndrome, due to its extreme infrequency and the fact that when it has been determined by doctors that an inpatient at a hospital has the disorder or is the victim of abuse by someone with the disorder, that person usually flees. So what can be said of this disorder? I would like to advance my own thoughts on the subject here:


Munchausen syndrome seems to me to be reminiscent of two other, much more documented mental illnesses: antisocial personality disorder and dependent personality disorder. People with antisocial personality disorder - sociopaths, as they are more commonly known - are unable to distinguish "right" or "good" behavior from "wrong" or "bad" behavior; they seem to have little concern for anyone's personal safety, including their own; they are often impulsive and pathological liars; furthermore, they frequently seem to exhibit no remorse for any of their actions that might have had negative consequences. (4) Munchausen sufferers seem to exhibit many of these same characteristics to varying degrees, most notably the pathological lying aspect. Furthermore, it has been hypothesized that Munchausen sufferers, similar to sociopaths, might get a distinct measure of satisfaction from successfully fooling doctors into thinking that they are sick. (5) The parallel continues in Munchausen syndrome by proxy subjects, as these people exhibit a disregard for or inability to comprehend the effects that their actions have on the children that are their victims. People with dependent personality disorder, as one might imagine from the name, are characterized by "a pervasive and excessive need to be taken care of that leads to submissive and clinging behavior and fears of separation." Notably, people suffering from this disorder have a tendency to go to great lengths to receive the attention they desire. (6) This particular symptom is also, as we can see from the above characterization, one of the most salient features of Munchausen syndrome. It would be, I believe, a very rewarding and enlightening task to study Munchausen syndrome with these two other disorders in mind. 
Of course, it is important to remember that very little is known for sure about Munchausen, and so the majority of theories advanced about it are more conjecture than anything else; the thoughts I have presented must be construed more as questions than statements.

References

1. the Merck Manual entry on Munchausen and Munchausen by proxy

2. an overview article on Munchausen and Munchausen by proxy

3. Dr. Marc Feldman's website on factitious disorders

4. Internet Mental Health's information database on antisocial personality disorder

5. an account of a woman who recovered from Munchausen syndrome

6. Internet Mental Health's information database on dependent personality disorder

Related Reading

a recovered Munchausen patient's first-hand account of the illness

an account of a doctor's encounter with a Munchausen sufferer (skip to the section titled "A Case of Munchausen")

an account of a person suffering from factitious bereavement

a detailed description of Munchausen syndrome by proxy


The Selfish Gene
Name: Melissa Ho
Date: 2003-11-13 22:16:25
Link to this Comment: 7255


<mytitle>

Biology 103
2003 Second Paper
On Serendip


"We are survival machines—robot vehicles blindly programmed to preserve selfish
molecules known as genes."
-- Richard Dawkins, The Selfish Gene (1).


Can genes alone determine your DNA's place in the next generation? Are humans simply vessels for these genes?

With his provocative work The Selfish Gene, Richard Dawkins attempts to answer such questions as he proposes a shift in the evolutionary paradigm. Working through the metaphor of a "selfish gene," Dawkins constructs an evolutionary model with the gene as the fundamental unit of selection, as opposed to the more commonly accepted view of the species as the unit of selection.

This "selfish gene", possessing a certain selfish emotional nature, acts as an independent entity fighting to ensure its replication in future generations, maximizing its number of descendents (2). Those successful in replicating have made the most of their given environment (1). For the interests of this paper, is it valid to assume that natural selection occurs at the level of DNA? Hence, what can be implied about genetic predispositions?

For Dawkins, evolution depends upon the transmission of this information to the next generation; the individual is irrelevant (2). This theory is a departure from Charles Darwin's theory of evolution, which concentrates on the species. Individuals, to Dawkins, are "survival machines" whose purpose is to host these genes; individuals are mortal and fleeting, whereas genes are not (2).

Is it valid to assume, as Dawkins does, that humans are merely "robot vehicles"? This concept, which alienates emotional, physical, and cultural growth from evolution, can be startling. By placing natural selection at the level of DNA, all humanistic aspects are removed. Inherent complexities arise, as the individual is of little importance other than to provide shelter and nourishment, so to speak. Varying schools of philosophical and scientific thought could raise ethical and biological counterarguments to this theory.

Dawkins' gene is a personified entity, almost to the point of being an independent being. The "machines", therefore, are subjected to programming of sorts by the genes. Capable of selfish and altruistic behavior, the gene "reaches" outside of the human body to interact with its environment (3). "With only a little imagination we can see the gene as sitting at the centre of a radiating web of extended phenotypic power," stated Dawkins (3). By granting "phenotypic power", the genotype (as determined by the interaction of genes) behaves in a manner that dictates the phenotype, or physical expression of the gene. Following this pattern of interaction between the gene and its environment, it is arguable that the environment is actually governing genotypic behavior. By this reasoning, the environment is not merely a factor manipulated by the gene, but can instead manipulate the gene itself.

Apply the above reasoning to the concept of genetic "predisposition" to maladies and conditions. In Dawkins' theory, only the "strong" genes persist. One, therefore, might assume that only the most preeminent and healthiest genes exist. Given this predilection for genetic superiority, why do maladies exist? One response could be similar to the idea outlined above: the environment's role in phenotypic expression is dominant to that of the gene. Alternatively, the "bad" genes leading to disease and illness are actually the dominant units. These malignant genes could be a natural mechanism of population stabilization along the survivorship curve. The environment serves a godlike position, choosing those who will carry on and those who will perish (3).

Genes are malleable entities. A gene interacts with its environment; it is tested, processed, articulated, and modified with time, responsive to necessary changes in order to maximize survival and "replication" (3). Change is inevitable and imperative. Author Oliver Morton best describes this dynamic:

Just as organisms are interpretations of genetic information within a specific environment so the use of this genetic knowledge will depend on the environments—economic and ethical, personal and political—in which that use is made. But those uses, good or ill, will surely be made. The genes that imperiously limited and permitted will be bent to human will; limits will become movable, permissions stretched. Genes have never been the complete masters of human destiny, but nor have they been humanity's servants. Until now. (3).

As humankind progresses, so does its ability to become the master of its own fate. Dawkins' theory of the "selfish gene" offers one possible account of how humans evolve. While he offers intriguing insight by bringing the evolutionary process to the micro-level of DNA, the personification of genes is an exceptionally difficult idea to support.

When observing genetic maladies, it is difficult for Dawkins' theory to hold completely; because it cannot encompass anything more than the survival of "superior" genes, it fails to explain flaws. The relationship between genes and their environment is best explained by offering the genes as a framework for evolution, with the environment as the substantive filler determining how that framework will ultimately look and act.


References

Works Cited

1) "The Selfish Gene", the opening pages of and selections from Dawkins' work

2) The Selfish Gene Theory, explanatory site providing an overview of the theory

3) "The Selfish Gene?" Reason in Revolt, genetic issues and Dawkins discussed


Works Consulted
4) In Defense of Selfish Genes, Dawkins' rebuttal to claims made about his theory by Mary Midgley

5) Selfish Genes and Social Darwinism , Counterarguments for Selfish Gene Theory

6) The Selfish Gene: The Underpinnings of Narcissism, further discussion and implications of Selfish Gene Theory


First and Second Language Acquisition
Name: Margaret T
Date: 2003-11-13 22:57:38
Link to this Comment: 7256


<mytitle>

Biology 103
2003 Second Paper
On Serendip

In our everyday lives, the origin of our ability to communicate is not often taken into consideration. One doesn't think about how every person has, or rather had at one time, an innate ability to learn a language to total fluency without conscious effort, a feat seen by the scientific community "as one of the many utterly unexplainable mysteries that beset us in our daily lives" (3). Other such mysteries include our body's ability to pump blood and take in oxygen constantly, seemingly without thought, and a new mother's ability to unconsciously raise her body temperature when her infant is placed on her chest. But a child's first language acquisition is different from these phenomena; different because it cannot be repeated. No matter how many languages are learned later in life, the rapidity and accuracy of the first acquisition simply cannot be repeated. This mystery is most definitely why first language acquisition, and subsequently second language acquisition, is such a highly researched topic.

On the surface one would look at child first language acquisition and adult second language acquisition and see similarities. In each case the learner first learns how to make basic sounds, then words, phrases, and sentences; as this learning continues, the sentences become more and more complex. However, when one looks at the outcomes of these two types of acquisition, the differences are dramatic. The child's ability to communicate in the target language far surpasses that of the adult. In this paper, the differences between these two processes that almost always produce such different outcomes will be explored.

Before this exploration begins, however, I would like to state that I am looking at child first language acquisition and adult second language acquisition because they both seem most relevant to our lives right now, as college students who mastered our first language at a young age and are most likely attempting to master our second as adults. One could also look at situations where only one variable is changed (e.g. child first vs. child second, or child second vs. adult second), but these comparisons are not represented in this paper.

The first area of difference between first (L1) and second (L2) language learning is input, specifically the quality and quantity of input. It is the idea of the "connectionist model that implies... (that the) language learning process depends on the input frequency and regularity" (5). It is here that one finds the greatest difference between L1 and L2 acquisition. The quantity of exposure to a target language a child gets is immense compared to the amount an adult receives. A child hears the language all day every day, whereas an adult learner may only hear the target language in the classroom, which could be as little as three hours a week. Even for an adult in a total immersion situation, the quantity is still less, because the amount of one-on-one interaction that a child gets, for example with a parent or other caregiver, is still much greater than what the adult receives.

This idea of one-on-one interaction versus a classroom setting (where an instructor could be speaking to twenty or more students) also ties in with the idea of quality. It is much easier for a parent or caregiver to engage the child in what he or she is learning. It is hard, however, for a teacher to make the topic being learned relevant to the students' lives. This can lead to a lack of concentration and a lack of motivation, something that will be revisited later.

The next great and obvious difference between L1 and L2 learning is age. A large part of this train of thought is the idea of a "critical period," or the "time after which successful language learning cannot take place" (4). This time is usually aligned with puberty. This change is significant because virtually every learner undergoes significant physical, cognitive, and emotional changes during puberty.

There are three main physical changes one undergoes in regard to language acquisition. The first is the presence of muscular plasticity. A child's plasticity goes away at about the age of five. After this age it is very hard for a learner to fully master the pronunciation of a second language. The second change is in one's memorization capabilities. It is fairly well known that a person's ability to hold large amounts of information reaches its peak fairly early in life and then begins to decrease. This is seen most dramatically in very old individuals. The third physical change that occurs is neurological.

"As a child matures into adulthood, the left hemisphere (which controls the analytical and intellectual functions) becomes more dominant than the right side (which controls the emotional functions)." (2).

This idea is called the Lateralization Hypothesis. The significance these specific neurobiological changes have on language learning will be discussed below.

The one advantage adults seem to have over children is their cognitive ability. Adults are better able to benefit from learning about structure and grammar. Unfortunately, this slight advantage does not help adult second language acquisition in general. In fact, this ability almost hinders them, in that they analyze too much. Specifically, they cannot leave behind what they know about their first language, which leads to a tendency to overanalyze and to second-guess what they are learning.

The final area that puberty changes is the emotional, or affective, realm. Motivation is much affected by emotional change. A child's motivation is simple: in order to communicate and to be a part of family and society, the child must master the target language. This motivation is quite weighty, especially when compared to the motivation that adults have, or rather, must find. Adult motivations usually fall into one of two categories: "integrative motivation (which encourages a learner to acquire the new language in order to become closer to and/or identify themselves with the speakers of the target language) or instrumental motivation (which encourages a learner to acquire proficiency for such practical purposes as becoming a translator, doing further research, and aiming for promotion in their career)" (5). One of these types of motivation must be present for successful acquisition to take place.

The final change that takes place has to do with egocentricity. Children are naturally egocentric. While learning their language they are not afraid to make mistakes, and in general, they do not feel abashed when they are corrected. Also, their thoughts usually do not surpass their language ability. Adults, on the other hand, usually suffer from a fairly large amount of language learning anxiety. Adults often "feel frustrated or threatened in the struggle of learning a different language" (5). Mistakes are seen more as failures than as opportunities for growth. "The adult learner may also feel greatly frustrated, for being only able to express their highly complex ideas at a discourse level of an elementary school pupil" (5). These new emotions leave an adult learner in a slightly helpless position, unable to regain the egocentricity of childhood, which is just one more hindrance in a line of many.

Although the desired outcomes of child first language acquisition and adult second language acquisition are exactly the same, the actual outcomes are in reality quite different. Factors such as motivation, quality and quantity of input and a lack of egocentrism, among many other factors, will forever stand in the way of adult second language learning. In conclusion, because of so many varying factors, both the processes and outcomes of child first language acquisition and adult second language acquisition are extremely different, and are only connected by a common goal.

References

1)Comparing and Contrasting First and Second Language Acquisition

2)First and second language acquisition

3)First Language Acquisition

4) Gass, Susan M., Larry Selinker. Second Language Acquisition. London: Lawrence
Erlbaum Associates Publishers, 2001.

5)Reviewing First and Second Language Acquisition: A Comparison between Young and Adult Learners


Male Menopause: Fact or Fiction?
Name: Enor Wagne
Date: 2003-11-14 16:51:27
Link to this Comment: 7260


<mytitle>

Biology 103
2003 Second Paper
On Serendip

"Male menopause is a lot more fun than female menopause. With female menopause you gain weight and get hot flashes. Male menopause - you get to date young girls and drive motorcycles." (11)


While 'male menopause' has provided both sexes a variety of jokes and frustration, there are researchers and scientists studying the alleged condition with great seriousness. Those who support the existence of male menopause feel strongly that its effects on the male mind and body should be regarded with the same credence that society attributes to female menopause.

Male menopause begins with declining testosterone levels and is eventually characterized by the following symptoms: hair loss, depression, a slower immune system, weight gain, less stamina for physical activity, forgetfulness, irritability, and loss of or reduced interest in sex. (5) Impotence may also occur. Usually this "change" arises between the ages of 40 and 55, although it has been known to transpire as early as 35 and as late as 65. (6) Several different clinical terms exist for the popularized term "male menopause," such as "andropause" or "viropause." (2) Andropause was named for "androgen," the class of hormones that includes testosterone. Androgen replacement is also the therapy used to treat a man suffering from male menopause; this treatment comes in the form of injections, skin patches, or liquid gel. (5) To be diagnosed with male menopause, one must have reached the eligible age and then undergo a physical exam wherein blood samples are taken. These samples are tested for hormone levels. If the samples show low levels of androgen and the patient seems to be suffering from the symptoms associated with male menopause, a physician will most likely prescribe a hormone replacement treatment.

There are a number of arguments against the possibility of there existing such a thing as "male menopause." First, it has been estimated that approximately 25 million American men are currently going through male menopause, and researchers expect that number to double by the year 2020. (6) However, this hypothesized statistic does not seem realistic. Such a statement implies that more men will go through the change in the future than are going through it today. How can the number of men afflicted double? It is not contagious, and there is no biological proof of a gene that would help yield the effects of male menopause.

The second objection to the term "male menopause" is that it does not occur in ALL men as it does in all women. At least, research does not show it affecting all men. In a woman's case, she is biologically predestined to endure menopause. Her chance of escaping the most feared female phase of life is as unlikely as men ever attending Bryn Mawr undergrad. For men, male menopause is a matter of percentages; for women, it is simply a fact of life.

Lastly, the term menopause has been attributed to women for a reason. The prefix 'meno' derives from the Greek root for month, which also underlies the Latin 'menses,' meaning menstruation, while the suffix 'pause' signifies a halt. Women experience menopause because their levels of estrogen, upon reaching a certain age, drastically drop, causing their menstrual cycles to cease. When men undergo so-called "male menopause" or "andropause," their hormone levels decrease slowly, not abruptly. And while men may suffer from sexual side effects or loss of sexual appetite, their ability to conceive offspring is in no sense taken away. Thus, if there exists such a thing as male menopause, it should be properly renamed, for the connotations that ensue are false and misleading.

Once the term "male menopause" has been rightly renamed, it is understandable that certain symptoms, health problems, and its predictable timing lead the public to regard "andropause" as the male equivalent of the female "menopause." However, with this misapprehension clarified, the similar experience of hormone depletion in both men and women is cause for concern. If women as a populace have been supplemented with estrogen once menopause occurs, and men altogether have not been, could this be one reason that women tend to live longer than men? In America the average life expectancy of a man is 74 (8); for women it is 80. (9) Do the supplemented hormones augment the life expectancy of women? Do neglected hormone deficiencies take a toll on the aging process? It is true that if a person has hormone levels lower than they had in their younger years, the depletion will eventually lead to a more rapid aging process. Perhaps, if society treated 'male menopause' as gravely as it treats female menopause, urging men to test their hormone levels once they reach a certain age, men would eventually live as long as women do.

One of the problems with society is its obsession with 'the male mid-life crisis,' a hopeless sitcom cliché. (1) Most people believe that the terms 'male menopause' and 'mid-life crisis' are interchangeable. However, they mean completely different things. Andropause (male menopause) occurs due to the decline in androgen in some men at the midpoint of their lives. Men usually experience a midlife crisis as a result of the change, because they do not know how to deal with their lessening hormones. (10)

While there are many differences between male and female menopause, one similarity rises above the rest, tying men and women into one undeniably human knot of anatomical fate: all women and most men suffer from hormone depletion past a certain age. It should be taken into consideration that hormone replacement therapy is not a natural process; if it did not exist, what would happen? Would a certain age inevitably mark a genderless phase of life? Women would have low or no estrogen levels, and men, since the number suffering from male menopause is allegedly increasing, would eventually lose their testosterone. Would the elderly become an androgynous human species, with only the once-reproductive parts of a woman and man? Would this breed make society reflect on the trivial gender wars and gender discrimination that exist today and wonder what it was all for? If we eventually all become just human, what is the point of stereotypes and unequal pay and gentlemen's clubs and feminism? Wouldn't it all be useless debate and tiresome justification if we all naturally become genderless anyway?

While the genderless proposition may be a bit extreme, it is still a necessary question we must ask ourselves in considering the possibility of male menopause as a biological truth. Although male and female menopause cannot be parallel, for reasons of terminology, universality, and biological repercussions, enough comparisons can be made between the two midlife trials to relate them to one another.

References

1)CNN Science Page, article produced by CNN entitled "Male Menopause: Is It for Real?" explains the differences between male menopause and the male midlife crisis

2)CNN Science Page, another article to follow up the first which explains androgen treatment therapy

3)CSUN Research Page, this page offers links to different areas of interest in male menopause

4)BBC Science, article arguing against the possibility of male menopause, instead calling it laziness

5)Monterey Clinic Page, article called "Andropause, the Male Menopause" explains the treatments available for men undergoing andropause, also offers a number of statistical information relating to male menopause

6)Today's Healthy News, an article relating to male menopause which discusses the alleged augmentation in the number of men that will experience male menopause

7)Human Development Report, provides the average male life expectancy statistics for all countries

8)Human Development Report, provides the average female life expectancy statistics for all countries

9)ABC Science, short article by ABC questioning the reality of male menopause

10)ABC Interview, interview with Jed Diamond, author of Male Menopause, a 1998 best seller, who used to be a disbeliever in male menopause but now firmly credits the possibility of a male 'change'

11)Women Joke Page, Male menopause joke


Sexual Selection: Fact or Fiction
Name: Natalya Kr
Date: 2003-11-16 23:11:14
Link to this Comment: 7281


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Darwin's theory of sexual selection is an intriguing one because it offers an explanation of human striving and cultural value systems. The theory is that humans who are more sexually desirable will have more offspring, and thus their traits will be passed on to future generations to a greater extent than those of less sexually desirable humans. Whereas Darwin's other theory, natural selection, in which those best adapted to their environment are more likely to pass on their genes, is often summarized as "survival of the fittest," you might call sexual selection "survival of the sexiest." The theory is intended in part to explain why, when humans diverged from other primates, the human brain tripled in size in just two million years. At first glance, this theory also seems to explain much of the motivation behind human culture and achievement. Upon closer inspection, there are some fairly conspicuous problems with it, especially when it is extended to describe not only human evolution in the distant past but also the present, but it may still be the most plausible explanation available for why humans' mental capacities have expanded so far beyond those of our primate relatives.

It makes complete sense that we would be biologically driven to prove our sexiness. At the most basic level, this could explain the plenitude and popularity of fashion magazines for young women and the emphasis on being good at sports in school for both genders. Beyond this, it could also explain why men and women are driven to succeed at their various careers, or to be perceived as successful, smart, witty, fun-loving, good-looking, responsible, or any of a number of things that humans aspire to be which are also sexually attractive. The drive for achievement could be rooted in their biology - and their desire to be considered sexually desirable.

What is interesting is that while it seems logical that the desire to succeed is rooted in the desire to appear sexually desirable (and by that standard, many people are trying very hard to be sexually desirable), in this day and age sexual desirability bears little or no direct correlation with the number of offspring one produces. In fact, extremely sexually desirable people, such as supermodels, billionaires, sports stars, and affluent people in general, tend to have fewer children than those who are less sexually desirable by this definition. Even so, many of us are highly motivated to prove our sexual desirability, but the purpose of doing so, if it ever existed, seems to have been lost or distorted.

Was it ever really true that the more sexually desirable people had more offspring, or is this theory only speculation? According to Geoffrey A. Miller, a senior research fellow at University College London, anthropological data show that in our hunter/gatherer days good hunters had more extra-pair copulations than poor hunters, but that is hardly concrete evidence that good hunters actually produced more children than poor hunters (4).

Miller rejects both the rapacious male/helpless female and the choosy female/displaying male models of sexual selection. (Studies done with primates suggest that male and female hominids exercised mate choice and followed a pattern of serial monogamy, and that rapists would have been ostracized or killed.) Instead he believes that over the course of evolution, both sexes have had substantial choice in sexual partners, and even in the case of arranged marriages, the parents have exercised choice of sexual partner for their children based on certain desirability standards.

If the theory does hold, and, in the past at least, those who were more sexually desirable did indeed have more offspring, that would imply that, over the course of human history, human generations were getting sexier and sexier, meaning that whatever traits were valued in terms of sexual selection were becoming magnified in the human population over time.

Some of the diverse traits that Miller suggests as possible factors in sexual selection are art, morality, language, and creativity. In the case of art, his argument is that it acts as an extended phenotype, basically demonstrating the biological fitness of the creator. The argument for morality is based on modern (unsubstantiated) sexual abhorrence of selfishness, lying, and cheating, and a study by David Buss (1989) which found kindness to be the most desired trait in a mate across 37 cultures (4). In addition, traits like conspicuous magnanimity are good fitness indicators. The language argument is predicated on the belief that there are more words in most languages than are necessary for survival, so they must serve some "self-advertisement" function. In addition, he claims that today vocabulary is more influential than any other mental trait in mate choice. In terms of creativity, with the assumption that courtship entails a great deal of conversation, Miller concludes that there would be plenty of time before the probable time of conception for either lover to decide to dump the other on the basis of speaking, listening, thinking, remembering, storytelling or joke-telling ability. (He cites Cyrano de Bergerac as an example of these sexually desirable abilities at work.) In addition, he theorizes that early humans, like humans today, would become bored with a predictable mate, so unpredictability, a form of creativity, was probably a sexually selected trait. (4)

The problem with this theory is that it is largely speculative. You could claim sexual selectivity for any trait that humans today possess, and I see no reason why positive traits like creativity and morality should come before negative ones like unimaginativeness and immorality, which are also in abundant supply today. Sometimes the positive and negative are even inextricably linked.

Creativity, which Miller credits partially for the rapid evolution of the human brain, has also been linked with mental illness and attention deficit disorder (2). Thus creativity can often be detrimental to the individual who has it or the society in which they live, but on other occasions it can lead people to create great inventions and works of art. Because of these potential detrimental effects, it would not be useful for everyone in a society to be extremely creative, because this would probably mean that a large proportion of people would also be mentally ill or have ADD, and pandemonium would ensue (2). Conversely, if a small portion of the population possesses these traits, the detrimental effects remain manageable, and the rewards are still great. In this case, the theory of sexual selection might not be applicable, because it would not be beneficial to the species to produce such unstable elements in greater and greater numbers.

Furthermore, who is to say that everyone is attracted to the same characteristics? Some people may value kindness in a mate, but others may consciously or unconsciously like to be treated cruelly. In addition, I see no reason why people who are more sexually desirable would have more offspring, even in earlier stages of evolution. Sure, they might have a greater choice of potential mates, but does that mean they would necessarily have children more often than ugly, unsuccessful cavemen and women did with each other? After all, just because they are more sexually desirable does not necessarily mean they enjoy sex more or have more children, and just because they have more choices in terms of sexual partners does not mean that undesirables were left without anyone with whom to copulate.

In more general terms, I see no reason to believe that the human race was or is becoming more creative, ethical, or verbal with each passing generation. Instead, it seems much more logical that as a species we maintain a stable proportion of people with varying creative and verbal capacities, and that ethics are only marginally, if at all, genetic. The one instance in which this theory makes any sense at all is over the course of the two million years when human brain size tripled. However, even then, natural selection, not sexual selection, or perhaps a totally different force, may have been responsible for this dramatic change. Maybe certain traits were considered culturally unacceptable, and so no one was allowed to mate with people bearing these characteristics, even if they were sexually desirable - one could call this cultural selection.

On the other hand, what other way is there to explain the importance most people attach to being sexually desirable: beautiful, smart, sociable - successful. What other biological reason is there for us to be so driven not merely to survive, but to succeed, to prove ourselves, and why else would we have so much of our emotional well-being wrapped up in whether we succeed or not? According to the theory of sexual selection we care about success because success makes us sexy, and reproduction is the biological purpose of life. Still, it is clear that if sexual selection ever did function, it has become completely perverted (no pun intended) now; the most successful people do not have the most offspring. It is difficult to judge whether or not sexual selection functioned in the distant past because there is only patchy data to support or give evidence against it.

The other problem I have with this theory is that it is too deterministic. It attributes too much to genetics and not enough to human striving. Vocabulary, for example, is acquired, not inherited. If it is highly heritable, as Miller suggests, this is probably because children generally learn to speak from their parents, not necessarily because they share a similar language learning ability. Ethics are even more difficult to attribute to genes. Decisions in general (and ethical ones should be no exception) tend to be based on past experience, not DNA.

The perspective that our genes are the only thing attracting us to each other is fundamentally depressing. What is the point of human striving if everything is predetermined by our genes? Or is the only point of all of our striving to prove to society and potential mates the genetic capability we had all along: is success an extended phenotype? The truth may be that even we don't know the true extent of our genetic capability until we try to test it out in the real world. Still, true ability always remains ambiguous, because there can always be barriers to success even if you do have the genetic capability for it. Certainly one can't achieve things without a certain innate ability. Perhaps we all strive to achieve in order to reach our full genetic potential so that others will know the full extent of our genetic desirability. We are frustrated when we fail because it means either that we aren't living up to our potential or that we don't have the potential at all. Luckily, we can give ourselves the benefit of the doubt, because as of yet there is no way to absolutely measure "potential", or our genetic make-up. Luckily, as well, if we don't have certain sexually desirable traits, like physical attractiveness or intelligence, we can always have others, like kindness and magnanimity.

Humans are relatively more creative, verbal, artistic and ethical (by our standards) than other primates, and the theory of sexual selection does provide a plausible explanation for this. Early humans probably did select mates who had relatively greater capacities in these areas. This would explain why brain size increased so rapidly. However, the theory does not necessarily extend to recorded history. Even if this theory were once true, it does not appear to hold true anymore, because those whom we consider the most successful and desirable are not producing the greatest number of offspring, and so the traits being selected for in future generations may be entirely different. Ironically, we may be spending our lives trying to prove we have traits which evolution is actually selecting against.

References

1) Creativity, Evolution and Mental Illnesses

2) Evolution, Creativity, and ADD

3) Sexual Selection and the Mind: A Talk with Geoffrey Miller

4) The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature


Is War Unavoidable?
Name: Maria Scot
Date: 2003-11-17 14:14:37
Link to this Comment: 7290


<mytitle>

Biology 103
2003 Second Paper
On Serendip

The question that I sought to answer with this paper was whether humans are biologically destined to wage war on one another; admittedly, something of a broad topic. It seemed to me from news headlines and various history classes over the years that wars, in general, are fought over race, ethnicity or religion. Obviously, the divides that exist between two ethnic groups often don't surface in the form of war or conflict until an issue such as territory comes up. Yet even in territory disputes, the conflict itself is still rooted in the distinction the two sides see in one another: "no, you can't share this lake with us because you look different/speak differently/worship a different god". Race is not a voluntary trait; it is genetically determined. Ethnicity is, to some degree, a plastic concept, created by human perception of boundaries and distinctions. Religion is an identity that one actively assumes; it involves participation and the adoption of a belief system. From this, one can see that the nature of the distinction is not so important as the distinction itself being made. From there, it would be easy to slip into the assumption that all it takes is the presence of difference to incite violence between populations; but this, I think, does not give humanity enough credit. My goal in this paper is to present an argument that while perhaps inclined, humans are by no means destined to wage war on one another.

Violence and war are considered by most people, along with reproduction, to be two of humanity's most fundamental instincts. People point to our bloody past, as well as to the prevalence of violence among other animals to whom we are genetically similar, as evidence of the futility of pursuing a peaceful future for the world. It is true that even animals such as chimps, who share 98 percent of their genes with humans, have been known to hunt down and exterminate different groups within their own population (1). Chimps can also be sexually aggressive, violently dominating females who otherwise would not mate with them (4). But another species, the bonobo, uses sex to deal with conflict that arises and does not often resort to violence. There is no reason that we should be more like the warring chimps than the bonobos; ideally, we could occupy a middle ground where we rely on neither violence nor sex alone to resolve problems. The hamadryas baboon shows even more restraint: even though they are fond of peanuts, if one is thrown before two males, both will ignore it, as it is not worth the fight that would ensue (1). Even Jane Goodall, who spent years studying the behaviour of chimps in their natural habitats, does not feel that the violence that is part of their natures has any definitive impact on the behaviour of humans (3). What makes chimps especially interesting in terms of their behaviour towards their peers is that they have an emotional awareness that many animals are without. It poses a question that was well articulated by Dr. Goodall: "How should we relate to beings who look into mirrors and see themselves as individuals, who mourn companions and may die of grief, who have a consciousness of 'self'?" (2) How much of the behaviour of these animals can we see as interesting and relevant to our own?
While studying the behaviour of animals can perhaps provide insight into the issue of our biological predilection for death and destruction, it is also important to remember that these are monkeys and apes, and we are neither.

While most countries have had wars (indeed, almost all have), that does not mean that we are destined to have them in the future. It does not mean that there is a biological basis for war. There are inborn tendencies to defend offspring and to be territorial, and these have no doubt resulted in wars; but if there were another way to deal with those conflicts, would wars still persist? Even if there is no compelling animalistic impulse to go to war or to hurt one another, human history would suggest that there is some compelling force that makes warfare appealing, or at least appear necessary, to the human mind; something that makes people, regardless of location, race or nationality, prone to warring. Chris Hedges, author of 'War Is a Force That Gives Us Meaning' (and incidentally, the man about to marry the mother of my little brother's best friend), describes the appeal of war as giving "us what we long for in life. It can give us purpose, meaning, a reason for living" (1). So perhaps it is not an impulse to wage war that is so central to human nature, but an impulse to have a sense of meaning, a sense of self, both of which can be found on the battlefield, but not only on the battlefield. It is the process of establishing what you are by identifying and attacking those that you aren't.
There is the disturbing fact that militarism appears to have existed in roughly 95% of the societies that we know of (1). Within many of those societies, warriors have traditionally been heralded as heroes (1). And yet the militaristic prowess (or lack thereof) of many countries today is very different from their past. The Swedes have not fought a war for almost two hundred years, yet they are descended from the Vikings, who fought all the time (1). The militaristic sentiments of past generations clearly don't translate genetically to future generations; it is a matter of culture and of choice, two things that we can, if not determine, then at least control.
While it would be idle to argue that there is not a side to human nature that craves conflict, it would also be inaccurate to say that our biology makes endless war inevitable. The behaviour of those animals to whom we are genetically similar, in this case primates, can provide evidence for both arguments: both that we are capable of atrocities (which we already knew) and that we are capable of peaceful conflict resolution. Even now, hopeful evidence is surfacing in the form of studies. In particular, there are the results of a game theory experiment in which the subjects were able to risk everything to gain more for themselves, or to settle for a lesser gain that was reliable and harmed no one else in the group. Around the world, the majority of subjects chose the latter option (1). This suggests that while we are far from perfect, it is not our fate to perpetually engage in self-destructive behaviors.
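The structure of the choice in that game theory experiment can be sketched in a few lines of code. The payoff numbers below are hypothetical, since the cited article (1) does not report them; they simply capture the shape of the dilemma: a risky option with a higher expected payoff versus a safe, smaller gain that harms no one.

```python
# Toy sketch of the game-theory choice described above.
# All payoffs and probabilities here are invented for illustration.

def expected_value(outcomes):
    """Expected payoff of a list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Risky option: a 50% chance of winning 10, a 50% chance of nothing.
risky = [(0.5, 10), (0.5, 0)]
# Safe option: a guaranteed 4 that harms no one else in the group.
safe = [(1.0, 4)]

print(expected_value(risky))  # 5.0
print(expected_value(safe))   # 4.0
```

Even though the risky option has the higher expected value under these invented numbers, the study found that most subjects worldwide chose the reliable, non-harmful option, which is the point the paragraph above is making.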

Sources
1) NYT article: 'Is War Our Biologic Destiny?'
2)Official Chimp Site of Jane Goodall
3) Jane Goodall's main page.
4) NYT article 'Are Men Necessary?...'


You Can't Smell It, But It's The Pain of Your Exis
Name: Diana E. M
Date: 2003-11-18 03:18:32
Link to this Comment: 7303

I sit at a bar and there is something about him that makes him different from the guy behind him. I tell my friend, "Isn't he attractive?" and she says, "not in a million years!" However, I proceed to say that if I were gutsy enough I'd start a conversation with him, as I find him greatly striking, despite my friend's disgust. My friend and I leave the bar and I can't help but wonder about the random attraction to equally random people. What is it about other human beings that gives us "that feeling" way before we even get a chance to know them personally? What is it that sets one person apart from another when it comes to sexual attraction? Is it animal instinct, is it social predisposition, is it mental archetypes, or is it just plain chemical? Is it all of the above? My focus of research will be on the "chemical" part, so if you're a romantic, get ready to kiss cupid goodbye. He's out of arrows and his bow is gone.

Research done on this subject has brought about the theory that sexual attraction could potentially derive from chemical signals called pheromones. These are airborne, mostly odorless chemicals that alter sexual behavior, mark territory, and influence reproduction throughout the animal kingdom. But whether humans send and receive "sex chemicals" is a hot and bothered topic. Recent studies suggest that chemicals emanating from our pores do affect the behavior and biochemistry of others. Fragrance companies have caught "whiff" of this research, and the Internet abounds with products sporting names such as "Primal Instinct" or "Rogue Male" promising to make you an irresistible sex magnet (5).
While many scientists believe that human pheromones exist, they disagree about whether they have identified any specific chemical compound that causes other humans to react to it in a specific way. In the popular understanding, pheromones cause an instinctual, almost automatic sexual response, which scientists call a "releaser" effect. That effect is well studied in animals, but has never been observed in humans. Nevertheless, fragrance companies are focusing on, and funding, research concerning pheromones' potential for sexual arousal (1). We know that animals have signaling chemicals that induce sexual behaviors. For example, a male pig secretes the pheromone androstenone in his saliva, and when the female "smells" it, she goes into a mating stance. If humans do produce pheromones, the underarm is where we might do so, with its many glands and its proximity to a companion's nose. Our sebaceous glands secrete a clear liquid that becomes mixed with thousands of odorless compounds oozing from other glands. Bacteria on our skin break down those compounds into volatile molecules, both odoriferous and odorless, producing an "odor print" as unique as our fingerprints. Any pheromones among them would drift into our companion's nasal passage and stimulate specialized but still elusive receptors. Finding those receptors will resolve the dispute about whether humans have a "vomeronasal organ" devoted to sensing pheromones, as many other animals do, or whether pheromone receptors are interspersed with olfactory receptors in the nasal passage (4).

George Preti, an organic chemist at the Monell Chemical Senses Center in Philadelphia, first began researching this topic in the 1980s in collaboration with Winnefred Cutler of the University of Pennsylvania's psychology department. They hoped to explain the observation that women living together fall into menstrual synchrony, a finding made in 1971 by Martha McClintock, a leading pheromone researcher at the University of Chicago. Preti and Cutler discovered that women exposed to just the underarm extracts of other women adjusted their menstrual cycles to be in synch, and that male underarm extracts made women with irregular cycles more regular. They hypothesized that those underarm extracts must contain pheromones, because the effects could not be explained in any other way, and they were consistent with the way pheromones function in other mammals (1). Recently, it has been reported that male underarm extracts can affect the cycles of a specific reproductive hormone in women. Those extracts also affected the mood of women, making them calmer and more relaxed. On the other hand, a 2001 study conducted with males by researchers at the University of Texas described the smell of a woman's T-shirt as more "sexy" or "pleasant" during the fertile stage of her menstrual cycle than the shirt of the same woman during her infertile stage (3). The reasoning behind both these tests is that three compounds with mixed reputations as pheromones were at play: androstadienone (AND), estratetraenol (EST), and copulins. AND is a derivative of testosterone, EST is a poorly understood relative of estrogen, and copulins are strictly female substances found in human vaginal secretions; copulins have been shown both to elevate male testosterone levels (directly linked to increased sex drive) and to positively affect perceptions of female attractiveness in targeted males. Could AND and EST be human pheromones?
Some scientists are ready to say yes, because these chemicals considerably changed brain patterns, as detected by EEGs, functional MRIs, and PET scans, and induced mood changes (2). However, it needs to be noted, and not overlooked, that those results were obtained from solutions of pure compounds at a thousand times the concentration found in humans.

These studies, however, did not examine physiological or biochemical changes and critics contend that the data did not support those conclusions. They point to inconsistencies with the numbers and to the small sample sizes and short time period of the studies. But the biggest problem is that the experiment can't be replicated because the chemical composition of what was being tested is unknown. Nonetheless, pheromone research is reshaping the fragrance industry, indicating that the romance of scent is not just the fragrance you put on your skin, but also the chemicals that are coming out of your pores. Still, there are so many factors affecting human communication that if pheromones do play a part, it isn't the main part.
So, I did end up developing the courage to approach this nondescript male at the bar, and I proceeded to have the most obtuse conversation possible. The physical attraction remained, but everything else flew out the window of the bar as quickly as I realized my friend's phrase, "not in a million years," was exactly right. There was absolutely nothing interesting about him, nothing that fit my intellectual needs, nothing that suited my ideas of a good partner, of a possible good dad, or even of a possible "show and tell" prospect for my Thanksgiving dinner. So what happened? Well, the pheromones apparently played a role in awakening my sexual desire towards another member of my species, but they certainly did not do the trick of actually following it up. Therefore, when need be, put the blame for your failed relationships where it truly belongs; or, if you still remain a romantic, blame cupid for his deplorable aim.

References:

1) Delude, Cathryn M., "Looking for love potion number nine: Scientists and perfumers are searching for the chemical scent that drives humans wild", The Boston Globe, The Globe Newspaper Company, September 2, 2003
2) Kaufman, Kasey, "Pheromones at First Sniff", WBZ4 News Report
3) Lee, Scarlett, "Pheromones: The Olfactory Love Letter", Varsity Science and Technology
4) Pines, Maya, "A Secret Sense in the Human Nose: Pheromones and Mammals", Seeing, Hearing and Smelling The World, A Report From the Howard Hughes Medical Institute
5) Pheromone fragrances


Are Genetics Responsible for Allergies? A Study In
Name: Melissa Te
Date: 2003-12-03 17:30:15
Link to this Comment: 7450


<mytitle>

Biology 103
2003 Second Paper
On Serendip


Everyone has either suffered from some kind of allergy or knows somebody who has. Allergies are the source of irritating symptoms, ranging from a painless skin rash to life-threatening breathing problems. For years, researchers have been trying to find the source of these allergies. Some have suggested that environmental factors or early exposure to certain foods can cause allergies later in life, while others say that allergies are caused by genetics. To test the latter theory, many researchers study identical twins to see if sets of twins share allergies. If both twins were to share a particular allergy, then this may suggest that allergies are genetic.

To completely understand the remainder of this essay, one must understand the difference between identical twins and fraternal twins. Twin zygosity is the genetic relationship of twins. There are two types of twins: monozygotic twins, also known as identical twins, and dizygotic twins, also known as fraternal twins. Identical twins have exactly identical DNA; they are the same sex and they have very similar physical traits. They come from one egg that is fertilized by one sperm. Some time after conception, the egg splits, resulting in two babies. Fraternal twins share, on average, only half of their genes. They come from two individual eggs that are fertilized by two individual sperm. They are either the same sex or different sexes, and are just like siblings of the same parents born at different times. There are other kinds of twins as well; for example, "mirror-image twins," "polar body twins," and "half-identical twins." These names refer to the time that the egg splits in identical twins. This essay, however, will deal with only identical and fraternal twins (5). The question now is: are identical twins allergic to the same things? Since identical twins have exactly identical DNA, the sharing of allergies can shed some light on the role of genetics in allergies.

All sorts of food allergies affect eight percent of children and two percent of adults in the United States. Allergic reactions happen because one's immune system overreacts to regular foods that are ordinarily harmless to the general population (7). An allergy recently affecting many children and adults in the United States is an allergy to peanuts. In the last few years, a tremendous number of people have developed this allergy, which seems, in most cases, to be very severe. To be exact, scientists estimate that three million Americans are at risk of having an allergic reaction to peanuts (7). In the past, some have said that the allergy is caused by early exposure to peanuts, either in early childhood or during prenatal stages. Now, many researchers say that a gene causes this allergy.

A study was done on 58 pairs of twins, consisting of 44 pairs of fraternal twins and 14 pairs of identical twins. In every set of twins, at least one of the two had a convincing history of a peanut allergy. The twins were observed for signs of allergic reaction, including hives, wheezing, repetitive coughing, vomiting and diarrhea, within sixty minutes of eating peanuts. The results were as such: 65 percent of the identical twins shared the allergy, while only seven percent of the fraternal twins shared it (3).

Something interesting found was that the allergic symptoms among individual sets of twins could vary. For example, one twin may have a skin rash, while the other twin may experience asthma in reaction to the peanuts. These variations may be attributed to early exposure, environmental factors, infections, medications, and so on (1).

Because of these results, Mt. Sinai School of Medicine claims that genetics accounts for 81.6 percent of the risk of peanut allergy (3). Similarly, a group of British researchers say that, considering genetic and environmental factors, the allergy is inherited 82 to 87 percent of the time. They also say that when ignoring the genetic factors, that percentage drops to 18.99 percent (7).
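One classic way to turn twin concordance rates like these into a rough heritability figure is Falconer's formula, h² = 2 × (c_MZ − c_DZ). The sketch below applies it to the numbers quoted above. This is deliberately simplified: treating raw concordance rates as if they were correlations is only an approximation, and the 81.6 percent figure from Mt. Sinai comes from a more sophisticated statistical model, not from this formula.

```python
# Crude Falconer-style heritability estimate from twin concordance rates.
# NOTE: this is an illustrative approximation, not the method actually
# used in the studies cited above.

def falconer_h2(concordance_mz, concordance_dz):
    """h^2 = 2 * (c_MZ - c_DZ), clipped to the valid [0, 1] range."""
    h2 = 2 * (concordance_mz - concordance_dz)
    return max(0.0, min(1.0, h2))

# Peanut-allergy study: 65% of identical pairs shared the allergy,
# versus 7% of fraternal pairs.
print(falconer_h2(0.65, 0.07))   # 1.0 (the raw value, 1.16, is clipped)

# Dutch asthma/allergy study, using midpoints of the reported ranges
# (50-80% for identical pairs, 25-40% for fraternal pairs).
print(falconer_h2(0.65, 0.325))  # 0.65
```

The fact that the crude formula saturates at its maximum for the peanut data is consistent with the studies' conclusion that genetics plays a very large role, even if the exact percentage depends on the model used.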

In response to this study, a professor of Biochemistry and Molecular Biology at the University of Arkansas Medical School is in the process of creating an allergy-free peanut by genetic engineering (3). If this allergy is, in fact, genetic, then this may be a wonderful alternative to those suffering from the allergy.

Food allergies are not the only types of allergies affecting most people. Asthma is, at times, a severe allergic condition that causes problems in the airways of the lungs. When one has an asthma attack, the muscles surrounding the airways contract, the walls of the airways swell, and mucus is produced inside the airways, all making it very difficult for air to pass through while breathing. An attack can be triggered by animal hair, colds or infections, dust, mold, pollen, cigarette smoke, or even weather, exercise and air pollution. Researchers used to think that asthma was caused solely by environmental factors, for example air quality, but now they are considering the effect of genetics as well. A study took place in Arizona involving 344 families. In families where neither parent had a history of asthma, only six percent of children suffered from asthma. In families where one parent suffered from the condition, 20 percent of children suffered as well. And in families where both parents had asthma, 60 percent of the children had asthma as well. This shows that there is a strong link between asthma and genes (8).

In another study, the Department of Biological Psychology at Vrije Universiteit in Amsterdam studied 3600 pairs of twins from the Netherlands. Those observed consisted of pairs of identical and fraternal twins, all five years of age. The researchers had the parents fill out a questionnaire inquiring of the following: a) the presence of wheezing and coughing in the last twelve months, and b) if the twin children were ever diagnosed with asthma, allergies, hay fever, eczema, bronchitis, or pneumonia. They found that 50 to 80 percent of the time, identical twins shared allergies. However, fraternal twins shared allergies only 25 to 40 percent of the time, which is half of the results of the identical twins (2).

In Australia, the Australian National Health and Medical Research Council Twin Registry studied 3808 pairs of fraternal and identical twins. The twins were of all ages, from children to adults, and were being observed for asthma, wheezing and hay fever. The results were similar to the last study; 65 percent of the identical twins shared allergies whereas only 25 percent of the fraternal twins shared allergies (6).

Another study was done hypothesizing that genetics and environment are both heavily involved in asthma. In each of 84 sets of twins, at least one of the two had a history of asthma. Among these pairs, 39 were identical twins and 55 were fraternal twins. The study showed that among the identical twins, both twins suffered from asthma 59 percent of the time; and among the fraternal twins, both twins suffered from the condition only 24 percent of the time. Looking at the results for the identical twins, 41 percent of the time only one of the two developed asthma. This proves neither that genetics is the main factor nor that the environment is the main factor, but instead that genetics and the environment both play large roles in developing the condition of asthma (8).

The National Jewish Medical and Research Center has been observing identical and fraternal twin children to see if there is a correlation between children with severe allergies and behavioral problems. The behaviors that they are most concerned about are aggressiveness, depression and irritability. The study wants to see if the behaviors are problems because allergies are a nuisance for the children, or if the behaviors and allergies are linked genetically. The Center observed 200 twin children between the years of three and eleven. They found more of a correlation between the allergies and the behaviors among the identical twins than among the fraternal twins. Their claim is that allergies and behavioral problems are both genetic and linked. In fact, they say that genetics accounts for over 70 percent of the relationship between allergies and aggressive behavior. If this is true, then children with allergies may have a higher chance of having behavior problems as well. More so, a genetic link may potentially lead to a treatment for both allergies and behavior problems (4).

All of these studies have tested for the same thing: whether or not allergies are genetic. The results are not exact, but they have one thing in common: allergies among identical twins, who have identical DNA, are shared more often than among fraternal twins, who share only about half of their genes. The percentages of the results for identical twins are clearly not 100 percent. Some may say that the results are close enough to 100 percent to say that genetics is the cause, while others may say that they are not close enough to make any definite conclusions on the matter. Perhaps a more involved study should take place, one that involves extensive research on each specific kind of allergy among twins. Maybe it is possible for some allergies to be genetic while others are products of environment, diseases, and so on. In short, all of these cases suggest that genetics may have something to do with allergies, but at this point, that is all it is: a suggestion. The important thing is to know that genetics and the environment could potentially be the main factors in developing allergies; so if one twin in a set has an allergy, it is a good idea to have the other twin evaluated just in case.


References

1)Twins and Allergies
2)A Study of Asthma and Allergies
3)Genes Cause Most Peanut Allergies
4)Genetic Link
5)Proactive Genetics Inc.
6)Entrez-PubMed
7)AAAAI Patients and Consumers Center
8)Asthma and Genetics


"WAYS OF KNOWING, MODES OF ACTING": THE THERAPEU
Name: Anna Katri
Date: 2003-12-04 07:47:23
Link to this Comment: 7459


<mytitle>

Biology 103
2003 Second Paper
On Serendip

THE LANGUAGE OF PERFORMANCE:
ENGAGING THE SPECTATOR IN INTELLECTUAL
INTERCOURSE WITH THE SPECTACLE


Life, as it is represented through various media, has a brainwashing effect on the spectator: he consumes a fabricated world rather than producing one of his own. The unconscious is constantly repressed, while the conscious is force fed images which basely appeal to the controlled linear processes of the brain. Psychiatrist C.G. Jung writes:

"The source of numerous psychic disturbances and difficulties occasioned by man's progressive alienation from his instinctual foundation, i.e., by his uprootedness and identification with his conscious knowledge of himself, by his concern with consciousness at the expense of the unconscious. The result is that modern man can know himself only in so far as he can become conscious of himself--his consciousness therefor orients itself chiefly by observing and investigating the world around him, and it is to its peculiarities that he must adapt his psychic and technical resources. This task is so exacting, and its fulfillment so advantageous, that he forgets himself in the process, losing sight of his instinctual nature and putting his own conception of himself in place of his real being. In this way he slips imperceptibly into a purely conceptual world where the products of his conscious activity progressively replace reality. Separation from his instinctual nature inevitably plunges civilized man into the conflict between conscious and unconscious, spirit and nature, knowledge and faith, a split that becomes pathological the moment his consciousness is no longer able to neglect or suppress his instinctual side." (1)

The Prozac world we inhabit is a direct result of doctors eager to "fix" or "cure" disorders by administering prescription drugs. These drugs don't cure diseases, but rather numb their symptoms; the patient acts out the daily ritual of dealing with life in a zombie-like trance instead of confronting the horror, terror, and chaos essential to the Nature of the world, so as to better understand the self and the self's place in it. It's easier to turn off the receptors that trigger emotions, ideas, or urges we don't like facing than to explore their origin. This method of treatment is not only dangerous, but frightening, because it threatens the very existence of humanity by crippling the self's internal communication necessary to forming individual identity. This calls for a radical change in the medical health care system (2), where responsibility is placed on doctors to approach a patient's psychosis on equal ground with the rational consciousness which seeks to diagnose and treat it, while challenging patients to reject the petrified idea that the mysterious depths of our selves and our relationship to the world are somehow limited by the frontiers of language and reality.

The biological duality of theater as both a place and an art form (10) consisting of live representations which require the player and spectator to be present in the space and to each other, simultaneously triggers an autonomous unconscious reaction within the spectator's self--which, I argue, is a psychological process of renewal or rebirth of the spectator's spirit resulting from the exploration and emergence in the depths of theatrical ecstasy. The spectator consciously allows himself to entertain the psychotic idea and virtual reality of the spectacle: essentially, he capitulates himself to madness (8).

"The stage is a concrete physical place which asks to be filled, and to be given its own concrete language to speak. I say that this concrete language, intended for the senses and independent of speech, has first to satisfy the senses, that there is a poetry of the senses as there is a poetry of language, and that this concrete physical language to which I refer is truly theatrical only to the degree that the thoughts it expresses are beyond the reach of the spoken language. These thoughts are what words cannot express and which, far more than words, would find their ideal expression in the concrete physical language of the stage. It consists of everything that occupies the stage, everything that can be manifested and expressed materially on a stage and that is addressed first of all to the senses instead of being addressed primarily to the mind as is the language of words...creating beneath language a subterranean current of impressions, correspondences, and analogies. This poetry of language, poetry in space will be resolved precisely in the domain which does not belong strictly to words...Means of expression utilizable on the stage, such as music, dance, plastic art, pantomime, mimicry, gesticulation, intonation, architecture, lighting, and scenery...The physical possibilities of the stage offers, in order to substitute, for fixed forms of art, living and intimidating forms by which the sense of old ceremonial magic can find a new reality in the theater; to the degree that they yield to what might be called the physical temptation of the stage. Each of these means has its own intrinsic poetry."
(3) -Antonin Artaud, 'The Theater And Its Double'

Drawing from the Homeopathic Law of Similars (4) and Jung's statement that a schizophrenic is no longer schizophrenic when he feels understood by someone else (1) -- I nurture the idea that an entertainment of the very "madness" that afflicts the self and is essential to Artaud's organic theater, both illuminates the spectator's understanding of himself and his relationship to the world around him; and provides the self with a "safe" space to indulge the non rationality of language and culture, seeking understanding and therapeutic regeneration through the sacred kinship of player and spectator communicating via the language of gesture and symbolism. Active (whether conscious or unconscious) indulgence in the delusional reality and fantasy of the spectacle frees the spectator's instinctive impulses and challenges his archetypes (8). The result is a fascinating method of communication and web-like interplay between the spectator and player, the spectator and the spectacle, and the spectator's unconscious and conscious being; a suspension of the normal communicative, analytical, and articulative limitations of the brain to allow for understanding from reflection of self in space.

The unconscious, explained in Freudian terms (5), is the source of our motivations, desires, drives, and primitive instincts. It includes all things that are not easily available to awareness, housing those experiences that we cannot bear to confront consciously, which might result from a trauma of sorts. In our lives we are driven to deny and resist these motives, which makes them inaccessible to the conscious state of mind one normally occupies. It is difficult, if not impossible, to know what's "in" our unconscious by thinking about it. The unconscious appears to us in a disguised or fragmented form (6), lacking the coherency or controlled linear processes central to the conscious. True theater, as Artaud describes (3), is organic in nature and boundless in time and space, breaking the barrier of language to create the psychologically disastrous void (9) that suspends all normal limitations of the player and spectator to make them see the world in the most abstract, mystical sense. Freedom from the constraints of self and "being" leaves the spectator present and open to the player for communication in the space, resulting in an exploration of the very nature of the unconscious. The abandonment of conscious existence from the world outside the theatrical performance requires the spectator to adopt the language of the space, living in the theatrical dialogue of gesture, movement, music, and dance, while actively engaging the senses in an open awareness of and communication with the various hieroglyphs, symbols, and sign systems used to communicate the play's larger metaphysical concepts. The biological boundlessness of this theater creates an atmosphere of decadent danger for the spectator. For the player to conjure a virtual reality, she must forge a sacred connection with the bodily presence of the spectator.
This relationship has the potential to turn disastrous (9) for the spectator, as it is a delicate balance of mutual understanding for the purpose of knowledge (1). As master of the unknown, the player brings that which "does not yet exist into being" (3). Thus, she seduces the spectator into sensory intercourse, "sacrificing his own individuality so that it may be assimilated by that of the other" (1), energizing the physical space through the absorption of the spectator. The cruel nature of the spectacle (3) manifests in the empty space of the spectator's mind. Bedazzled, mystified, terrified, and tempted by the disaster (9), the spectator undergoes an exorcism, losing all power over his conscious Other, to essentially "die without disappearing" (3), subjected to the depths of the spectacle.

The language of the spectacle is the same as the language of dreams: in Lacanian terms, it is the ultimate "language of the Self" (7), and a form of internal communication. The content of the dream reveals ideals, aspirations, ambitions, notions of perfection, all of which are in conflict with, or illuminate, the dreamer's tension with the external world. Therefore, the dreamscape of the spectacle gives voice to the repressed ideals of the spectator which clash with cultural reality. The spectator "finds himself living in a psychic modality quite different from his surroundings. He is immersed in a myth world ... His emotions no longer connect with ordinary things, but drop into concerns and titanic involvements with an entire inner world of myth and image. Once lived through on this mythic plane, and once the process of withdrawal nears its end, the reconnection to the specific individual problems must again be encountered and worked upon. The archetypal affect-images await a kind of reinsertion into their natural context in the complexes, and their projective involvements in outer life" (8). As the spectator encounters the shadows of the images produced in the now empty space, he reclaims and reorganizes his self. Forced to confront the ideas and motivations generated by the spectacle, the individual begins to make progress in conquering his own performance anxieties in everyday life.

References

Works Cited:
1) Jung, C.G. The Undiscovered Self. The New American Library: New York, 1957. pp., 64, 92. An excellent introduction to modern psychiatry, Jung was a pioneer (with Freud) in exploring the conscious and unconscious aspects of the human psyche.

2) Paul Grobstein, 'Psychoanalysis, Neuroscience, and Evolution', course forum area on the Serendip web site. Provocative insights and dialogue in response to how "psychoanalysis and neuroscience can work together to improve the mental healthcare system." Explores using theater as a form of therapy, specifically the relationship between spectacle and spectator.

3) Artaud, Antonin. The Theater And Its Double. Grove Press: New York, 1958. pp., 13, 36-8. The figurehead of the avant garde theater movement, Artaud's manifesto claims the nature of the spectacle as magical and terrifying; space that speaks a language of gesture and symbols capable of transcending man beyond the limits of reality. One of the greatest books of all time, it serves as proof of the madness essential to genius.

4) The Law of Similars. Homeopathic medicine is a natural alternative to conventional healthcare. Illnesses are treated in a manner which seeks to understand the phenomenon, treating the body and mind as a whole, restoring its balance and harmony without prescription drugs.

The Homeopathic Law of Similars:
The principle that like shall be cured by like, or Similia similibus curantur. This principle, recognized by physicians and philosophers since ancient times, became the basis of Hahnemann's formulation of the homeopathic doctrine: the proper remedy for a patient's disease is that substance that is capable of producing, in a healthy person, symptoms similar to those from which the patient suffers.
In other words, a substance produces symptoms of illness in a well person when administered in large doses; if we administer the same substance in minute quantities, it will cure the disease in a sick person. Hahnemann suggested that this is because nature will not allow two similar diseases to exist in the body at the same time. Thus homeopaths will introduce a similar artificial disease into the body which will push the original one out. The dose is small so that there is no danger of any long term side effects.

5) Dr. C. George Boeree's Personality Theories, a resourceful site presentation of Freudian theory.

6) Psychoanalysis & Sigmund Freud, another Freud site, but more pertinent to the discussion of dreams, displacement, the repressed, the unconscious, and sublimation.

7) Montage, Realism, and the Act of Vision, a lengthy but fantastic excerpt discussing the psychoanalysis of Jacques Lacan, his idea of the "fragmented body," and the necessity for symbolization in the Nature vs. Culture realm.

8) Robert Couteau's review of John Weir Perry's The Far Side of Madness. Of the four psychoanalytic thinkers drawn on in this paper, Perry's radical theories on hysteria, schizophrenia, and other behavioral disorders deserve careful attention. He believes mental disorders spring from the patient's pre-psychotic personality, and that during psychosis the patient creates and enacts a drama in a fragmented language of myth and symbolism.

9) Blanchot, Maurice. The Writing of the Disaster. New Edition. University of Nebraska Press: 1995. pp., 1-8, 58-9. A great philosopher of Fragmentation of the self and world, and the Nature of the Disaster which plunges the self into the void. For Blanchot, the disaster solves everything.

10) McAuley, Gay. Space in Performance: Making Meaning in the Theatre. University of Michigan Press, Ann Arbor: 1999. pp., 7, 40, 92, 107. One of the heaviest influences on this paper-- the idea of performance space, space in performance, spectacle and spectator, and the importance of a performative language which transcends the play's text in creating the theatrical spectacle. McAuley notes the twisted psychology involved in a player's "conning" the spectator along for the theatrical joy-ride.

Additional Sources:

11) Sue Broadhurst's "Liminal Aesthetics", an intriguing essay on the "aesthetic theorizations" of philosophers and playwrights with regard to experimental theater, or "liminal performance," which the author defines as "being located at the edge of what is possible."


12) Victor Grauer's "Montage, Realism, and the Act of Vision".

13) Richard van Oort's "Performative- Constative Revisited: The Genetics of Austin's Theory of Speech Acts".

14) Art and Pain Abstracts. Estelle Barrett, "Reconciling Difference: Art as Reparation and Healing."


Social Anxiety
Name: Paula Andr
Date: 2003-12-11 19:33:10
Link to this Comment: 7511


<mytitle>

Biology 103
2003 Second Paper
On Serendip


Social Anxiety

A woman hates to stand in line in the grocery store because she's afraid that everyone is watching her. She knows that it's not really true, but she can't shake the feeling. While she is shopping, she is conscious of the fact that people might be staring at her from the big mirrors on the inside front of the ceiling. Now, she has to talk to the person who's checking out her groceries. She tries to smile, but her voice comes out weakly. She's sure she's making a fool of herself. Her self-consciousness and anxiety rise to the roof...(Richards 1) (1).

A student won't attend her university classes on the first day because she knows that in some classes the professor will instruct them to go around the room and introduce themselves. Just thinking about sitting there, waiting to introduce herself to a roomful of strangers who will be staring at her makes her feel nauseous. She knows she won't be able to think clearly because her anxiety will be so high, and she is sure she will leave out important details...The anxiety is just too much to bear---so she skips the first day of class to avoid the possibility of having to introduce herself in class... (Richards 2) (2).

These are just two examples of how people who suffer from social anxiety disorder feel about social situations and everyday interactions. Their fears can be paralyzing.

Social anxiety disorder is the third largest psychological problem in the United States, affecting approximately 15 million Americans every year. It is a widely misunderstood disorder: nearly 90% of people with social anxiety disorder are misdiagnosed, often with schizophrenia, manic depression, clinical depression, panic disorder, or personality disorder (Richards 1-3). Misdiagnosis and undertreatment of anxiety disorders, according to "The Economic Burden of Anxiety Disorders," a study commissioned by the ADAA, cost the United States more than $42 billion a year, of which more than $22.84 billion is linked to the repeated use of healthcare services for symptoms that mimic physical illness. In addition, people with anxiety disorders are three to five times more likely to go to the doctor and six times more likely to be hospitalized for psychiatric disorders than those who do not suffer from anxiety disorders ("Brief Overview of Anxiety Disorders" 2) (3).

Social anxiety disorder can be defined as the persistent fear of one or more social or performance situations in which the person is exposed to unfamiliar people or to possible scrutiny by others, and where exposure to such situations provokes anxiety. People who suffer from social anxiety either avoid these much-feared situations or endure them with severe anxiety or distress. The avoidance, anxious anticipation, or distress interferes tremendously with the person's normal routine at work, in school, during social activities, and/or in relationships. Although people who suffer from social anxiety disorder recognize that this fear is unreasonable or excessive, they cannot merely will themselves to stop having these preoccupations. Because social anxiety disorder has only been officially recognized since 1980, and the problem was not adequately explained until 1987, the definition is contestable and not totally accurate ("DSM-IV Definition of Social Anxiety Disorder" 1-2) (4).

The causes of social anxiety disorder are not fully known and continue to be investigated. However, according to the National Institute of Mental Health, some investigations implicate a small structure in the brain called the amygdala, a central site that controls fear responses. Activity at this site may be responsible for social anxiety symptoms, which include heart palpitations, faintness, blushing, and profuse sweating. Other investigations are exploring whether there is a biochemical basis for the disorder, for example whether heightened sensitivity to disapproval may be physiologically or hormonally based. Lastly, other researchers are investigating environmental influences on the development of social phobia ("National Institute of Mental Health--Facts about Social Phobia" 1) (5).

Although more research needs to be done regarding the causes and characteristics of social anxiety disorder, several treatment options are available. Social anxiety disorder can be treated with medication, psychotherapy, or both. A number of medications originally prescribed for the treatment of depression are now being used to treat anxiety disorders: selective serotonin reuptake inhibitors and monoamine oxidase inhibitors. Selective serotonin reuptake inhibitors act on a chemical messenger in the brain called serotonin; they tend to have fewer side effects than older antidepressants. Monoamine oxidase inhibitors are the oldest of the antidepressant medications; phenelzine, the most commonly prescribed MAOI, is helpful for people with panic disorder and social anxiety disorder. Meanwhile, high-potency benzodiazepines and beta-blockers are used specifically as anti-anxiety medications. High-potency benzodiazepines relieve the symptoms caused by anxiety; they have few side effects but can be addictive, so they are prescribed for short periods of time. Beta-blockers, often used to treat heart conditions, have been found useful for patients suffering from social anxiety disorder. They may be prescribed in advance of an anxiety-producing situation, like an oral presentation, to keep anxiety-related symptoms such as a pounding heart, shaking hands, and other physical symptoms from developing ("National Institute for Mental Health—Anxiety Disorders" 9-10) (6).

Although medication may be very helpful for the treatment of social anxiety disorder, cognitive behavioral therapy seems to be especially effective. The goal of cognitive behavioral therapy is to reduce anxiety by eliminating beliefs or behaviors that help maintain the anxiety disorder. Cognitive behavioral therapy has two components. The cognitive component helps people change thinking patterns that keep them from overcoming their fears. The behavioral component of CBT seeks to change people's reactions to anxiety-provoking situations, with exposure being an integral element. Exposure is a technique used to help people suffering from anxiety disorders confront the things they fear. For example, a person with social phobia may be encouraged to spend time in feared social situations without giving in to the temptation to flee. The exposure technique of CBT is only used when the patient is ready; it cannot be done without the patient's consent, and to be effective it must be done gradually. To be most effective, cognitive behavioral therapy must be tailored to the person's specific anxieties. CBT usually lasts about twelve weeks. It is sometimes conducted in a group (with permission, of course), and its positive effects are noted to last longer after therapy is discontinued than those of medication ("National Institute for Mental Health—Anxiety Disorders" 11-12) (7).

Social anxiety disorder is still widely and seriously misunderstood, both by the medical world and by society in general. It is very important not only to obtain more medical and scientific information on social anxiety disorder, how it develops, and how it can be treated, but equally important to make this information accessible to a larger community. Information alone cannot educate people about what this disorder looks like and what people who suffer from social anxiety feel and think, but it is one step in a larger process. Social anxiety, like several other mental health disorders, carries a stigma, which can only be dismantled if people are made to recognize that these disorders exist and that they can be treated. People need to understand that social anxiety can be overcome. Overcoming social anxiety, or learning how to live with it, is a difficult task, but it is possible to live a "normal" life. Although it is most important for people who suffer from social anxiety disorder to be convinced that overcoming this disorder and its stifling effects is possible, it is equally important for a larger audience to be aware that this disorder exists.


References

1) What is Social Anxiety

2) What is Social Anxiety

3) Brief Overview of Anxiety Disorders

4) Diagnostic Statistical Manual-IV Definition of Anxiety Disorder

5) Facts about Social Phobia

6) Anxiety Disorders

7) Anxiety Disorders


The third leading cause of death amongst teenagers
Name: Ramatu Kal
Date: 2003-12-13 01:50:55
Link to this Comment: 7521

Did you know that suicide is currently the third leading cause of death among teenagers in the United States? (4). In 1992, more teenagers and young adults died from suicide than from stroke, cancer, heart disease, AIDS, birth defects, pneumonia, influenza, and chronic lung disease combined (4). Suicide is definitely a compelling problem among youth in the U.S. today.

It is estimated that 300 to 400 teen suicides occur per year in Los Angeles County, which is equivalent to one teenager lost every day (1). Many concerned people ask, "What is going on?" and "Why is this happening?" Among other things, some suicidal youths experience family trouble, which leads them to doubt their self-worth and makes them feel unwanted, superfluous, and misunderstood. According to one study, 90 percent of suicidal teenagers believed their families did not understand them. Young people reported that when they tried to tell their parents about their feelings of unhappiness or failure, their mother and father denied or ignored their point of view (1). Suicide can be prevented; in fact, prevention efforts have saved over ten percent of teens who have attempted suicide (1). In this paper I will show that although suicide is a serious epidemic among teens in the U.S., it can be prevented.

"I'm depressed." You might say it casually to refer to sadness that engulfs you and then goes away. But depression is also a mental illness that may require help from an experienced professional (1). Depression has been considered the leading cause of teen suicide in the 20th century, affecting approximately eight million teens in North America (2). Recent studies show that more than 20% of adolescents in the general population have emotional problems, and one-third of adolescents attending psychiatric clinics suffer from depression (2). Being a young person in today's world is no easy task; young people have to deal with increasingly difficult decisions and pressures every day. Tragically, some feel they are not able to cope and that there is no one who either cares enough or is able to help them cope with their worries. They become desperate enough to take their own lives.

Some teens who have committed suicide because of depression come from homes with family problems. "Families who use guilt as a means of controlling behavior, make talking honestly and directly, difficult for the teen" (2). Too often parents and other adults criticize the child rather than the behavior. Apparent loss of love contributes to the risk of suicide. This was true in the case of Katja Lewis, who committed suicide at the young age of 19. "Katja was a young woman, searching for the answers, always unsure of herself, looking for love but never seeming to find it. Katja decided life was no more worth living on Tuesday, September 30th 1997, when she took an overdose of anti-depressants and left this world sometime around 3 a.m. She never saw the sun rise again, and she never will" (3).

Divorce, the formation of a new family with step-parents and step-siblings, or a move to a new community can be stressful and can build up self-doubt. In some cases, suicide appears to be a "solution." Seventeen-year-old Charles Burnes committed suicide just two months after his parents divorced. It was said he was never the same after his parents' divorce. He began to slowly withdraw from society until one day he gave up and ended his life (2).

These two stories are just some of the many suicide cases occurring across the country every day. Each year, more than 400,000 teenagers attempt suicide, and thousands of them die. People might ask, "What can we do to prevent this epidemic?" There are many things we as individuals can do to help our friends, spouses, and family members who might be thinking about suicide.

There are many ways teen suicide can be prevented. Psychologists say that parents who feel their child is suicidal or troubled should ask him or her to talk about those feelings. The parent should reassure the teen that he or she is loved, and remind them that no matter how awful their problems seem, they can be worked out. Listen carefully. Do not dismiss. The important thing is to pay attention. Encourage them to talk. Be on their side. Reassure without dismissing (1). It is very important to talk to someone who might be contemplating suicide. Do not accuse people of being suicidal; listen and let them do most of the talking. The important thing is to continue listening: "Bringing up the question of suicide and discussing it without showing shock or disapproval is one of the most helpful things you can do. This openness shows that you are taking the individual seriously and responding to the severity of his or her distress" (2).

Suicide is a problem affecting many teenagers today. Often teens are depressed and do not know who to turn to, so they see killing themselves as a way out. Family issues can be overwhelming, and some teens feel they cannot handle life. As a result, they hurt themselves to reveal how much they are hurting inside. It does not seem right that a teenager who has lived for such a short time would choose to die. Suicide doesn't have to happen. Teens need to feel wanted and to find someone in whom they can confide. It is important that suicidal teens seek help from someone who will help them realize that there is so much more to life than trying to end it. Suicide is an epidemic affecting millions of people. If someone you know is thinking about suicide, find the best way to help them; that might even mean seeking some form of counseling.

Below are some useful facts about teen suicide:

Teen Suicide Facts
Each year 500,000 young adults, aged 15 to 25, attempt suicide.
Each year 5,000 young adults succeed.
Suicide is the third leading cause of death among 15 to 25 year olds.
Suicide is the sixth leading cause of death among 5 to 14 year olds.
Young adult males complete suicide almost twice as often as any other group.
Without treatment, 80 percent of those who attempt suicide are likely to try again.
Teen depression almost always leads to suicidal thoughts.

While the above facts are sobering, here are some encouraging facts about teen depression and suicide:
The number one cause of teen suicide is untreated depression.
Most suicidal teens respond positively to psychotherapy and medication.
Nearly 90 percent of depressed people benefit from medication.
Those contemplating suicide can be "talked out of it."

WWW Sources

1) Teen depression homepage, a rich resource on how to prevent teen suicide

2) Teen depression homepage, a rich resource on causes of suicide

3) Teen depression homepage, a personal story on teen suicide

4) Teen depression homepage, facts about suicide


Menstrual Synchrony
Name: Julia Wise
Date: 2003-12-13 20:48:46
Link to this Comment: 7523


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Generations of women have noticed it: you and your sister, or your roommate, or lover, or mom, get your periods at the same time. It doesn't always happen, but it catches our attention when it does. Female rats living in the same air space ovulate at the same time, and menstruation in monkeys synchronizes with the full moon (7). So is it all in our heads, or is the same pattern present in humans?

The clearest argument against the existence of menstrual synchrony is that since the length of the menstrual cycle varies from person to person (2), two women with different cycle lengths will never synchronize. They may menstruate at the same time, but the next month they will be a little different, the next month more different, and so on. By this argument, synchrony is simply a myth.
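The arithmetic behind this objection can be sketched in a few lines. This is a minimal illustration, assuming two perfectly regular cycles of 25 and 28 days; the lengths are illustrative, not data from any study.

```python
# Sketch of the fixed-cycle objection: two perfectly regular cycles of
# different lengths drift steadily out of step.
# The 25- and 28-day lengths are illustrative assumptions.

def period_starts(cycle_len, n):
    """Day numbers on which the first n periods begin, counting from day 0."""
    return [k * cycle_len for k in range(n)]

a = period_starts(25, 5)   # woman with a 25-day cycle
b = period_starts(28, 5)   # woman with a 28-day cycle
gaps = [day_b - day_a for day_a, day_b in zip(a, b)]
print(gaps)  # → [0, 3, 6, 9, 12]: the gap grows by 3 days every cycle
```

Under this idealized model the two women are fully back in phase only every 700 days (the least common multiple of 25 and 28), which is the intuition behind the "synchrony is a myth" position.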

I cannot believe this argument, since it assumes that menstruation can be graphed and analyzed like a sine wave. Human bodies rarely adhere to perfectly timed schedules. Many women have irregular periods, and the regularity of the menstrual cycle changes at different stages of life (3). So if a woman with a cycle of 25 days and another with a cycle of 28 days live together, they might both shift to a cycle of 26 or 27 days. In this way, synchronization would still be quite possible.

So if this phenomenon does exist, what explanation can there be for it? One theory is that lunar cycles may have some connection to the pattern. At first this makes some sense, since both cycles happen about thirteen times each year. However, a study on the Dogon people of Mali found that although they had no electricity and spent most nights outdoors, thus being as likely as anyone to be affected by the light of the moon, menstrual cycles among the Dogon did not match up to any lunar phase. Another theory was that synchronization might be due to the same factor that causes pendulum clocks near each other to tick at the same time. This was later shown to be purely mechanical, though, with the swinging of the heaviest pendulum merely rocking the shelf a little and throwing off the beat of the other clocks (1).

The most likely theory is some kind of hormone change. Women's menstrual cycles respond to contact with men, becoming shorter and more regular. So rather than a mechanical synchronization, like pendulum clocks, the cause is more likely chemical. In 1971, Martha McClintock published a study about the 135 women in her dorm at Wellesley College. She found that the synchronization of menstruation between roommates and close friends did increase after the women began living together. McClintock's explanation was pheromones. She co-authored a followup experiment, exposing women to chemical compounds from the armpits of other women. She concluded that this did alter menstruation (4).

There are a number of problems dealing with the statistics of this and other experiments, though. The data can be interpreted to either support or negate McClintock's conclusion, depending on how it is analyzed (4).

Why synchronize? One possible benefit of simultaneous ovulation for the whole population is simultaneous birthing. When female rats give birth at the same time, their pups are significantly healthier and more likely to survive. Certain times of the year may be better for births, as when lambs are born in spring rather than fall. So synchrony may have developed because it is helpful in raising healthy young (6).

The research I have done has raised more questions than it has answered. Is the math used in these studies wrong? Most articles I found denouncing the theory were written by men; most supporting it (or even denouncing it but wishing it were true) were by women. So is this just something women want to believe because it would be cool and bring us closer together? Also, according to McClintock, some women responded strongly to other women's pheromones, while others did not respond at all (6). Does this mean that it is not strictly group behavior but leader/follower behavior, with some women's cycles setting the trend for the others? If so, does this chemical leadership correlate to any kind of social behavior, like alpha females among wolves? My conclusion can only be that despite all those sex-ed videos from seventh grade, menstruation is still awfully confusing.

References

1) "Blood, Bread, and 'Menstrual Mind'?", dealing with Judy Grahn's book on menstruation
2) "Convergence and Divergence of Menstrual Rhythms", analyzing the math used in Martha McClintock's study
3) "Menstrual Cycle Length as Function of Age"
4) "Do the Menstrual Cycles of Women Living Together Tend to Synchronize?"
6) "Menstrual Synchrony", an interview with Martha McClintock
7) "Converging Menstrual Cycles"


Why Gamble?
Name: Enor Wagne
Date: 2003-12-14 21:06:34
Link to this Comment: 7527


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Why Gamble?

For centuries, people have indulged in different types of gambling: poker, horse races, bingo, lotteries, and slot machines. Gambling has seduced almost everyone between the ages of sixteen and ninety. Before turning eighteen, the legal age for casino and horse race admittance, younger people make monetary bets on football games and high school stunts. Gambling is even more prevalent today than it was yesterday, with the added attraction of online casinos offering jackpots equivalent to twenty years' salary in exchange for a credit or debit card number. Gambling was suppressed in the 1920s as a result of Prohibition, and because of this it will forever lure people into its taboo trap. Gambling as sport is hard to resist because it offers immediate gratification. Not only is there a chance that you may quadruple the amount of money that you lay down, a literal payoff, but there is also a feeling of hope, an alternate limbo between reality and fantasy that can be translated into a sort of mental payoff. The question is: is it all about the money?

It couldn't be all about the money, unless the general public were extremely stupid. The odds of winning the lottery are lower than the odds of being struck by lightning (1 in 649,739) or of being killed by a terrorist attack abroad (1 in 650,000) (7). It has been said, "If you bought 100 tickets a week your entire adult life, from the age of 18 to 75, you'd have a 1 percent chance of winning the lottery" (7). A number of psychological studies indicate that the desire to play the lottery has more to do with a person's inability, or unwillingness, to calculate the total amount of their own money spent over time on these dollar tickets. The hope and fantastic feeling they receive is worth more than the dollar they give the 7-11 clerk at that time.
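The quoted one-percent figure is easy to sanity-check. The sketch below assumes a hypothetical jackpot probability of 1 in 30 million per ticket, since the source does not state the actual odds; with that assumption the numbers come out close to the claim.

```python
# Back-of-the-envelope check of the "1 percent chance" claim.
# The per-ticket jackpot probability is an assumed figure (1 in 30 million);
# real odds vary by lottery and are not given in the source.
p_win = 1 / 30_000_000          # assumed probability of winning per ticket
tickets = 100 * 52 * (75 - 18)  # 100 tickets a week from age 18 to 75

# Chance of at least one jackpot across all tickets bought
p_at_least_one = 1 - (1 - p_win) ** tickets
print(tickets, round(p_at_least_one, 4))
```

With these assumptions the result is just under one percent over roughly 300,000 tickets, consistent with the quoted claim, and a vivid reminder of how little the per-ticket odds improve with volume.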

Casino games create a different sensation. Whether it be cards, slots, or dice, after being seated at a game for an hour or two there will generally be a win of some kind. Usually that win is small. It serves the person, or the brain, as a sort of reward, and the reward entices the person to continue the game so as to get another reward (7).

The basis for this reward is biological. Research done at Massachusetts General Hospital has shown that prize money induces brain activity similar to that induced by food and drug rewards. The scientists measuring this brain activity compared it with giving a cocaine addict an infusion of cocaine (2). An experiment was set up in which the brain activity of the subjects was measured while they gambled. "Each subject was offered one of three spinners: a 'good spinner' offered them a chance to earn $10, $2.50, or nothing; an 'intermediate spinner' offered $2.50, $0, or -$1.50; and a 'bad spinner' let them win nothing or lose, -$1.50 or -$6." (3) The brain activity was measured with high-field functional magnetic resonance imaging (fMRI) during the six seconds of spinning and again after the spin. The results showed that brain activity was strong, moderate, or low in accordance with the type of spinner - good, intermediate, or bad. The proportions always matched the expected brain activity. The scientists concluded that money serves as the same type of reward to humans as do drugs and food; it sets into motion a reward mechanism in the brain, providing a stimulus relative to the amount of reward or loss taking place. "The similarity suggests that a common brain circuitry is used for various types of rewards." (3)

Taking the conclusion of this experiment to be true, there still remains an unsettling question pertaining to gambling and brain circuitry: why do some people gamble more than others? At first I searched for demographic data to support a hypothesis that some group of people gambled more than others. However, there simply isn't much discrimination when it comes to gambling.

A government-funded study by the National Opinion Research Center showed that there is no gender gap in gambling: the 1998 statistics showed that 49% of gamblers were women and 51% were men (1). The survey showed that people of all ages gamble. Some distinctions emerged, such as people between thirty and sixty tending to gamble with more money than the younger and older, but that seems natural because that range probably earns the highest salaries. It also found that those under eighteen tended to play less in casinos, lotteries, and horse races, but that is because they were not allowed in; instead, those under eighteen were shown to make more wagers outside of gambling facilities than the other age groups. Depending on the game, there seemed to be a fairly even distribution of race among gamblers. The bottom line: the desire to gamble does not depend on any specific background, gender, age, or culture. It depends on the human desire to gain monetary pleasure, to get something for little to nothing, to be rewarded via dollars rather than food or drugs.

The demographic statistics and equalities listed above still do not account for why some crave gambling more than others. Distinctions have been made among gamblers, in the following categories: non-gambler, low-risk gambler, at-risk gambler, problem gambler, and pathological gambler (1). The desire to gamble becomes increasingly obsessive as the levels progress.

A pathological gambler, according to the DSM-IV criteria, is constantly preoccupied with gambling, gambles with increasing amounts of money over time as tolerance develops, cannot stop gambling, gambles as an escape, attempts to 'break even' after having lost money, lies constantly to friends and family about gambling, sometimes commits illegal acts to support gambling, risks significant relationships, jobs, or education for gambling, and relies on the financial help of others to be 'bailed out' of situations caused by gambling (1). Why are these people so obsessed with gambling that it takes over their lives? It has been hypothesized that pathological gamblers have dysfunctional reward pathways. "When the pathways function correctly, one important result is a release of dopamine, a neurotransmitter that can stimulate pleasurable feelings." Pathological gamblers have been shown to have lower activity in an enzyme that breaks down neurotransmitters, which may create a problem for serotonin distribution. Researchers have also identified certain genetic configurations more frequently in pathological gamblers, a variation which may be responsible for the deficient reward pathway (4). Medication prescribed to some of the pathological gamblers who were tested increased their serotonin levels and seemed to have positive effects on their ability to resist the urge to gamble.

Many equate the pathological desire to gamble with a problem in the decision-making area of the brain, a constant lapse in judgment so to speak. The areas of the brain associated with the decision-making process are the middle frontal, inferior frontal and orbital gyrus. (4)

While this neurological analysis may offer some understanding of why people gamble for monetary reward, it does not explain the bigger relationship between human beings and gambling. Gambling does not necessarily need to involve money; it can instead be translated into risk-taking generally. People gamble every day, whether it be tasting a new food or skipping an important business meeting. It seems that gambling is a part of life necessary to perpetuate the human species.

Diversification, a part of natural life, involves adapting to different environments and niches. Say a bee only acquired nutrients from one specific flower, never venturing out to sample other types of pollen; what would happen? Suppose one winter that specific type of flower failed to survive, or some sort of spontaneous extinction occurred: all the bees that fed on this flower would become extinct as well. The same sort of thing may occur if a person moved to a different country where the food looked completely different. In order to stay alive, that person would have to take a chance on a new type of diet. Human beings, as well as a majority of the rest of the animal kingdom, are inclined to diversify and adapt to new surroundings in order to stay strong and able to perpetuate their species.

The same notion of adaptation for survival applies to drastic temperature changes and their effect on the body (5). "Although shell temperature is not regulated within narrow limits the way internal body temperature is, thermoregulatory responses do strongly affect the temperature of the shell, and especially its outermost layer, the skin." The temperature of the environment is directly related to the thickness of this shell. If the shell is needed to conserve heat, it may expand to several centimeters beneath the skin's surface; if the environment is warm, the shell will tend to be only about one centimeter thick. This shell of warmth protects people should they wish to change environmental settings, and lets the same species survive in many different locations. The complex nature of the human body responds well to our desire to gamble, to diversify, to extend our minds and take risks.

Whether it be monetary, behavioral or just plain desire to risk, humans are drawn towards the new and the chancy. It is the danger of loss and the thrill of life that keeps us breathing.

References

1)Government issued National Gambling Study, Approximately one hundred pages of gambling statistics and surveys issued to different casinos across the nation. Provides the DSM-IV criteria for Pathological Gambler. Also explains the different categories of gamblers.

2)Science Daily Homepage, An article containing neurobiological conclusions about gamblers.

3)Scientific American Homepage, More neurobiological findings; explains a relevant experiment which relates to the study of gambling in relation to biology.

4)The Wager homepage, Denotes the hypothesized difference between gamblers and pathological gamblers in biological terms.

5)Study of Temperature and Human Beings, this article discusses the adaptive mechanisms of the human body.

6)Mathematical Statistics about Gambling, explains statistics for winning in a mathematical fashion.

7)Gene Expression homepage, Statistical information about the likelihood of succeeding as a gambler.

8)Gambling, Biology, and Psychics, Article offers alternative suggestions about why one may gamble.

9)Could Gambling Save Science?, Article links gambling to science as a matter of human interest.

Book References

1)Alvarez, A. The Biggest Game in Town. New York: Chronicle Books, 2002.

2)Brunson, Doyle. Doyle Brunson's Super System. Cardoza Pub, 1979.

3)Dostoevsky, Fyodor. The Gambler. New York: Viking Press, 1966.


Earworm: The Song That Won't Leave Your Head
Name: Diana E. M
Date: 2003-12-14 23:31:34
Link to this Comment: 7529


<mytitle>

Biology 103
2003 Second Paper
On Serendip

I woke up and I was mortified. It was the first thing in my mind when I opened my eyes, and I just could not believe this silly little thing had become as involuntary as breathing. I tried another song, but it would come back without my realizing it. I walked to work and it came with me, I sat in class and it spoke louder than my professor's voice, I even took a nap and it kept me awake. I had a stupid song stuck in my head and it wouldn't go away.


What is it that happens in the brain that causes this annoyance to go on for days? And why does it remain in the head even when it's driving us so crazy that we want to scream in pain? According to research done by Professor James Kellaris at the University of Cincinnati (1), getting songs stuck in our heads happens to most if not all of us. His theory suggests that certain songs create a sort of "cognitive itch" - the mental equivalent of an itchy back - so the only way to 'scratch' a cognitive itch is to rehearse the responsible tune mentally. The process may start involuntarily, as the brain detects an incongruity or something "exceptional" in the musical stimulus. The ensuing mental repetition may exacerbate the "itch," such that the mental rehearsal becomes largely involuntary, and the individual feels trapped in a cycle from which they seem unable to escape.


But why does this happen? Apparently, repetition, musical simplicity, and incongruity are partly responsible (2). A repeated phrase, motif, or sequence might be suggestive of the very act of repetition itself, such that the brain echoes the pattern automatically as the musical information is processed. Simpler songs appear more likely to make your brain itch - like Barney's "I love you, you love me" tune - but at the same time, a song that does something unexpected can cause the brain to latch on because of whatever unconscious cognitive incident occurred at that very moment. These traits of simplicity, repetition, and circular composition1 are potent because we don't remember songs as one complete image, like a picture, but as temporal sequences that unfold in our brains (3). In other words, we don't "see" an entire song in our head; instead, one image (or line in a song) triggers the subsequent one. If there is a circular quality to a song, it ends up being a kind of neural network loop that just keeps cycling around and around.


However, it has also been argued that there is a tendency to remember an incomplete task rather than a completed one. So when a chorus keeps looping around in our mind, preventing us from remembering how a song ends, we experience the Zeigarnik effect2 - where we just want to complete the song. The brain has a natural tendency to focus on incomplete problems, such as getting through the song and, in a sense, obsessing about it. This is not done consciously. It's simply a kind of an unconscious need to complete a problem.


The typical episode lasts from a few hours (in 55% of people) to a full day (23%). About a quarter are haunted by songs or jingles for several days (17%) or longer than a week (5%) (4). Interestingly, women report more irritation and frustration as a result of earworms - the term used for "a song stuck in your head" - as do people who are constantly exposed to music. Moreover, there may be a connection between earworms and a person's level of neurosis, which may cause them to react more quickly to the onset of an earworm (6). Neurotic individuals experiencing an earworm are often agitated by wondering how long it is going to last, or simply predispose themselves to the idea that it is going to remain there for a while. The seventeen percent or fewer of the individuals in Kellaris's sample whose earworms lasted three days or more all presented signs of mild to severe neurosis (1). This led the investigators to conclude that susceptibility to neurosis increases the chances of being "annoyed" for longer periods of time.


So how do we keep an earworm from wriggling through our heads? Research shows that there really isn't a proven psychological tonic, but deconstruction is a strategy often recommended by cognitive psychologists (3). The trick is to ask ourselves, "Why is this song in my brain? What do I hate or like about it? And what the heck do those lyrics mean?" Apparently, the reason it's in our heads is that we've got a kind of cognitive itch that leads the brain to conclude that something is not quite complete, so we try to complete the task in some other way, namely, by repetition. So, it would seem reasonable to think that if we can't remember the words or the complete song, we can try to analyze it or think about the song in another way so that we have a sense of accomplishment - that we've actually resolved some question about the song (4). When the brain thinks, "OK, well, I've done my job," it creates a sense of closure which may free us from the affliction, allowing our brains to move on to more important matters.


The research is of particular interest both to the pop industry looking to boost sales and to advertisers, who often use jingles to get their brand name stuck in the heads of listeners. For both advertising and pop music purposes, it is of great value to know that something once heard is not forgotten quickly or easily. This is why TV and radio commercials use jingles with "catchy" phrases that are "fun" to sing along with (1). These are often short and concise, allowing the consumer to associate the tune with a particular brand or product. For advertisers, being able to invade the consumer's mind in such a way that they will pick one brand over another based on a silly jingle is exactly what makes the big difference - say, buying "Mr. Clean, Mr. Clean" as opposed to any other disinfectant, or "cha cha cha, Charmin" instead of Kleenex tissue paper. As for songs, the issue is a little more random. Though it is true that one is bound to repeat Britney's "oops, I did it again" as opposed to a less popular song, the chances of being "infected" are the same. Equally, the conscious or unconscious significance a particular song has for an individual could possibly lead to the involuntary repetition of a song's segment. This is why Kellaris recommends analyzing the relation a song could have with our personal lives.


Just a few weeks back, I came across Rockwell Church's new hit, "Chemical." I liked the song so much that I played it on repeat over and over again, singing it every time it started to play. I was aware of the fact that the song did hit some emotional nerves, which is why I wanted to hear it repeatedly. The song was a kind of pseudo-therapy that allowed me to cope with whatever I was feeling at the moment. However, after the twentieth time, I got so tired of it that I turned it off, but unfortunately, it was now permanently lodged in my brain - I was just unable to shake it off.

As I began to research this paper, I became aware that I was part of the 5 to 17 percent of the population who experience severe cases of cognitive itch - not only with songs, but with poems, sentences, and images. Somehow, these just get wedged in my brain without my realizing it and stay there for days on end. Nonetheless, I must admit that the Rockwell Church earworm was completely my fault, and I was so fascinated by it that I even went on to write my second web paper on it - the effects of chemicals in interpersonal attractions. So much for an obsessive mind! When I asked Professor of Biology Paul Grobstein what the heck was happening in my brain, he gave me a similar explanation to this paper, and then went on to candidly say, "you're just neurotic." That too was true.
I have not managed to complete my 5-page assignment without several short-lived earworms. I do listen to music a lot when typing, but I must admit I was predisposed to thinking I was inevitably going to suffer from brain itch while I completed my essay. Nevertheless, I was forced to think of some tunes to illustrate the evils of earworms and the different types there are. So, in payback for what has already been lodged in my head for the sake of this paper, I shall pass it on to you. Sing along...
"This is the song that doesn't end. And it goes on and on my friends. Some people, started singing it not knowing what it was, and they continued singing it forever just because this is the song that doesn't end and it goes on and on my friends, some people started singing not knowing what it was and they continued singing it forever just because this is the song that doesn't end and it goes on and on my friends, some people started singing not knowing what it was, and they continued singing it forever just because this is..."

-Lamb Chop, Charlie Horse, Hush Puppy, and Shari.


1 Which reminds me of Lamb Chop's diabolic tune to the "never ending song."
2 Named after the Russian psychologist Bluma Zeigarnik, who described the effect in 1927.


References

1) Brain Itch Keeps Song in the Head, Wednesday, 29 October 2003

2)Why Songs Get Stuck in Our Heads All Day?

3) Kevlar, Cognitive Itch, 2003

4) Gigson Stacey, No Cure For Songs Stuck in Your Head

5) Ask us at U of T

6) The Wrong Song Stuck In Your Head

7) Annoying Songs Stuck in the Head


Female Genital Mutilation
Name: La Toiya L
Date: 2003-12-14 23:44:14
Link to this Comment: 7530


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Female Genital Mutilation (FGM), also known as female circumcision, is a destructive and invasive procedure involving the removal or alteration of the female genitalia. The procedure is carried out at a variety of ages, ranging from shortly after birth to some time during the first pregnancy, but most commonly occurs between the ages of four and eight. There are three main types of FGM that are practiced: Type I (Sunna circumcision), Type II (Excision), and Type III (Infibulation). These three operations range in intensity, from the "mildness" of Type I to the extreme of Type III.

The practice occurs in Africa, the Middle East, parts of Asia, and in immigrant communities in Europe and North America. An estimated 135 million of the world's girls and women have undergone genital mutilation, and two million girls a year are at risk - approximately 6,000 per day, about one every 15 seconds (1). Although Female Genital Mutilation predates Islam and is not practiced by the majority of Muslims, it has acquired a religious dimension. FGM is in fact a cross-cultural and cross-religious ritual: in Africa and the Middle East it is performed by Muslims, Coptic Christians, members of various indigenous groups, Protestants, and Catholics, to name a few.

The type of mutilation practiced, the age at which it is carried out, and the way it is done vary according to a variety of factors, including the woman or girl's ethnic group, the country she lives in, whether she lives in a rural or urban area, and her socio-economic background. The first and "mildest" type of FGM is called "sunna circumcision" or Type I. The term "Sunna" refers to tradition as taught by the prophet Muhammad. This specific procedure involves the removal of the tip of the clitoris and/or its covering (prepuce). The second type of FGM, Type II, also known as clitoridectomy, involves the partial or entire removal of the clitoris, as well as the scraping off of the labia majora and labia minora. This procedure is often used in countries that are prohibited from using the more extreme procedures. Clitoridectomy was invented by Sudanese midwives as a compromise when British legislation forbade the more extreme operations in 1946. (2)

Infibulation (a.k.a. Pharaonic circumcision) is the third and most drastic of the FGM procedures; it has been banned in some countries but still exists in others. In this most extreme form, the clitoris and the adjacent labia minora and labia majora are removed, and the urethral and vaginal openings are cut away. The vagina is then stitched or held together with thorns, leaving a small opening for menstruation and urination. To engage in intercourse the woman is then cut open by her husband on their wedding night. The woman's vagina is often restitched if her husband leaves on a long trip, to secure fidelity. Female Genital Mutilations are mostly done without anesthesia and in unsanitary conditions, using instruments such as unsterilized razor blades, tin lids, scissors, kitchen knives, and broken glass (3). These instruments are frequently used on several girls in succession and are rarely cleaned.

In various cultures there exist different "justifications" for Female Genital Mutilation, including preserving cultural and gender identity and controlling women's sexuality. Other arguments supporting FGM are that it will reduce promiscuity, increase cleanliness, and enhance femininity. In cultures where FGM is common, marriage prospects are higher for a woman who has undergone the procedure. Many people in FGM-practicing societies, especially traditional rural communities, see FGM as a large part of their cultural identity and regard it as so normal that they cannot imagine a woman who has not undergone mutilation. Jomo Kenyatta, the late President of Kenya, argued that FGM was inherent in initiation, which is in itself an essential part of being Kikuyu, to such an extent that "abolition... will destroy the tribal system" (4).

Societies that practice infibulation are strongly patriarchal. Preventing women from indulging in "illegitimate" sex, and protecting them from unwilling sexual relations, are vital because the honor of the whole family depends on it. Infibulation does not, however, provide a guarantee against "illegitimate" sex, as a woman can be "opened" and "closed" again. Unmutilated women are often regarded as unclean and are not allowed to handle food and water. In some areas of Africa, FGM is delayed until two months before a woman gives birth. This practice is based on the belief that the baby will die if he or she comes into contact with the mother's clitoris during birth, or will be hydrocephalic (born with excess cranial fluid) if its head touches the clitoris. In some societies the clitoris may represent other harmful things, including:

--The clitoris is dangerous and must be removed for health reasons. Some believe that it is a poisonous organ that can cause a man to sicken or die if contacted by a man's penis. Others believe that men can become impotent by contacting a clitoris.

--Bad genital odors can only be eliminated by removing the clitoris and labia minora.

--FGM prevents vaginal cancer.

--An unmodified clitoris can lead to masturbation or lesbianism.

--FGM prevents nervousness from developing in girls and women.

--FGM prevents the face from turning yellow.

--FGM makes a woman's face more beautiful.

--If FGM is not done, older men may not be able to match their wives' sex drive and may have to resort to illegal stimulating drugs.

--An intact clitoris generates sexual arousal in women which can cause neuroses if repressed.
(1)
The most common justification for FGM is its tradition. Many of the practitioners are unwilling to change their customs and are often kept ignorant of the real implications of FGM, and the extreme health risks that it involves. These beliefs and baseless fears are among the many justifications of this appalling procedure.

Aside from the obvious pain and torture that these women must go through, there are several other serious and fatal effects. During the procedure, pain, shock, hemorrhage, and damage to the organs surrounding the clitoris and labia can occur. The use of the same unsterilized instrument on several girls can spread infections, including HIV, which in most cases can lead to AIDS. Extreme discomfort and pain result from the chronic infections, intermittent bleeding, abscesses, and small benign tumors of the nerve that commonly follow clitoridectomy and excision. Infibulation has even more serious long-term effects: chronic urinary tract infections, stones in the bladder and urethra, kidney damage, reproductive tract infections resulting from obstructed menstrual flow, pelvic infections, infertility, keloids (raised, irregularly shaped, progressively enlarging scars), and dermoid cysts. In order for sexual intercourse to take place, the opening of the vagina must be gradually and painfully dilated, and in most cases cutting is necessary. The sewn vagina is also cut open during pregnancy so that the child may pass through, and the women are then sewn back together to make them "tight" for their husbands. The constant cutting and restitching can result in tough scar tissue.

The issue of FGM has received increasing global attention and controversy over the past several years for various reasons. It is true that the practice must be dealt with in order to save these women and promote human rights, but we must take into consideration the strong and binding presence of cultural importance and implications, which makes the matter much harder to resolve in a way that is effective. Although Female Genital Mutilation is seen as barbaric, we must shed light on the fact that up until a few decades ago, the clitoris was still believed to be a very dangerous part of the body. "Elimination of clitoral sexuality is a necessary precondition for the development of femininity." (Sigmund Freud) As recently as 1979, these surgeries were performed on women in the United States. In this sense the controversy regarding FGM can be seen as a form of hypocrisy and cultural imperialism, which leads people to ask, "What right do others have to criticize our way of life?" and to challenge Western practices such as giving people up for adoption, sending parents to elderly homes, or conducting abortions. This forces me to think open-mindedly about this issue. The first place we should start, as people who are concerned, is education. Members of the African societies who perform these procedures must first know of the fatal consequences. Educating these communities and higher officials will shed light on the risk and danger involved, and then those communities can work on shaping a society which keeps its culture, traditions, and rituals but reexamines the bloodiness and torture involved in them. To simply eradicate these rituals won't do much good, because we can erase the practice but we can't erase the mentalities attached. Men and other authoritative figures will continue to instill these same values of superiority and dominance, thus creating a majority where women will be caught between the social norms of the "educated" and those of the majority. 
The struggle is not getting rid of these horrendous practices, more importantly the struggle involves reshaping the consciousness involved, which isn't so easy a task.

(Additional Material)

Testimony

"I was genitally mutilated at the age of ten. I was told by my late grandmother that they were taking me down to the river to perform a certain ceremony, and afterwards I would be given a lot of food to eat. As an innocent child, I was led like a sheep to be slaughtered.

Once I entered the secret bush, I was taken to a very dark room and undressed. I was blindfolded and stripped naked. I was then carried by two strong women to the site for the operation. I was forced to lie flat on my back by four strong women, two holding tight to each leg. Another woman sat on my chest to prevent my upper body from moving. A piece of cloth was forced in my mouth to stop me screaming. I was then shaved.

When the operation began, I put up a big fight. The pain was terrible and unbearable. During this fight, I was badly cut and lost blood. All those who took part in the operation were half-drunk with alcohol. Others were dancing and singing, and worst of all, had stripped naked.

I was genitally mutilated with a blunt penknife.

After the operation, no one was allowed to aid me to walk. The stuff they put on my wound stank and was painful. These were terrible times for me. Each time I wanted to urinate, I was forced to stand upright. The urine would spread over the wound and would cause fresh pain all over again. Sometimes I had to force myself not to urinate for fear of the terrible pain. I was not given any anesthetic in the operation to reduce my pain, nor any antibiotics to fight against infection. Afterwards, I hemorrhaged and became anemic. This was attributed to witchcraft. I suffered for a long time from acute vaginal infections."

Hannah Koroma, Sierra Leone
(4)
.

References


1)Female Genital Mutilation In Africa, The Middle East & Far East, Nice site, but focused around breaking down religious misconceptions.

2)Female Genital Cutting (FGC): An Introduction, This Site is basic, yet informative.

3)Female Genital Mutilation; Contemporary Human Rights Issues, Short and to the point.

4)What is female genital mutilation?, This site is pretty good because it is organized around very common yet helpful and interesting questions.



<mytitle>

Biology 103
2003 Second Paper
On Serendip

Female Genital Mutilation (FGM), also known as female circumcision, is a destructive and invasive procedure involving the removal or alteration of female genital. The procedure is carried out at a variety of ages, ranging from shortly after birth to some time during the first pregnancy, but most commonly occurs between the ages of four and eight. There are three main types of FGC that are practiced: Type I (Sunna circumcision), Type II (Excision), and Type III (Infibulation). These three operation range in intensity, from the "mildness" of Type I, to the extreme Type III.

The practice occurs in Africa, the Middle East, parts of Asia, and in immigrant communities in Europe and North America. An estimated 135 million of the world's girls and women have undergone genital mutilation, and two million girls a year are at risk - approximately 6,000 per day - about one every 15 seconds. (1) Although Female Genital Mutilation predates Islam and is not practiced by the majority of Muslims, it has acquired this religious dimension. However, FGM is a cross-cultural and cross-religious ritual. In Africa and the Middle East it is performed by Muslims, Coptic Christians, members of various indigenous groups, Protestants, and Catholics; to name a few.

The type of mutilation practiced, the age at which it is carried out, and the way its done varies according to a variety of factors, including the woman or girl's ethnic group, what country they are living in, whether in a rural or urban area and their socio-economic background. The first and "mildest" type of FGM is called "sunna circumcision" or Type I. The term "Sunna" refers to tradition as taught by the prophet Muhammad. This specific procedure involves the removal of the tip of the clitoris and/or its covering (prepuce). The second type of FGM, Type II also known as clitoridectomy, involves the partial or entire removal of the clitoris, as well as the scraping off of the labia majora and labia minora. This procedure is often used by countries that are prohibited from using the more extreme procedures. Clitoridectomy was invented by Sudanese midwives as a compromise when British legislation forbade the more extreme operations in 1946. (2)

Infibulation (also known as Pharaonic circumcision) is the third and most extreme form of FGM; it has been banned in some countries but persists in others. In this procedure the clitoris and the adjacent labia minora are removed, and the labia majora and the tissue around the urethral and vaginal openings are cut away. The vagina is then stitched or held together with thorns, leaving a small opening for menstruation and urination. To engage in intercourse the woman is cut open by her husband on their wedding night, and her vagina is often restitched if he leaves on a long trip, to secure fidelity. Female Genital Mutilations are mostly performed without anesthesia and in unsanitary conditions, using instruments such as unsterilized razor blades, tin lids, scissors, kitchen knives, and broken glass. (3) These instruments are frequently used on several girls in succession and are rarely cleaned.

In various cultures there exist different "justifications" for Female Genital Mutilation, including preserving cultural and gender identity and controlling women's sexuality. Other arguments supporting FGM are that it reduces promiscuity, increases cleanliness, and enhances femininity. In cultures where FGM is common, marriage prospects are higher for a woman who has undergone the procedure. Many people in FGM-practicing societies, especially traditional rural communities, see FGM as a large part of their cultural identity and regard it as so normal that they cannot imagine a woman who has not undergone mutilation. Jomo Kenyatta, the late President of Kenya, argued that FGM was inherent in initiation, which is in itself an essential part of being Kikuyu, to such an extent that "abolition... will destroy the tribal system". (4)

Societies that practice infibulation are strongly patriarchal. Preventing women from indulging in "illegitimate" sex, and protecting them from unwanted sexual relations, are vital because the honor of the whole family depends on it. Infibulation does not, however, guarantee against "illegitimate" sex, as a woman can be "opened" and "closed" again. Unmutilated women are often regarded as unclean and are not allowed to handle food and water. In some areas of Africa, FGM is delayed until two months before a woman gives birth. This practice is based on the belief that the baby will die if it comes into contact with its mother's clitoris during birth, or will be hydrocephalic (born with excess cranial fluid) if its head touches the clitoris. In some societies the clitoris may represent other harmful things, including:

--The clitoris is dangerous and must be removed for health reasons. Some believe it is a poisonous organ that can cause a man to sicken or die if his penis touches it; others believe that contact with the clitoris can make men impotent.

--Bad genital odors can only be eliminated by removing the clitoris and labia minora.

--FGM prevents vaginal cancer.

--An unmodified clitoris can lead to masturbation or lesbianism.

--FGM prevents nervousness from developing in girls and women.

--FGM prevents the face from turning yellow.

--FGM makes a woman's face more beautiful.

--If FGM is not done, older men may not be able to match their wives' sex drive and may have to resort to illegal stimulating drugs.

--An intact clitoris generates sexual arousal in women which can cause neuroses if repressed.
(1)
Aside from the previously stated beliefs, the most common justification for FGM is simply tradition. Many practitioners are unwilling to change their customs and are often kept ignorant of the real implications of FGM and the extreme health risks it involves. These beliefs and baseless fears are among the many justifications for this appalling procedure.

Aside from the obvious pain and trauma these women must endure, there are several other serious and even fatal effects. During the procedure, pain, shock, hemorrhage, and damage to the organs surrounding the clitoris and labia can occur. The use of the same unsterilized instrument on several girls can spread infections, including HIV, which can lead to AIDS. Extreme discomfort and pain result from the chronic infections, intermittent bleeding, abscesses, and small benign nerve tumors that commonly follow clitoridectomy and excision. Infibulation carries even more serious long-term effects: chronic urinary tract infections, stones in the bladder and urethra, kidney damage, reproductive tract infections resulting from obstructed menstrual flow, pelvic infections, infertility, keloids (raised, irregularly shaped, progressively enlarging scars), and dermoid cysts. For sexual intercourse to take place, the opening of the vagina must be gradually and painfully dilated, and in most cases cutting is necessary. The sewn vagina is also cut open during childbirth so that the child may pass through, and the woman is then restitched to make her "tight" for her husband. The constant cutting and restitching can result in tough scar tissue.

The issue of FGM has received increasing global attention and controversy over the past several years. It is true that the practice must be addressed in order to save these women and promote human rights, but we must take into consideration the strong cultural importance and implications that make the matter much harder to resolve effectively. Although Female Genital Mutilation is seen as barbaric, we should remember that until a few decades ago the clitoris was still believed, in the West, to be a very dangerous part of the body. "Elimination of clitoral sexuality is a necessary precondition for the development of femininity," wrote Sigmund Freud, and as recently as 1979 such surgeries were performed on women in the United States. In this sense the controversy over FGM can be seen as a form of hypocrisy and cultural imperialism, leading people to ask, "What right do others have to criticize our way of life?" and to challenge Western practices such as giving children up for adoption, sending parents to nursing homes, or performing abortions. This forces me to think open-mindedly about the issue. The first place we should start, as people who are concerned, is education. Members of the societies who perform these procedures must first learn of the fatal consequences. Educating these communities and higher officials will shed light on the risks and dangers involved, and those communities can then work on shaping a society that keeps its culture, traditions, and rituals but reexamines the bloodshed and torture involved in them. Simply eradicating these rituals won't do much good, because we can erase the practice but we cannot erase the mentalities attached to it. Men and other authority figures will continue to instill the same values of superiority and dominance, creating a majority in which women are caught between the social norms of the "educated" and those of everyone else.
The struggle is not just getting rid of these horrendous practices; more importantly, it involves reshaping the consciousness behind them, which is no easy task.

(Additional Material)

Testimony

"I was genitally mutilated at the age of ten. I was told by my late grandmother that they were taking me down to the river to perform a certain ceremony, and afterwards I would be given a lot of food to eat. As an innocent child, I was led like a sheep to be slaughtered.

Once I entered the secret bush, I was taken to a very dark room and undressed. I was blindfolded and stripped naked. I was then carried by two strong women to the site for the operation. I was forced to lie flat on my back by four strong women, two holding tight to each leg. Another woman sat on my chest to prevent my upper body from moving. A piece of cloth was forced in my mouth to stop me screaming. I was then shaved.

When the operation began, I put up a big fight. The pain was terrible and unbearable. During this fight, I was badly cut and lost blood. All those who took part in the operation were half-drunk with alcohol. Others were dancing and singing, and worst of all, had stripped naked.

I was genitally mutilated with a blunt penknife.

After the operation, no one was allowed to aid me to walk. The stuff they put on my wound stank and was painful. These were terrible times for me. Each time I wanted to urinate, I was forced to stand upright. The urine would spread over the wound and would cause fresh pain all over again. Sometimes I had to force myself not to urinate for fear of the terrible pain. I was not given any anesthetic in the operation to reduce my pain, nor any antibiotics to fight against infection. Afterwards, I hemorrhaged and became anemic. This was attributed to witchcraft. I suffered for a long time from acute vaginal infections."

Hannah Koroma, Sierra Leone (4)

References


1)Female Genital Mutilation In Africa, The Middle East & Far East, Nice site, but focused on breaking down religious misconceptions.

2)Female Genital Cutting (FGC): An Introduction, This Site is basic, yet informative.

3)Female Genital Mutilation; Contemporary Human Rights Issues, Short and to the point.

4)What is female genital mutilation?, This site is pretty good because it's outlined by very common yet helpful and interesting questions.


Why We Still Get the Flu
Name: Adina Halp
Date: 2003-12-15 23:08:00
Link to this Comment: 7537


<mytitle>

Biology 103
2003 Second Paper
On Serendip

This winter, media reports of early influenza (flu) deaths in American and British children sparked a panic that is spreading throughout the United States and the world. People are currently rushing to get flu shots to try to prevent this virus, which can be temporarily debilitating and even lead to death (1). With flu vaccination and medication readily available, it is a wonder that the flu still exists. Indeed, by one estimate the flu kills about 15 million people worldwide in a given year, more than are killed by AIDS, lung cancer, and heart disease combined (2). With so much modern medical technology, why are we still getting the flu?

Influenza, commonly known as the flu, is a virus that infects the trachea (windpipe) or bronchi (breathing tubes) (1). Strains of the flu may belong to one of three different influenza virus families, A, B, or C (3). Symptoms include high fever, chills, severe muscle aches, headache, runny nose, and cough. Complications can lead to pneumonia. Those most at risk of dying from the flu or contracting complications include asthmatics, people with sickle cell disease, people with long-term diseases of the heart, kidney, or lungs, people with diabetes, those who have weakened immunity from cancer or HIV/AIDS, children on long-term aspirin therapy, women in their second or third trimester of pregnancy, children under the age of nine, and adults over the age of 50 (1).

Flu shots may be a miracle of modern technology, but not everyone receives them. The flu is a worldwide problem. While Americans spend $2 billion treating and preventing the flu every year, the countries of the developing world simply cannot afford such a luxury (2). Even relatively wealthy countries cannot give the flu shot to everyone. Most Singaporeans are being urged to wait for their flu shots until they have been administered to the more susceptible groups such as children and the elderly, and to health care workers and those traveling to parts of the globe where the flu season is at its worst, such as the United States and Britain (4). The United States, too, has not always been able to offer the flu shot to everyone. During the winters of 2001-2002 and 2002-2003, the US did not have enough influenza vaccine for all of its residents, so the Centers for Disease Control and Prevention (CDC) asked healthy Americans to wait until those more vulnerable to the virus had a chance to get vaccinated (5).

Even those who are able to obtain flu shots often do not take advantage of them. For example, even though the flu triggers asthma attacks, studies suggest that only 8.9% of all people with asthma receive flu shots (6). In 2001, only 67% of people aged 65 years and over, another at-risk group, received the shot, despite the fact that it is covered by Medicare (3). This flu season, even though many experts recommend that most Americans receive a flu shot, a number of factors prevent many people from doing so. Because the vaccine is made with hens' eggs (1), people who are allergic to eggs should not receive it. There are also some side effects: fewer than one-third of those who receive the vaccination experience soreness at the vaccination site, and five to ten percent suffer low-grade fevers and headaches (7). Some people are afraid of contracting the virus from the injection, but they are misinformed; it is impossible to catch the influenza virus from the shot (8). Others are too pressed for time to receive a flu shot (5). And many people simply hate going to the doctor and getting shots, an experience that is usually physically and sometimes emotionally painful.

Even if everyone in the world got the flu shot, influenza still would not be eradicated. The flu shot is only about 75% effective in preventing the flu and reducing its severity. This is partly because it takes one to two weeks for the vaccine to confer full protection (2), during which time the individual who has received the flu shot is just as susceptible to contracting the flu as one who has not (7). Another reason the flu vaccine is not always effective is that the virus injected into the body is never exactly the same virus in circulation. The flu virus naturally mutates from year to year, with multiple strains from three different virus families circulating somewhere in the world at any given time. This year's Fujian flu appears to have mutated from the more common Influenza A/Panama virus (9). It is a slight "reshuffling" of existing strains of flu, a process called "antigenic drift". In other years, when worldwide epidemics or "global pandemics" occur, it is often due to a major change in the flu's genetic material, an alteration into a completely different strain (4). As in the case of the 1918 Spanish Flu, a pandemic that killed more than 20 million people worldwide, it is also possible for the genes that code for the proteins within the virus to split and recombine (10). The influenza virus can also jump from one species to another, exchanging genetic material between viruses afflicting different animals and creating new influenza strains (9).

By the time each vaccine is created, new strains of the flu are usually circulating. This is why influenza vaccines can only be made using flu varieties from the previous year, combining different strains in order to best protect against any extant strain of influenza (1). Because the strains are usually similar, these vaccines offer very good cross-protection, but they are not 100% effective (9). Because of these mutations, and because the flu vaccine is made from a "killed" virus and is thus weaker than one made from a live virus, the immunity it confers is short-lasting, and it is recommended that people receive flu shots every year (7). Even if we spent a few years vaccinating the world's population, as was done during the World Health Organization's smallpox eradication program (11), new strains of influenza would probably come into existence by the time everyone had been vaccinated against the old ones.

There are, at least in America, other options that help prevent contraction of the flu. One is FluMist, a nasal spray approved by the Food and Drug Administration in September of this year (12). It is a live but weakened form of the flu virus, which gives it greater potential to produce a broad immune response (13). The fact that it is a nasal spray makes it much more appealing than the flu shot, which can only be administered by needle. However, because FluMist is administered through the respiratory tract, the live virus can be shed, exposing those who have not been vaccinated. And because it is a live vaccine, those who receive it can contract the virus from it. For this reason, FluMist is not recommended for people in the groups most at risk from the flu (12).

Medications for the flu are also not completely effective. Although there are currently over 100 over-the-counter flu medications, including Sudafed, Alka-Seltzer, Tylenol, Robitussin, Actifed, Dristan, Contac, TheraFlu, Dimetapp, and Nyquil, these drugs treat the flu's symptoms – they do not prevent the flu (2). One new drug, zanamivir, can be inhaled and has been shown to reduce the symptoms of both A and B strains of influenza if taken at the onset of the disease, as has oseltamivir, a drug taken in pill form. Oseltamivir can also be used to prevent influenza, as can the antiviral drugs amantadine and rimantadine, although these two drugs must be taken "as long as influenza cases continue to occur in the community," are useful only for the prevention of Influenza A, and can cause mild side effects. These two medications can also be used to treat influenza if taken soon after its onset (3), but again, this is not prevention, and the contagious influenza virus can still be spread.

Though extensive, current technology could not rid the world of the flu even if these medications and preventative measures were available to everyone. This is especially evident when we look at another kind of modern technology: travel. Today we constantly hear that the world is "getting smaller". Travel is much easier and more frequent than ever before. As with last year's rapid spread of the SARS virus, people infected with a flu virus common in one area of the world constantly come into contact with people in areas not yet exposed to that particular virus, quickly spreading different strains of the flu across the globe. At least for now, all we can do is try to keep from contracting and spreading influenza by receiving flu shots, inhaling FluMist, or taking medication to prevent influenza or reduce its symptoms.

References

1)Influenza Vaccine | Vaccine Education Center, an informational site on the Children's Hospital of Philadelphia website.

2)Influenza is America's #1 Killer: Flu Vaccine Found Least Effective for those Counting on it Most, an article that gives shocking flu facts.

3)American Lung Association Fact Sheet – Influenza

4)Mystery flu and its myths: Vaccination frenzy but most adults not at risk, article on a Singaporean news site.

5)MSNBC – Americans Urged to Get Flu Shots, recent MSNBC news article.

6)Influenza: Serious Problem for People with Asthma, facts to do with influenza and asthma on the American Lung Association website.

7)Care Dynamix – Flu Shot Facts, facts provided by an on-site flu shot service.

8)Floyd Co., VA – Flu Vaccine

9)Fujian flu vaccine ready by next year, news article on the Star Online, a Malaysian newspaper.

10)ScienceDaily New Release: Australian National University Scientists Find Genetic Trigger For The 1918 Spanish Flu

11)WHO 50th – Smallpox Eradication, Site Commemorating the 50th anniversary of the World Health Organization.

12)FluMist: No More Flu Shots? , On the Mayo Clinic website.

13)ScienceDaily News Release: A Better FLU Vaccine? Nasal Spray Vaccine May Give More Protection Against 'Drifted' Strains


Put Your Dukes Up; Nature and Nurture Go at it Ag
Name: Megan Will
Date: 2003-12-16 11:10:16
Link to this Comment: 7540


<mytitle>

Biology 103
2003 Third Paper
On Serendip

The Judd family is blessed with vocal and dramatic talent, just as the James family was cursed with a lust for crime and the fast life, and the Bush family exudes, well, a good hairline. But is that truly the case? Can Ashley and Wynonna hit that high C simply because their mother, Naomi Judd, could? Did Frank and Jesse James share a gene for their love of robbing banks and their aggressive nature, passed on from a great-grandfather somewhere down the line? Or were these characteristics groomed by experience and environment? What gives humans their traits and behaviors?

The case of human traits and behavior is yet another battle in the ongoing war of nature versus nurture. As in all prize fights, the challengers must be introduced. "Nature versus nurture" is a popular phrase used in most contemporary debates over the degree to which genetic makeup ("nature") and life experience ("nurture") influence traits or behaviors. Nature can encompass genetic makeup as well as human nature, or instinct. Nurture has historically referred to the care given to an individual by their parents, but can also include experiences in the womb, childhood friends, one's early experiences with television (1), and the environment in which one lives.

In regard to human characteristics specifically, there is a strong case for at least partial biological predetermination. On the most obvious level, humans "breathe, sneeze, laugh, cry, sleep, and otherwise engage in a variety of activities which need not be learned" (2). From this we can conclude that there are human activities or behaviors that need not be learned from experience. There are also proven cases in which it makes sense to say that a particular trait is due entirely to nature: Huntington's disease is a highly penetrant genetic disease, and one will only develop Huntington's if one carries the corresponding gene variant.
Scientists are sure that genes code for things such as eye and hair color. Whether genes determine more abstract human traits, however, is still a matter of theory. One such theory is the Nature Theory (3), which hypothesizes that traits such as intelligence, sexual orientation, and aggression are decided by DNA. In the case of human aggression, for example, the fact that our early human ancestors, as well as species closely related to humans, displayed aggression suggests that the behavior is at least partially predetermined by genes. Unearthed Australopithecus skulls bear indications of wounds caused by tools, some of them apparently mortal, and in some digs remnants of cannibalistic meals have been discovered (7). Genes do not directly determine traits; put simply, genes code for proteins. Yet genes do influence the developmental expression of traits, which "...represent the expression of the interaction of genes with environments" (6). And humans obviously share genes with their ancestors. Sociobiologists use man's descent from hunters to support a genetic basis for behaviors like aggression: we carry genes from a hunter/gatherer society, so the behavior is embedded in our brains. Human aggression has thus been thought to be "an innate, unlearned behavior pattern" (4).

Research also suggests that behavior can be mediated by the brain, the amygdala in particular. Texas sniper Charles Whitman asked, in the note he left behind, that an autopsy be performed after his death, because he believed his actions were the result of his brain's inner workings; the autopsy revealed a tumor pressing into his amygdala (4). Studies have also been done on identical twins separated at birth: despite totally different lives and experiences, such twins test as having similar personalities and levels of intelligence. Their only shared experience was in the womb (1).

The other side of the argument is obviously the Nurture Theory. Psychologist John Watson experimented with environmental learning, and in doing so, demonstrated that the acquisition of a phobia could be explained by classical conditioning. Watson claimed: "Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I'll guarantee to take any one at random and train him to become any type of specialist I might select...regardless of his talents, penchants, tendencies, abilities, vocations and race of his ancestors." (3)

Another development in the nature versus nurture controversy was the Human Genome Project. Set up to determine exactly how many genes a human has, the project found that humans possess about 30,000 genes, barely twice as many as a fruit fly (3). "We simply do not have enough genes for this idea of biological determinism to be right," concluded Craig Venter, president of Celera Genomics.

Returning to our human aggression model, the argument that modern humans are much more aggressive than their ancestors suggests that environment and upbringing definitely affect levels of human aggression. In the modern world, factors such as "influence of media, smoke, noise pollution, air pollution, abusive parenting, overcrowding, heat, and even atmospheric electricity" can aggravate aggressiveness in humans (6). Behaviorists generally view aggression as a set of acquired behaviors and place less emphasis on biological determinants; these scientists commonly apply the "principles of social learning theory" when addressing aggression.

Public opinion on the nature versus nurture argument tends to sway both ways. People are ready to accept that genes cause diseases and cancer, even obesity and homosexuality. Of course, this takes the blame off of human lifestyle: if it is written into one's genes, there is nothing one can do about it. However, the public tends to favor the nurture side of the argument when it comes to sensitive topics such as aggression or intelligence. If people truly believed that intelligence was totally dependent upon genes, there would be no waiting lists for the best private schools, no SAT tutors, and no French lessons for three-year-olds. When the Columbine school shootings occurred, it was the angry music, the video games, and the parents that were blamed. No one even suggested that the two shooters' genetic makeup could have had anything to do with it. Eminem and Double Dragon took the rap, and the parents were questioned as to why they hadn't known about it or prevented it.

Today's biologists tend to agree that traits and behaviors depend on both nature and nurture. Particular genes can influence the development of a specific trait or predispose toward a specific behavior. The question, then, is not whether it is nature or nurture, but how the two interact to produce human traits and behaviors. The University of California did a study on perfect pitch, the "ability to recognize the absolute pitch of a musical tone without any reference note" (5). People with perfect pitch often have relatives with the same ability, and recent studies suggest that perfect pitch may be the result of a single gene. But the studies also demonstrate a requirement for early musical training, before age six, for perfect pitch to manifest (5). Therefore, even if the perfect pitch "gene" is inherited, if it is not exercised at an early age it will go to waste, undetected.

The lessons learned from this? Don't believe the headlines that say scientists have found the cancer-causing genes. Don't blame your mother for your inability to cook or your poor handwriting. The nature versus nurture argument will go on forever, as people look for scapegoats to blame for their misfortunes and factors to praise for their luck. There will never be enough evidence to prove that it is one or the other; people just need to make room in the pot for both theories. Mix in a little free will and you've got yourself a real party. Ashley Judd should thank her mother not only for giving her the building blocks of a stellar voice, but also for putting her in those acting and singing classes and exposing her to the music world at an early age.


References

1)Wikipedia, online encyclopedia site

2) Montagu, Ashley. The Nature of Human Aggression. London: Oxford University Press. 1976.

3)Genetics, discussion of the nature versus nurture argument

4)Human Evolution, essay on human evolution and behavior

5)Human Genome Project, findings from the Human Genome Project

6)Human Aggression, discussion of nature versus nurture in human aggression

7) Heller, Agnes. On Instincts. Netherlands: Van Gorcum. 1979


Ethiopia's Medical Dilemma
Name: Maria Scot
Date: 2003-12-16 11:47:03
Link to this Comment: 7541


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Living in an industrialized country like America, and especially in a community such as Bryn Mawr, we are well fed and given excellent healthcare. Despite student complaints that they cannot go to the health center for a cough drop without being asked if they could be pregnant, most students are aware that they are very lucky, and they appreciate that there are parts of the world ravaged by diseases such as malaria, which kills three children every minute. We donate money and participate in clothing drives, but there our involvement often ends, and we rarely see how effectively organizations such as Doctors Without Borders or Unicef ameliorate epidemics and other crises in developing countries. If one judges by the recent outbreak of malaria in Ethiopia, these human rights organizations are not living out the 'I Dreamed of Africa'-esque humanitarian fantasy that donors may have imagined. Unicef, in conjunction with the Ethiopian government, has been using what some claim are outdated drugs to fight the disease, which the World Health Organization predicts will infect 15 million of Ethiopia's 65 million people (three times the normal infection rate) (1). International doctors' groups such as Doctors Without Borders argue that the outdated drugs will be ineffective and may even make the epidemic more severe. There are new drugs that both the W.H.O. and Doctors Without Borders favor, but they are expensive, and it is felt that it might worsen the situation to switch tactics now. And so the problem presents itself: expensive, effective new drugs, or cheaper, older drugs that may not work (1). One can understand the Ethiopian government's preference for the less expensive option.
However, if the treatment it buys is not effective, and if the second line of treatment is not feasible for many of its citizens, then it is not only in the Ethiopian government's best interest but also its responsibility to seek out and use a drug that will in fact help its citizens.
In a country with an average life span of 44 years and a death rate of 17.2 percent for children under five, health care in Ethiopia is already poor, and this malaria epidemic is the worst the country has seen since 1998 (3). Malaria is spread largely by Anopheles mosquitoes and attacks the liver and red blood cells, though it can also attack other organs, depending on the case (4). Heavy rains and hot weather this year encouraged the breeding of mosquitoes and the spread of the disease. Unicef tried to take preventative action by sending hundreds of thousands of mosquito nets and over a million dollars in drugs. Clearly these efforts did not prevent the spread, though whether they lessened the severity of the outbreak cannot be known. Unicef does not take full responsibility for the choice of drugs: as a United Nations agency, it must be guided by the country's government, and in this case Ethiopia's government chose the older medications (3).
The problem with the 'outdated drugs' is that the parasites they were designed to treat have mutated and no longer respond to the treatment. This is the problem facing Ethiopia (1). The strain of malaria currently ravaging the country is thought to be resistant to the drugs chosen by the Ethiopian government and supplied by Unicef: a combination of chloroquine and sulfadoxine-pyrimethamine. The pills are taken for one day, at a cost of roughly twenty cents per person. Doctors Without Borders has claimed that up to 60 percent of the patients it sees have not responded to this treatment (1). The second line of treatment, used if the first fails, is a five-day in-patient hospital stay during which the patient is given quinine. In a country such as Ethiopia, where many citizens live in rural areas or are nomadic, checking into a hospital or clinic for five days can be nearly impossible. In fact, resistance to chloroquine is so common that the World Health Organization advises against its use (1).
The treatment suggested by Doctors Without Borders uses medications that contain artemisinin (5). Artemisinin, a chemical found in the sweet wormwood plant, fights malaria more effectively, but it costs between $1.00 and $1.25 per person, and the pills must be taken for three days rather than one (1). The World Health Organization also supports the use of "artemisinin cocktails," and other African countries, including Burundi, Liberia and South Africa, are already using them (5).
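The cost dilemma described here can be sketched numerically. In the toy calculation below, the twenty-cent chloroquine-combination price, the $1.25 artemisinin price, and the 60 percent first-line failure rate are the figures cited in this paper; the artemisinin cure rate and the cost of a second-line quinine hospital stay are purely illustrative guesses, not reported numbers.

```python
# Back-of-the-envelope cost per treated patient. Drug prices and the 60%
# chloroquine-combo failure rate are from the sources cited above; the
# 95% artemisinin cure rate and $5.00 second-line cost are ILLUSTRATIVE
# assumptions, not reported figures.

def cost_per_patient(drug_cost, cure_rate, second_line_cost):
    """Expected cost per patient, assuming those whom the first-line
    drug fails go on to receive the second-line treatment."""
    return drug_cost + (1 - cure_rate) * second_line_cost

chloroquine_combo = cost_per_patient(drug_cost=0.20, cure_rate=0.40,
                                     second_line_cost=5.00)
artemisinin = cost_per_patient(drug_cost=1.25, cure_rate=0.95,
                               second_line_cost=5.00)

# Under these assumptions the "cheap" drug ends up costing more overall,
# because most patients need the expensive second-line treatment.
print(f"chloroquine combo: ${chloroquine_combo:.2f} per patient")
print(f"artemisinin:       ${artemisinin:.2f} per patient")
```

The point of the sketch is only that a low per-pill price can be misleading once failure rates and follow-up care are counted, which is essentially the argument Doctors Without Borders makes.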
Some doctors would argue that the treatment chosen by Ethiopia is not only ineffective but in fact detrimental. The drug attacks the malaria parasite during one phase of its life cycle, but it also speeds the rate at which the parasite produces the cells that are transmitted by mosquitoes, making the disease spread faster. If the drug doesn't kill the parasite, then all you have is a more virulent and more easily spread case of the disease (1).
What it comes down to is how accurate the reports of resistance are, and how widespread resistance to the currently used drugs really is. If the 60 percent resistance that Doctors Without Borders is reporting is accurate, then switching tactics would be "a sensible strategy to follow," according to Dr. Mary Ettling, chief malaria expert at the United States Agency for International Development. Indeed, the United States endorses using artemisinin (1). In the meantime, the disease continues to spread. This problem is not going to go away; as diseases mutate and develop resistance to the currently available drugs, the dilemma of new, expensive drugs versus older, cheaper ones will only become more pressing. Perhaps Unicef will in the future be able to put more pressure on the governments of the countries in which it works, but until then, finances will continue to be a barrier to health services in developing countries.
WWW Sources
1)New York Times Article, newspaper article on malaria in Ethiopia, but you have to purchase it.
2)W.H.O., the World Health Organization's site on Ethiopia.
3)Unicef, Unicef's webpage on Ethiopia.
4)Web MD: Malaria, Web MD's summary of malaria and its symptoms. If you are a hypochondriac, I wouldn't recommend exploring this site; I speak from experience.
5)Doctors Without Borders, Doctors Without Borders' page on the current outbreak of malaria in Ethiopia.



Eating Disorders in Males
Name: Katherine
Date: 2003-12-16 14:54:39
Link to this Comment: 7542


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Eating disorders are largely considered to be a "female disease". Statistics seem to validate this perception – of the estimated five million-plus adults in the United States who have an eating disorder, only ten percent are thought to be male (1). Many professionals, however, believe these numbers are misleading – the statistics can only be based on the number of adults diagnosed with eating disorders, and men are much less likely than women to seek help for such a problem (2). This means that the male population probably suffers from eating disorders more than the numbers show.
The fact that the number of men who suffer from eating disorders is larger than most people think, and the fact that most people do not consider men to be susceptible to eating disorders at all, raise the question of whether we treat men who may have an eating disorder the same way we treat women. Simply put: is it more dangerous to be a man with an eating disorder than to be a woman with one?
For quite some time, there was a great deal of debate within the medical community as to whether or not men develop eating disorders for the same reasons that women do (2). Since very few men are willing to participate in treatment and study programs for people suffering from eating disorders, there was little way of knowing what psychological factors triggered disordered eating in males. A study published in the April 2001 issue of an APA journal, which looked at men with eating disorders and compared them to women with eating disorders and men without eating disorders, found that "men are generally very similar to women in terms of comparing psychopathology," and that "the illnesses are much more equivalent in prevalence than was previously thought" (2). Triggers for developing eating disorders have been found to be similar between the sexes: low self-esteem, depression, anxiety, difficulty coping with emotional and personal problems, and other existing psychological illnesses are common underlying factors in the development of disordered eating (3).
Aside from having the same basic influences, men usually develop the same kinds of eating disorders associated with women. Many people think that, given the muscular appearance of the male ideal, "male" eating disorders would be different from what are considered to be "female" eating disorders. The most common eating disorders in men, however, are anorexia, bulimia and binge eating (in which the person uncontrollably eats large quantities of food but does not purge after eating), which are also very common among women who have eating disorders (1). Like women, men who are involved in weight-conscious sports, such as wrestling, swimming and running, are more likely to develop eating disorders than those who do not participate in such activities (3). The only notable difference found between men and women with eating disorders thus far is that "while women who develop eating disorders feel fat before the onset of their disordered eating...typically they are near average weight. Men are more typically overweight medically before the development of the disorder" (3).
Given that most of the underlying psychological triggers for eating disorders, as well as the ways eating disorders manifest themselves, are basically the same for both men and women, it seems strange to suggest that eating disorders may pose more of a threat to men than to women. Many health care professionals who specialize in eating disorders, however, worry that this is the case. The first danger is that a man (and those around him) may be less likely to notice his behavior or to think it is a real problem because "eating disorders have long been assumed to plague women only" (4). Many men do not know the symptoms of eating disorders, which means that they would not know what to seek help for even if they felt that something was wrong (4).
Medical professionals may also be less likely to diagnose an eating disorder in male patients than in their female counterparts. When Jason DeMaio was taken to the hospital, he was so thin that the simple act of walking was almost too much for his body to handle. At 5 feet 7 inches, DeMaio weighed in at around eighty pounds. His doctors, noting his extremely low weight and swollen lymph nodes, thought that he was exhibiting symptoms of cancer. It was only after this line of diagnosis proved fruitless that they began to consider that their patient might have an eating disorder (1). When dealing with the other side of eating disorders, men who binge or compulsively overeat may not be diagnosed as having an eating disorder, not only because emotionally triggered eating is not associated with men, but also because of "society's willingness to accept an overeating and/or overweight man more so than an overeating and/or overweight woman" (3).
Even when men know, through diagnosis or simply through their own deduction, that they have an eating disorder, they are much less likely to seek professional treatment than women. Many men are unwilling to ask for help concerning what they, and many others, consider to be a woman's problem (2). They fear being labeled as weak, effeminate, homosexual, or simply "not real men," and feel an intense sense of shame surrounding their disordered eating (1).
The lack of availability of treatment centers that can effectively treat male patients both validates and augments these concerns. Centers that treat eating disorders are often geared towards women, and many dislike admitting male patients because their presence may make female patients feel uncomfortable. Most of the programs that do readily allow men to participate do not have separate programs for male and female patients (1). Just as women may not be comfortable discussing their disordered eating in front of a man, men may very well want to avoid talking about their problems in a group composed mainly of women. Since group therapy is one of the most common and effective tools in battling an eating disorder (1), it is important that men participating in these programs are in groups where they can share their feelings openly.
What all of this suggests is that eating disorders are actually more dangerous for men, if only because men are less likely to seek and/or receive help. What is most important in lessening this danger is, in my opinion, educating the public on the reality of male eating disorders. While the media attention dedicated to women and girls suffering from eating disorders has been helpful in educating the general public to the threat of eating disorders in females, its neglect in covering men with eating disorders may contribute to the problem. People do not see or hear about men with eating disorders; therefore men with eating disorders do not exist for most people.
The medical community holds a large portion of the responsibility for educating the public on the dangers that eating disorders pose to men as well as women. Family doctors, for example, gave my parents (and the parents of my female friends) lectures on the importance of promoting a good self-image in their daughters as they approached puberty: to encourage healthy eating and exercise habits while offering emotional support, and to watch their growing daughters for the signs of disordered eating. My parents did not get the same speech regarding my brother, nor did any of the parents of my male friends. It is important that doctors tell parents to help their sons maintain a healthy body image and educate them on the early signs of an eating disorder in their child. It is also important that doctors be aware of the possibility that one of their male patients may have an eating disorder, and that they treat any symptoms that would lead them to believe a female patient had an eating disorder just as seriously when those symptoms occur in a male patient. By working to erode the assumption that eating disorders are fundamentally un-masculine, we can assure that men feel able to seek the same help as women.


References

1)Men With Eating Disorders Face Unique Challenges

2)News From the APA: Men Less Likely to Seek Help for Eating Disorders

3)Issues for Men With Eating Disorders

4)Men Less Likely to Seek Help for Eating Disorders 2


ABORTION: WHERE DO WE DRAW THE LINE?
Name: Alice Gold
Date: 2003-12-16 16:09:26
Link to this Comment: 7544


<mytitle>

Biology 103
2003 Second Paper
On Serendip


Possibly one of the most controversial court cases in our country's history was settled in January 1973. In a decision known as Roe vs. Wade, the Supreme Court legalized abortion in the United States (1). Now, according to the National Center for Health Statistics, abortion has become the most frequently performed surgical operation in the United States. But at what point is abortion simply wrong? Only under certain circumstances should abortion be legal.

Many would argue that abortion is wrong because as soon as the baby is conceived, it becomes a living, breathing organism. Therefore, aborting the small organism is simply murder. However, when Roe vs. Wade was decided, organizations like Planned Parenthood publicly supported this ruling, arguing that during the first three months of pregnancy, the fetus is nothing more than a mass of tissue (1). In addition to Planned Parenthood, there are also many religious organizations in agreement with the legalization of abortion. These organizations include American Baptist Churches USA, the Episcopal Church, Presbyterian Church USA, the United Methodist Church, and the United Synagogue for Conservative Judaism (2). But under what circumstances should abortion practices be accepted?

As time passes, it seems as if more and more women are having abortions as a way of shunning responsibility. According to the Alan Guttmacher Institute, fifty percent of women who have abortions use abortion as their sole means of birth control (3). This is one example of how abortion is being taken advantage of and used for the wrong reasons. In addition, approximately 45% of all abortions in the United States are obtained by women ages 19 and under. Why is this? Based on a survey of 1900 women in this country, the two most common reasons for abortion are: 1) the woman cannot afford to keep the baby, and 2) the woman is not ready for the responsibility. These responses accounted for nearly 42% of all answers (3). However, these reasons are far from legitimate, because both problems can easily be avoided: there are too many contraceptives and other forms of birth control available for these excuses to be justifiable.

So what are acceptable reasons to abort an unborn baby? For one, when the mother's health is in jeopardy. Another understandable reason to have an abortion is when there could be serious health problems for the unborn child. This reason is legitimate because there is no way a mother could live a peaceful life knowing that her child is suffering, or being deprived of a normal lifestyle in any way. Sadly, these two reasons combined accounted for only six percent of the 1900 responses. One final justifiable reason for having an abortion, and perhaps the most important, is conceiving as a result of rape or incest, which according to this survey accounts for only one percent of responses (3). Getting an abortion under this circumstance will always be justifiable because the situation is clearly out of the woman's hands. No woman in this world will ever be able to tell her child that he or she was conceived not out of love, but as a result of rape.

Abortion procedures are not a bad thing; they are just too commonly used, and therefore promote irresponsibility. But is there really a way to reduce the number of abortions per year without making abortion illegal? Most likely not. Abortion was not legalized to be used as a sole form of birth control, so why do 50% of the women who have abortions in the United States use it in this manner? Legalized abortion is a prime example of how people, given an inch, take a mile.

References


1)Get the Facts, a website I found very interesting
2)Talking about Freedom of Choice: 10 Important Facts about Abortion, another interesting site
3)Who Does Abortion Affect?, yet another site


Painting What We See Within: A Look at the Insides
Name: Lindsay Up
Date: 2003-12-16 23:30:44
Link to this Comment: 7546


<mytitle>

Biology 103
2003 Second Paper
On Serendip

One of the most memorable experiences I had last summer was visiting the American Visionary Art Museum in Baltimore, Maryland. (3) At this museum, none of the works hanging on the walls had been created by professional artists. Visionary art is an individualized expression by people with little or no formal training; the rules of art as a school did not apply here. While I was there, I learned that for many years, the artwork created by patients of mental institutions, hospitals, and nursing homes was disregarded and destroyed by their caretakers. After seeing what powerful and telling work came from many people in these situations, I found this information very distressing. Fortunately, the development of art as a form of therapy has changed the medical attitude toward art created by the healing over the past fifty years. While the work created through this therapy is rarely showcased as it is at the American Visionary, it is aiding therapists and their clients in reaching a new awareness.

Art therapy uses media and the creative process in healing, the key word here being process. We all know how revealing the artwork of children can be of their emotions. Art therapy applies this concept across the spectrum in a multitude of situations. It functions in many of the same settings as conversational therapy: mental health or rehabilitation facilities, wellness centers, educational institutions, nursing homes, private practices or a client's home. An art therapist may work with an individual or a group, with families or couples. While most therapy is based on conversation between the therapist and his or her clients, art therapy integrates visual communication into the experience through painting, sculpting, drawing or other media. (1)

What occurs during an art therapy session depends mainly on the client. The theory behind it is that visualizing his or her feelings will help the client get beyond masking them through language. Imagine describing a dream: it is never quite possible to communicate effectively the images left to us by our subconscious. Art therapy allows the client or patient to relay these images in a raw and powerful way. (1) During the therapy, a client's artistic ability is irrelevant. While the session is not just a relaxing diversionary activity, it is also not an art class. Most sessions are structured to help get the client started on a project and oriented toward reaching specific goals. The idea is for the patient to be able to work at his or her own pace as the therapist helps him or her explore the work's significance. The therapist is not an interpreter of the client's art but rather a facilitator of his or her inner discovery. (5)

Many art therapy practitioners agree that it is a good alternative to verbal therapy if the client does not speak English or is shy or frightened about verbalizing his or her feelings and experiences. If the latter is the case, it is often easier or less painful for the client to discuss the image, rather than to discuss himself or herself directly. In this way, art therapy is at once both therapeutic and diagnostic. The act of creating helps the client to heal and allows the therapist to perceive implications from the process. As the therapist learns about the individual, he or she is able to help the client further in exploring the work's significance. (4)

First and foremost, art therapy is about the process, not the finished painting or sculpture. Therapy and art are both processes in and of themselves that require deep introspection and a commitment to learn at one's own pace; art therapy shares all of these attributes. A remarkable example of how art therapy works as a process is Silence Speaks. (2) Digital storytelling is a form of art therapy that uses filmmaking to help clients make sense of and share their experiences. Silence Speaks is a program that lets those who are in therapy for violence-related concerns learn a new skill while sharing their stories through an art medium. This form of therapy benefits not only the individual or group that makes the film, but also those who watch the finished product and hear and see the stories of their peers. Digital storytelling combines art therapy with narratives one might hear in traditional verbal therapy. Music is also sometimes incorporated. This integration creates a process of therapeutic communication on multiple levels. At the same time, the films provide an alternative to representations of violence and the self in popular media. (2)

I believe creative art therapy helps to remedy some of the main problems that many people have with traditional emotional therapy. One of my friends was telling me that when her therapist asked her how she was doing, she would normally answer "fine." I think many people might react in the same way even when everything is not "fine" because we condition ourselves to speak and look to everyone else like we've got everything under control. Although journal keeping is a widely established therapeutic exercise, it is possible that we are just as skilled at not being completely honest in writing as we are in conversation. It seems like the use of painting, drawing or other creative arts like dance and music would make for a less inhibited representation of our feelings. Artistic expression is something that many people practice on their own in order to make sense of their thoughts. By sharing these expressions with another person, we have an alternate angle as to what is going on in our heads.

Most "grown-ups" don't take the time to practice creative art in a casual setting. Art therapy brings opportunities to do so to people who normally feel they are too busy or that it isn't serious enough for them. It brings creativity into places like hospitals and nursing homes, places that could use other forms of healing beyond just the medical or psychiatric. Personally, I don't see why the developments in the study of art therapy shouldn't change our attitudes toward creative arts in general. When was the last time you sat down with crayons and drew a picture, or put on your favorite song and danced just for the heck of it? Sure, if you are creatively talented you might study art or music, but as we get older we take even our most creative work too seriously. We try to make it good according to external standards rather than doing it for our own good. In the broader sense of things, I don't think art therapy is just for the healing. I believe our lives would be brighter and less stressful if we got into the habit of being just a little creative every day.

References

1)The American Art Therapy Association, organization dedicated to the research, practice and education in art therapy.
2)Silence Speaks, information about the digital storytelling program.
3)The American Visionary Art Museum, website for one of the most innovative creative spaces on the East Coast.
4)Arts in Therapy, organization mostly for the collaboration of students and practitioners of art therapy, with useful information.
5)Creative Response, information about art therapy programs for patients of AIDS and cancer.


The "Gemini" Disorder
Name: Patricia P
Date: 2003-12-17 10:33:03
Link to this Comment: 7547


<mytitle>

Biology 103
2003 Second Paper
On Serendip

The "Gemini" Disorder
What We Know and Are Still Discovering About Bipolar Disorder

"You must understand something about Andrew... he's a Gemini." This was a simple phrase I heard very often in the company of my dear friend and his clever, well-intentioned mother. It was discovered a short time later that the aforementioned statement was a justification for the earliest symptoms of bipolar disorder (or manic-depressive illness). As Andrew and I matured into our twenties, it seemed that he was going to need to understand a bit more than his astrological sign to gain control of his life and his mental and emotional well-being. Thus, we sought this information out together.

It is important to consider the number of people affected by this disease and the multitude of forms it can take. Bipolar disorder affects approximately 2.3 million American adults, or about 1.2 percent of the U.S. population age 18 and over, in a given year. (1) Of this population, approximately 75 percent have at least one close relative with manic depression or severe depression. (5) Men and women are equally likely to develop bipolar disorder. Children and adolescents may show signs or have symptoms of bipolar disorder, yet a person's first manic episode usually strikes in his or her early 20s. Bipolar disorder is also more common among those who have family members, specifically first-degree relatives, with the disorder than among those who do not. (6) Unfortunately, many people suffer for years before being properly diagnosed and treated, or the illness may never be recognized at all. (4) Generally, bipolar disorder causes dramatic mood swings, from overly "high" and/or irritable to sad and hopeless and back again, often with periods of stable moods in between. Severe changes in energy and behavior follow these mood swings. (4) However, this general description does not delve into the specifics of the disease, which often branch into separate diagnoses and needs for treatment. Bipolar I disorder is the more classic form of the illness, easy to recognize due to its frenzied and often psychotic episodes of mania. During these episodes, people may experience hallucinations (hearing, seeing or sensing a presence that isn't actually there) or delusions of grandeur (such as believing they are the President, invincible, all-powerful, or extremely wealthy). During depressive episodes, the person may experience feelings of worthlessness, hopelessness, pessimism toward the future, and thoughts of death and suicide, or even suicide attempts. Bipolar II disorder is characterized by a milder, more moderate form of mania known as hypomania.
People experiencing episodes of hypomania may feel extra alert, productive, and tremendously brilliant: "At first, when I'm high, it's tremendous... ideas are fast... like shooting stars you follow until brighter ones appear... All shyness disappears, the right words and gestures are suddenly there... uninteresting people and things become intensely interesting. Sensuality is pervasive; the desire to seduce and be seduced is irresistible. Your marrow is infused with unbelievable feelings of ease, power, well-being, omnipotence, euphoria... you can do anything." (4) Unfortunately, this hypomania alternates with a proportionately mild form of depression. Between episodes, a person may be free of symptoms, sometimes for surprisingly long periods of time. However, when four or more episodes take place within a year, a person may be diagnosed with rapid-cycling bipolar disorder. In some people, the symptoms occur in what is known as a mixed bipolar state, in which symptoms of mania and depression transpire together.

It is important to recognize that manic depression doesn't only drastically affect the diagnosed individual's quality of life; it also affects his or her entire family and network of close friends. People close to those with bipolar disorder are often coerced into riding this painfully unpredictable and irregular emotional roller coaster. And even though this is not a physically deteriorating disease, it is not without its injuries or fatalities. Those who are ill often seek sanctuary in drugs and alcohol. Many people who suffer from this disease will also cause physical harm to themselves or others during a severe manic or depressive episode. Devastatingly, 10 to 15 percent of those diagnosed are successful in their suicide attempts. (3)

Right now, bipolar disorder is combated with a variety of methods. Unfortunately, as with many mental illnesses whose source is uncertain, treatment is complicated and less direct. Patients are commonly prescribed one of many available mood stabilizers, with other medications (such as antidepressants) added in order to control specific episodes of mania or depression. The key to medication is to find the one that best suits your body, effectively treats your symptoms, and allows you to maintain your sense of self. The most commonly used mood-stabilizing medications today are lithium and valproate. However, there are a variety of mood-stabilizing, antidepressant, and even anticonvulsant drugs used to treat more specific or rare cases. People continue this treatment over periods of years, as the medication does not cure the illness but rather lessens the severity of its symptoms. Strategies that combine medication and psychosocial treatment are optimal for managing the disorder over time. (4) Keeping your doctor current with your progress and finding an optimal combination of strategies is key. But is there reason to have higher expectations for treatment in the future? Possibly even a cure?

Now that we understand the people affected by, the symptoms associated with, and the differing diagnoses of and treatments for this life-altering disease, one might wonder about the missing piece of the puzzle: Why? Why do people develop bipolar disorder in the first place? Is it hereditary? If so, is it genetic? Explanations of bipolar disorder have ranged from a shortage of lithium in the brain to dog bites in childhood! (6) Researchers have previously argued over whether manic depression is a "mental illness" or, more specifically, a "brain disorder." For example, some resources warn, "Like other mental illnesses, bipolar disorder can not be identified physiologically—for example, through a blood test or brain scan. Therefore, the diagnosis is made on the basis of symptoms, course of illness, and, when available, family history." (4) However, further research provides hope for alternatives.

The initial ray of hope is provided by heritability. The chance of two adults without bipolar disorder having a bipolar child is only about 1 percent. However, if one parent has the disorder, the child's chance of becoming bipolar rises to about 5 percent. Furthermore, if that same child has aunts, uncles, or other relatives with the disorder, the risk rises to about 14 percent. In the unlikely occurrence that both parents have bipolar disorder, the child is at a 30 percent risk, rising even higher if siblings or other relatives suffer as well. (3) Although the risk of disease increases with heredity, it does not follow any concrete principle of inheritance or ratio as with single-gene disorders, again making it hard to study. We must also reconcile ourselves to the fact that someone who seems genetically susceptible to bipolar disorder will not necessarily develop it. Thankfully, this link to heredity serves as a starting point for biological answers.
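The risk figures just cited can be collected into a small lookup table for comparison. The percentages below are the approximate ones quoted in this paragraph; the category labels are my own shorthand, not clinical categories, and this is not a genetic model of any kind.

```python
# Approximate risk of developing bipolar disorder by family history,
# using the percentages cited in the text above. Labels are informal
# shorthand for the situations the paragraph describes.
RISK_PERCENT = {
    "no affected parent": 1,
    "one affected parent": 5,
    "one affected parent + other affected relatives": 14,
    "both parents affected": 30,
}

def relative_risk(family_history):
    """Risk relative to the ~1% baseline (no affected parent)."""
    return RISK_PERCENT[family_history] / RISK_PERCENT["no affected parent"]

for history, pct in RISK_PERCENT.items():
    print(f"{history}: {pct}% (about {relative_risk(history):.0f}x baseline)")
```

Note that the jumps (1, 5, 14, 30) do not match any simple Mendelian ratio, which is exactly the paragraph's point: susceptibility is heritable but not explained by a single gene.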

Much current research focuses specifically on the neurotransmitter system, mostly because the drugs we prescribe today are aimed at controlling neurotransmitters and have been successfully controlling depression and anxiety disorders for some time now. Some studies indicate that the secret may be high or low serotonin, norepinephrine, or dopamine levels. Others feel that the issue is the balance or equilibrium of these substances, and not simply an incorrect quantity in the body. (6) The problem may also lie in the way these substances interact with the sensitivity of receptors on nerve cells. A recently published study from the American Journal of Psychiatry reports that in patients suffering from bipolar disorder, two major areas of the brain contain 30 percent more cells! These cells are specifically responsible for sending signals to other brain cells. (6) It is conceivable, based on the exaggerated emotional symptoms of bipolar disorder, that these extra cells are responsible for a type of overstimulation occurring in the brain. Prior to this study, three independent research teams, two supported by the National Institute of Mental Health, reported a genetic link to bipolar disorder. The studies found that chromosomes 6, 13, and 15 were partly responsible. (7) A short time later, researchers of the National Institute of Mental Health Genetics Initiative Bipolar group found "several different chromosomes that seems to be important for bipolar disorder, not only chromosomes 18 and 21, which were reported before, but also 1, 6, 7, 10 and possibly others." (2) Although the scientific world has not yet reached a consensus on which chromosomes may be responsible, these studies raise hope for a cure that is more than just the treatment of symptoms. They lead us in the right direction toward a possibly biological cure for a psychologically experienced disease, which is in itself a huge step!

The most recent science leads us to the conclusion that bipolar disorder appears to be slightly more a biologically rooted disorder than an environmental one, experienced psychologically and exerting its power over every function of a person's life, emotional and physical. The disease can be treated from many angles, but with our focus on biological possibilities, we may gain genetic knowledge allowing us to "intervene in the disease process to control or reverse it" (3). The potential for identifying specific chemicals, chromosomes, or genes also gives us the possibility of stepping in before the disease has had any effect on the person. If we can identify people at risk, we may be able to prevent the development of the disorder altogether.

The more we discover about bipolar disorder, the more questions we are drawn to ask about it. Are we looking to control and stabilize the environment of a person deemed to be at risk? Are we looking to target specific cells? Specific genes? Will people one day be tested for genes associated with bipolar disorder? Will the discovery of those genes lead to a cure for the disease? From Andrew's perspective, there is a genuine fear that creeps into his head when he considers beginning treatment: 'Will I lose my identity?' 'Will I discover that those moments of artistic genius and emotional enthusiasm are curable symptoms of a terrible disease?' What is important to recognize right now is that information is available, and that it is ever expanding and decidedly improving. With a better understanding of this disease, its origins, and its treatments, we will also begin to bridge the huge gaps between the behavioral and, possibly, the genetic. Whether the cause lies in definite environmental factors, cells, chemicals, genes, specific strands of DNA, or multiple interacting factors, scientists are reasonably sure of one thing: Andrew's behaviors are not likely the result of being a Gemini!

References


WWW Sources

1) The Numbers Count: Mental Disorders in America, A huge source of well-cited statistics on mental disorders, suicides, etc.


2) Researchers find Genetic "Hot Spots" of Manic-Depression, A write-up on a specific genetic study designed to find chromosomes responsible for bipolar disorder.


3) The Foundation for Genetic Education and Counseling: Genetics and Bipolar Disorder, A very helpful question-and-answer sheet set up by the foundation. It is particularly interested in explaining genetics and the implications of its possible connection to this disease.

4) National Institute of Mental Health: Bipolar Disorder, Contains absolutely anything about bipolar disorder that is reasonably concrete information: treatment, symptoms, diagnosed types, and everything in between.

5) Evidence of Brain Chemistry Abnormalities in Bipolar Disorder, Discusses connections between brain cell count and brain chemicals as possible explanations for bipolar disorder.


6) What Causes Bipolar Disorder?, A discussion of causes of bipolar disorder, looking specifically at cell counts and the neurotransmitter system.


7) Scientists Close in on Multiple Gene Sites for Manic Depressive Illness, A genetic study contradicting the one introduced prior. It focuses mainly on chromosome links to bipolar disorder.


Stradivarius: Unsurpassed Artisan or Just Lucky?
Name: Sarah Kim
Date: 2003-12-18 00:26:29
Link to this Comment: 7550


<Stradivarius: Unsurpassed Artisan or Just Lucky?>

Biology 103
2003 Third Paper
On Serendip

There are about seven hundred Stradivarius violins still intact from the 17th and early 18th centuries, and they are among the most sought-after instruments in the world (3). Most, if not all, of the greatest violinists of modern times believe that there is something in the Cremonese violins that provides tonal quality superior to that of all other violins. Skilled violinists can even distinguish between different qualities in the sound produced by individual Stradivarius violins. The challenge for scientists is to characterize such differences by physical measurements. In practice, it is extremely difficult to distinguish between a Stradivarius instrument and a modern copy on the basis of measured responses, because the ear is a supreme detection device and the brain is a far more sophisticated analyzer of complex sounds than any system yet developed to assess musical quality. There have been many theories as to why Stradivarius violins produce such legendary brilliance and resonance, none providing a conclusive answer.

To understand the factors that affect the quality of sound a violin produces, one must first understand how the violin functions. Sound is produced by drawing a bow across one or more of the four stretched strings, but the strings themselves produce almost no sound. The energy of the vibrating string is transferred to the sound box, the main body of the violin. The bridge, which supports the strings, acts as a mechanical transformer: it converts the transverse forces of the strings into the vibrational modes of the sound box (4). The bridge itself also has resonant modes and so plays a role in the overall tone. The front plate of the violin is expertly carved with f-holes, which boost the sound output at low frequencies through the Helmholtz air resonance, the action of the air bouncing backwards and forwards through the f-holes (1). The front and back plates are then skillfully carved to obtain the right degree of arching and variation in thickness. Even the tiniest changes in the thickness of the plates and the smallest variations in the properties of the wood will significantly affect the resonances across the frequency range (1).
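The Helmholtz air resonance mentioned above can be estimated with the standard resonator formula f = (c / 2π) · sqrt(A / (V · L_eff)). A rough sketch in Python; the f-hole area, body volume, and effective "neck" length below are assumed, violin-like values for illustration, not measurements of any real instrument:

```python
import math

def helmholtz_frequency(area_m2: float, volume_m3: float, l_eff_m: float,
                        c: float = 343.0) -> float:
    """Resonant frequency (Hz) of a Helmholtz resonator:
    f = (c / 2*pi) * sqrt(A / (V * L_eff)),
    where A is the opening area, V the cavity volume, L_eff the
    effective neck length, and c the speed of sound in air."""
    return (c / (2 * math.pi)) * math.sqrt(area_m2 / (volume_m3 * l_eff_m))

# Assumed, violin-like numbers: ~10 cm^2 of combined f-hole area,
# ~2 L of body volume, ~3 cm effective neck length for the openings.
f0 = helmholtz_frequency(area_m2=1.0e-3, volume_m3=2.0e-3, l_eff_m=3.0e-2)
# f0 comes out in the low hundreds of Hz, the same ballpark as the
# violin's main air resonance (roughly 280 Hz).
```

Even this crude estimate shows why the resonance sits at low frequencies: the small f-hole area divided by the comparatively large body volume keeps the square root, and hence the frequency, low.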

There are many theories as to the "secret" of the Stradivarius violins. The first things explored, naturally, were the exact dimensions of the violins and the ratios of their parts to one another. It was proposed that the magic lay in some perfect ratio of measurements, but instrument makers have disassembled Stradivarius violins, calibrated every dimension of the pieces to within a hundredth of an inch, and replicated the measurements perfectly in new instruments, yet failed to duplicate the Stradivarius magic (2). Another factor to consider is that almost all Cremonese instruments underwent extensive restoration and improvement in the 19th century. For example, both the bass bar and the sound post were made bigger to strengthen the instrument and increase the sound output. The bass bar is glued underneath the top plate to stop energy from being dissipated into acoustically inefficient higher-order modes. The sound post is a solid rod wedged between the back and front plates; it causes the bridge to rock, making the plates vibrate with larger amplitude and producing a stronger sound (1).

Some tests suggest that early Italian makers such as Stradivari may have tuned the resonant modes of the individual front and back plates to exact musical intervals (1). These modes would have been identified by the traditional flexing and tapping of the plates; in essence, the violin maker's brain provided the interpretive computing power to perform nodal analysis. This would be consistent with the prevailing Renaissance view of perfection, which was measured in terms of numbers and exact ratios. Unfortunately, there is no historical evidence to support this case. Physicists have used laboratory equipment to analyze the vibrational patterns of Cremonese front and back plates and had craftsmen carve new plates that faithfully reproduce those patterns, yet the extraordinary, brilliant tone of Stradivari's violins is still missing (2). Moreover, top players regularly return their instruments to violin makers to optimize the sound by moving the sound post and adjusting the bridge, showing that there is no unique set of vibrational characteristics for any particular instrument, even a Stradivarius (1).

A claim was made by one of the last famed Cremonese violin makers, Joannes Baptista Guadagnini, that Stradivari's secret lay in using wood that had been dry-aged, with no extra treatment (3). The problem is that in Venice, from 1700 until 1720, when Stradivari produced his most prized and valued violins, wood supplies were tightly controlled by government authorities. People would have been thrown in jail simply for walking out and cutting wood from the forests. Authorized woodcutters felled trees and dumped the logs into rivers, where they were carried downstream to the capital. By the time violin makers had access to the wood, it had been sitting in water for weeks or even months at a time (2). When wood shavings from Cremonese instruments were examined, residues of bacteria and fungi showed up, just as one would expect in wood that had been sitting in water. This suggests that Guadagnini may have been deliberately misleading people so that nobody could replicate the great masters.

One of the most widely known theories is that the secret lies in a special kind of varnish used. Scholars from Cambridge University used electron microscopy to identify many of the ingredients of the varnish itself and the materials used to smooth the surface before the varnish is applied (1). They concluded that most could have easily been bought from the pharmacist shop next to Stradivari's workshop and that there is no convincing evidence to support the idea of a secret formula. Joseph Nagyvary has a slightly different take on the varnish issue. He claims that the local lumberman and the local apothecary simply happened to supply Stradivari with the ideal wood and perfect varnish; the production of his magnificent and extraordinary instruments was just a lucky accident (3).

The secret to producing such amazing tonal quality, he claims, lies in the varnish. Nagyvary proposes that the insect-repelling mixture of "salt of gems" (finely crushed crystals) and borax that the Cremonese violin makers used as varnish is what fossilized the wood to a perfect pitch (3). He believes the violin makers treated their wood with mineral solutions, which is not a far-fetched idea: the alchemy books of the time had plenty of recipes for mineral-rich wood preservatives, used by furniture makers to protect chairs and tables against insect damage and general rot. Salt of gems was commonly used as well, to add stiffness to the wood and make the finish glitter. Nagyvary's idea is that the accidental chemical reaction of phosphates and wood lifted Stradivari's violins to a whole new level.

The finish of the most pristine surviving Stradivarius instruments has a brittle, almost glassy look. If Stradivari's varnish contained sugar or a polysaccharide, the molecules would have attached to one another and to the wood, stiffening it so it could vibrate more efficiently (4). Fruit-tree extracts were widely used in wood varnishes as well, and Nagyvary claims that their pectin forms polymers that further contribute to the superior brilliance of the Stradivarius tone (3). Unfortunately, ultraviolet photography has revealed that many fine-sounding Italian violins have lost almost all of their original varnish; they were recoated during the 19th century or later (1). The composition of the varnish, therefore, may have had little to do with the overall superior tonal quality of the Stradivarius violins.

Another important finding by Nagyvary is that violins acknowledged to be great by expert listeners all look similar on a sound analyzer. He found that the sound pattern almost exactly reproduces that of a human voice (2). He built violins to match the spectrogram results of Stradivarius violins, which register strongly between 4,000 and 6,000 Hz, the zone where the human ear is most sensitive. Shunsuke Sato, a top violinist, played both a Stradivarius violin and Nagyvary's replica as a test. Though Nagyvary's violin exhibits an uncommon brilliance and resonance, the Stradivarius sounded much warmer, and even an untrained ear could hear a distinct difference (3).

A newer theory attributes the uncommonly amazing sound of the Stradivarius violins to the climate of the period. A tree-ring dating expert from the University of Texas and a climatologist from Columbia University claim that the wood used by Stradivari developed special acoustic properties as it was growing, because of a "Little Ice Age" (4). They note that an extended period of long winters and cool summers gripped Europe from the mid-1400s until the mid-1800s, and that the coldest stretch was a seventy-year period from 1645 until 1715 known as the Maunder Minimum. This change in climate affected wood density, yielding uncommonly dense Alpine spruce for Stradivari and, they argue, superior tonal quality. Stradivari was born the year before the Maunder Minimum began and produced his most prized and valued instruments from 1700 until 1720, right at the end of the period. These experts write that the narrow tree rings that characterize the Maunder Minimum in Europe played a role in the enhanced sound quality of instruments produced by the Cremona violinmakers: the narrow rings would not only strengthen the violin but also increase the wood's density (4).

Overall, science has provided no conclusive answer on the existence, or otherwise, of any measurable property that would set Stradivarius violins apart from the finest violins made by skilled craftsmen today. Yet the top soloists and the violin dealers remain convinced by the legend of the Stradivarius violins. Perhaps this is due to a certain snobbery on the part of the violinists, attempting to set themselves apart as elite. Perhaps it is the dealers who do not want people questioning whether the name of Stradivari alone is worth a million dollars. Certainly, the secret of the Stradivarius violins remains elusive. In light of the new study on the climate of 17th-century Italy, I would find it interesting for scientists to grow a crop of trees in controlled climates to produce exceptionally narrow rings, then attempt to recreate the magic of the Cremonese violins; this may be the real key to unlocking Stradivari's secret. Personally, I find it intriguing that science cannot account for the brilliance of the Stradivarius sound. Highly trained individuals can detect the difference between a Stradivarius and a modern copy, however good the copy may be. Skilled listeners can distinguish the Italian Cremonese sound from a more French tone. Experts can even tell one individual Stradivarius from another. Yet we do not know how to characterize such properties in meaningful physical terms.


References


1) Science and the Stradivarius, PhysicsWeb, April 2000

2) Stradivari's Secret, Discover, July 2000

3) Stradivarius: Artisan or Accidental Chemist?, November 2001

4) Cool Weather May Be Stradivarius' Secret, CNN.com, December 2003


Obsessive Compulsive Disorders, Obsessive Compulsi
Name: Elizabeth
Date: 2003-12-18 10:16:30
Link to this Comment: 7551


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Elizabeth Bryan
Biology 103
Final Paper

Obsessive Compulsive Disorders, Obsessive Compulsive Spectrum Disorders, and the P.A.N.D.A.S. Connection


As someone who has been plagued by an Obsessive Compulsive Spectrum Disorder since childhood, I can say it seems hopeless at times. For so long, sufferers feel that what they have is not a legitimate ailment and that they are alone in their battle. Thankfully, in recent years more and more research is being done on Obsessive Compulsive Disorders, and more answers are being found.
Obsessive Compulsive Disorders are the fourth most common psychiatric diagnosis. Sometimes the onset of symptoms is sudden, but more often than not it is a gradual progression. Precipitating events that can spur the onset of an Obsessive Compulsive Disorder include emotional stress (domestic or job-related), increased levels of responsibility, health problems, and bereavement. According to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, "the essential features of an Obsessive Compulsive Disorder are recurrent obsessions or compulsions that are severe enough to be time consuming (i.e., they take more than an hour per day) or cause marked distress or significant impairment. At some point during the course of the disorder, the person has recognized the obsessions or compulsions are excessive or unreasonable." It is important to note that this criterion is difficult to apply to children, because children, unlike adults, tend not to realize that their compulsions are excessive or unreasonable (1).
People develop compulsions by trying to ignore thoughts or impulses, or by trying to neutralize them with other thoughts or actions. Compulsions can be mental acts or behaviors, and include repeating words, ordering things, hand washing, and various other rituals. The goal of these compulsions is to prevent or reduce anxiety.
Because Selective Serotonin Reuptake Inhibitors (SSRIs) such as Prozac, Luvox, Zoloft, and Paxil are effective in controlling Obsessive Compulsive Disorders, it is believed that serotonin regulation plays a part in the cause of OCD. Serotonin is a very important chemical messenger in the brain and plays a role in a person's mood, aggression, impulse control, sleep, appetite, body temperature, and pain. Brain imaging studies have depicted various abnormalities in parts of the brains of Obsessive Compulsive Disorder sufferers, including the caudate nucleus, the basal ganglia, the thalamus, the orbital cortex, and the cingulate gyrus.
Disorders that share the obsessive compulsive symptoms of intrusive, repetitive behaviors are often called OC Spectrum Disorders. Among these are Trichotillomania, Monosymptomatic Hypochondriasis, Body Dysmorphic Disorder, and some eating disorders. Other disorders also coexist with Obsessive Compulsive Disorders and are referred to as Comorbid Disorders; the most common Comorbid Disorder is depression.
Trichotillomania is the chronic, repetitive pulling of bodily hair, most often from the scalp, eyebrows, eyelashes, and pubic areas. Sufferers feel great anxiety before pulling and a sense of relief after pulling out their hair. While more research needs to be done on Trichotillomania, it is believed to be related to abnormalities in brain function.
Body Dysmorphic Disorder is "characterized by preoccupation with a minor bodily defect or imagined defect which is believed to be conspicuous to others" (Pedrick, #1). Repetitive Body Dysmorphic Disorder behaviors include mirror checking, grooming, shaving, washing, skin picking, weight lifting, and comparing oneself with others. Both Trichotillomania and Body Dysmorphic Disorder can be treated with medication and cognitive behavior therapy.
It is now widely believed that in many cases Obsessive Compulsive Disorders are triggered by a phenomenon called "P.A.N.D.A.S.", or Pediatric Autoimmune Neuropsychiatric Disorders Associated with Streptococcal infections. This term describes children with Obsessive Compulsive or "tic" disorders (such as Tourette's Syndrome) whose symptoms worsen after a strep infection (2).
It has recently been found that in some children, the antibodies developed in the blood to fight a strep infection attack not only the strep but also perfectly healthy cells, including those of the basal ganglia region, which controls the body's motor movements and is also known to show abnormalities in OCD sufferers. In P.A.N.D.A.S. cases the onset of tics or compulsions is generally abrupt and may occur within days, weeks, or months following a strep infection.
Dr. Susan Swedo has been the most prolific researcher of P.A.N.D.A.S., and is the head of the Behavioral Pediatrics Department of the child psychiatry branch of the National Institute of Mental Health. Her department has been studying OCDs and their links to strep infections since 1986. According to Dr. Swedo, if the following criteria are met, a diagnosis of P.A.N.D.A.S. may be made:
1. "The presence of an OCD or tic disorder
2. Pediatric onset of symptoms (age three to pre puberty)
3. Abrupt onset or dramatic exacerbation of symptoms
4. Symptom exacerbation related to a strep infection
5. Neurological abnormalities during the exacerbation periods, which could include unwarranted fears, fidgeting, and difficulty in school" (2).

Dr. Swedo also notes that if P.A.N.D.A.S. hasn't occurred by age twelve or thirteen, it most likely never will.
New treatments are being tested for P.A.N.D.A.S. In 1999, Dr. Swedo reported at the annual meeting of the American Academy of Child and Adolescent Psychiatry that plasmapheresis was successful in treating P.A.N.D.A.S. She cited a preliminary study in which twenty-eight children were randomly assigned to receive plasmapheresis, intravenous immunoglobulin, or a placebo infusion. Within one month, tics declined fifty percent in the plasmapheresis patients and twenty-five percent in the patients who received immunoglobulin; no change was seen in the placebo group. Obsessive Compulsive Disorder symptoms were reduced by sixty percent with plasmapheresis, by forty-five percent with immunoglobulin, and virtually not at all with the placebo infusion (3).
Children treated with plasmapheresis showed greater improvement on global functioning measures, and they continued to improve for over a month after treatment. In those children, brain structures such as the caudate nucleus, basal ganglia, and globus pallidus, which are enlarged during P.A.N.D.A.S. flares, returned to normal size. Dr. Swedo even noted that in some cases the difference in the size of the caudate nucleus was visible (3).
Thankfully for OCD, OC Spectrum Disorder, and P.A.N.D.A.S. sufferers, more research is being completed every day. Because significant research only began in the late 1980s, what is known now will seem little in the years to come. The plasmapheresis treatments now being tested look very promising for P.A.N.D.A.S. For other OCDs, OC Spectrum Disorders, and Comorbid Disorders, cognitive therapy as well as medication has proved very effective. In my own experience with an OC Spectrum Disorder and depression, the combination of the right serotonin medication and therapy has allowed me to go on with my life without worrying about these disorders on a daily basis, as sufferers so often do.

References

1) What Causes OCD

2) Not Just a Sore Throat: Common Childhood Infection May Trigger Neurological Disorder

3) Strep Related OCD/Tic Treatment Info



Noah's Ark
Name: Justine Pa
Date: 2003-12-18 13:10:15
Link to this Comment: 7552

Justine Patrick
Professor Grobstein
Biology: Basic Concepts
Final Paper
December 17, 2003

Noah's Ark vs. Jurassic Park
As the human population of the world continues to increase, the flora and fauna of the planet are becoming an increasingly smaller part of the picture. Environmentalists and conservationists all over the globe are working hard to find strategies and methods for the preservation of disappearing creatures and species. An idea that could bring great benefits to the field of conservation became apparent in 1996 with the cloning of a sheep by the name of Dolly. Since then, a scientific debate on the relationship between cloning and conservation has ensued. Although a definitive answer remains on the horizon, cloning to help endangered species is a process that may become a frequent procedure in the future.
When one thinks of cloning, generally the first image that pops into one's mind is a large tube filled with some creature attached to a lot of wires. Cloning is actually a much more complicated and difficult process. Scientifically, cloning is defined as "asexual reproduction or as the creation of genetically identical individuals" (1). In the cloning process, the DNA of one individual creature is "copied" into the cell, or embryo, of another; that embryo then develops into a baby through the processes of gestation and embryological development. Currently, many scientists believe that if the DNA of an endangered species is rescued and preserved, it could undergo the cloning process and thus produce a clone of that species. The results would be enormous: it "will open a new front in the battle to preserve the Earth's biodiversity by cloning endangered gorillas, tigers and other rare species" (2). Cloning of endangered species would be a monumental achievement for the scientific community. Many people, when they think of cloning, picture some weird science-fiction creature in a test tube, but in real life cloning could be a valuable tool for the environmental community and a process that could vastly improve the current situation of endangered species.
Part of the reason the cloning process is so highly regarded is the level of complexity and the multitude of details that must unfold correctly for it to work. Robert P. Lanza says, "It is a deceptively simple-looking process. A needle jabs through the protective layer surrounding an egg. A research assistant sucks out the egg's nucleus, which contains the majority of a cell's genetic material, leaving behind only a sac of gel called cytoplasm. Another needle is injected and an electric pulse fuses the newly introduced cell to the egg and the early embryo begins to divide. Soon, it will become large enough to move to a surrogate mother" (3). All of these steps must go perfectly for the cloning process to succeed. Although reproductive physiology and endocrinology are not fully understood, scientists and biologists realize that many other hurdles must be cleared as well. One statistic on the success of cloning states, "For every 100 eggs that are fused with host cells the expectation is only between 15 and 20 to produce the first step on the cellular level, known as blastocysts. And generally less than a fraction of 10 percent of those yield a single birth... But even in this instance we have to work hard to produce just a few animals" (3). Although this complexity illustrates how difficult cloning is in practice, further advances in reproductive technology and research can be expected, making cloning a more attractive prospect as a method for the conservation of endangered species.
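The attrition quoted above can be turned into a back-of-the-envelope yield estimate by chaining the two rates. A minimal sketch using the figures from the statistic (15-20 blastocysts per 100 fused eggs, and under 10 percent of those yielding a birth); both rates here are rough bounds taken from the quote, not precise parameters:

```python
def expected_births(eggs: int, blastocyst_rate: float = 0.15,
                    birth_rate: float = 0.10) -> float:
    """Rough expected number of live births from a batch of fused eggs,
    chaining the two attrition rates quoted in the text.
    blastocyst_rate: fraction of fused eggs reaching the blastocyst stage
    (quoted as 15-20 per 100); birth_rate: fraction of blastocysts
    yielding a birth (quoted as under 10 percent)."""
    return eggs * blastocyst_rate * birth_rate

# For 100 fused eggs: roughly 15 blastocysts times ~10 percent,
# i.e. only one or two expected births per hundred attempts.
batch_estimate = expected_births(100)
```

The multiplication makes the point of the quote concrete: because the two attrition stages compound, even a hundred fused eggs are expected to produce only one or two animals.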
If the cloning process is successful, multiple questions arise with the introduction of a clone. Cloning, by definition, refers to copying one individual to make another. If a fundamental criterion for biological life is diversity, the process of cloning conflicts with this notion. Does that mean cloned animals and species are not living? My answer is no, because life is not based solely on diversity, although diversity is essential to the success of a species. According to Scientific American, "cloning's main power, however, is that it allows researchers to introduce new genes back into the gene pool of a species that has few remaining animals" (3). Cloning allows researchers and scientists to use particular specimens to improve the numbers of a population. Although the genetic material of the clone comes from one specific subject, that subject does not dictate all the genetic characteristics of the species, and every living thing, even a clone, retains certain fundamental characteristics of its own. In a commentary, Oliver Ryder suggests that "the application of cloning technology stands to have a long-term benefit of increasing retention of genetic variation in small populations because the consideration of preservation of gene pools is a potentially major aspect of mammalian cloning" (1). He suggests, in other words, that cloning individual specimens can help preserve the diversity remaining in a gene pool, even though each clone's genetic material is identical to its donor's. As reproductive technology improves and cloning becomes a strong prospect for the conservation of endangered animals, these questions about the loss or lack of diversity that cloning might cause must be addressed.
The debate on the effectiveness and benefits of cloning for saving endangered animals also arises in the upbringing of the animal itself. One key factor in the cloning process lies in the actual birth of the animal: after the embryo has grown to a reasonable size, it must be moved to a surrogate mother, a female animal of a similar species, for the gestation period. As an article in Scientific American states, "A clone still requires a mother, however very few conservationists advocate rounding up wild female endangered animals for that purpose or subjecting a precious zoo resident of the same species to the rigors of assisted reproduction and surrogate motherhood. That means that to clone an endangered species, researchers such as ourselves, must solve the problem of how to get cells from two different species to yield the clone of one" (3). Even if a matching surrogate mother is found, other questions arise. One article that supports cloning notes that "If cloning were to become utilized, it would focus attention on the surrogate dams, [mothers], including their behavior. The importance of mother/infant relationships for instance, especially with regard to reinforcing appropriate natural behaviors adaptive to the wild for reintroduction purposes would receive increase scrutiny. The positive side of this is that females that impart desirable behavioral traits to their offspring could do so to genetically unrelated individuals" (1). If the mother of the clone is a different species, a possible outcome is that the clone would behave like its surrogate mother. This leads to the question of nature versus nurture: does the environment of a specimen dictate its actions, or are there certain social and behavioral characteristics exhibited by different species because of their biological background?
If, during the cloning process, a panda happens to be raised by a black bear and exhibits the same characteristics as a black bear cub, there could be major alterations to the outcome of the cloning process and to the success of that clone in years to come. There have, however, been successful "interspecies embryo transfers" (3): "An Indian desert cat into a domestic cat; a bongo antelope into a more common African antelope; a mouflon sheep into a domestic sheep and a rare red deer into a common white-tailed deer. All yielded live births" (3). Only time will tell whether the surrogate influences the behavior of the creatures cloned. The influence of the surrogate mother on the clone's behavior after the gestation period is another open question for the cloning process, but a successful outcome in both respects could lead to better methods for the survival of endangered species.
Conservationists and scientists continue to support cloning as a method for saving endangered mammals, but the cloning of a selected specimen yields an animal of only one sex. If the decision to clone a certain animal has been made and only one sex is represented, how does that benefit the success of the species? A solution comes from Advanced Cell Technology (ACT): "ACT plans to try to make a male by removing one copy of the X chromosome from one of the female specimen's cells and using a tiny artificial cell called a microsome to add a Y chromosome from a closely related species" (3). If this process works, Noah's idea of "two by two" (a female and a male) is not necessary. This genetic engineering of animals can also be considered by some as "playing God". But the success of a species resides in the ability of its animals to reproduce, one of the fundamental criteria for categorizing biological life.
Cloning as a method of preservation also involves the storage of the genetic material needed to begin the cloning process, which starts with the DNA of the creature being cloned. This idea is captured in a popular catch phrase in the conservationist community, "frozen zoos": cylinders of frozen genes from hundreds of rare plants and animals around the world, used in high-tech efforts to save endangered species and possibly in cloning (2). Scientists would be able to take samples of an endangered animal's DNA and cryogenically freeze them until there is an opportunity to use the genetic material to further the success of the species. This sounds similar to a modern-day creation of Jurassic Park. There has been speculation about the possible reintroduction of a 1,000-year-old woolly mammoth, but the validity of this idea is limited because of the damage done to the cells by changes in the temperature of the ice over the years. So the Jurassic Park idea and the reintroduction of dinosaurs would not be applicable, because no genes or genetic material survive that could allow for cloning. Cryogenics is a method that allows for the retention of the DNA necessary for the cloning process, and thus a method of preservation for endangered animals.
The world is changing, and the animals and wildlife of the planet are losing the battle with industrialization. One method for saving these creatures lies in cloning. Cloning is a difficult process with many complicated aspects that are necessary for the success of the procedure. Although many may feel that this is the easy way out of conservation, or that it will change the focus of conservationists, the main goal is to allow the continuation of species that, without the help of scientists and conservationists, would disappear. The combination of the cloning process and an increase in methods for preserving natural habitats will result in a longer-lasting legacy and the ability for future generations to experience the majesty of the Earth's many diverse endangered species.

Works Cited
1.) Ryder, Oliver A., and Kurt Benirschke. "The Potential Use of 'Cloning' in the Conservation Effort." Zoological Society of San Diego 16 (1997): 295-300.

2.) Fagin, Dan. "Gene Bank Acts as a High-Tech Noah's Ark." Long Island Queens: Our Future.

3.) Lanza, Robert P., Betsy L. Dresser, and Philip Damiani. "Cloning Noah's Ark." Scientific American 19 (2000). 17 Dec. 2003.

4.) Smith, Peter T. "Director's Diary: Cloning and Conservation." Concerning Conservation Newsletter 2 (1999). 17 Dec. 2003.


Should the Morning-After Pill be Available Over-the-Counter?
Name: Natalya Kr
Date: 2003-12-18 17:05:20
Link to this Comment: 7553


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Last Tuesday, advisors to the Food and Drug Administration voted to make the "morning-after" pill available over-the-counter (1). The FDA has not yet acted on this recommendation (1). The morning-after pill is the vernacular term for emergency contraception, specifically two pills sold under the commercial name "Plan B," which have the ability to inhibit and, depending on one's perspective, possibly to terminate unwanted pregnancies. The FDA approved the first version of the morning-after pill for prescription use in 1998 (1). The issue today is whether it should be available without a prescription.

The morning-after pill is essentially a high dosage of the birth control pill (2). It can contain progesterone, estrogen, or both (2). It can prevent fertilization in the fallopian tubes by altering sperm and egg transport or by preventing or delaying ovulation, and it can prevent fertilized eggs, or zygotes, from implanting in the uterus by thickening the uterine lining (1). It is not effective if the process of implantation has already begun (5). The morning-after pill is not to be confused with RU-486, the so-called abortion pill, which terminates a zygote implanted in the uterine lining (1).

The three mechanisms of the morning-after pill do not necessarily all take place every time it is used, and it is impossible to determine which, if any, of them prevented implantation in any successful case (3). One controversial ethical issue surrounding the morning-after pill is whether it is tantamount to abortion. The debate concerns whether pregnancy and life begin with a fertilized egg or with its implantation.

If conditions in the uterus are ideal, a zygote will begin to implant itself in the uterine lining after about six days and take several more days to complete the process of implantation (3). One reason many scientists have chosen to define implantation as the beginning of pregnancy is that half of all zygotes do not survive beyond two weeks even if no action is taken to destroy them, so un-implanted zygotes are not considered necessarily viable (3). According to the Mayo Clinic, the morning-after pill prevents pregnancy from occurring, because it does not terminate a developing zygote implanted on the uterine wall (1). According to the American Bioethics Advisory Commission, preventing a zygote from implanting in the uterine lining is technically abortion, because life and pregnancy begin with conception (2). The American Heritage Dictionary defines conception as the "formation of a viable zygote by the union of the male sperm and the female ovum; fertilization" (10). Even this definition leaves room for interpretation about whether or not a zygote is viable, but for those who believe that every fertilized human egg is a human life, the debate here is identical to the debate over whether or not surgical abortion should be legal. It is a question of how human life is defined, when it begins, and under which circumstances, if any, it is permissible to end it. In polls, most Americans have demonstrated a preference for earlier abortions over later ones (7). For those who see more shades of gray in such matters, the question becomes: is the morning-after pill a better, more humane, and safer option than surgical abortion, particularly late-term or partial-birth abortion, and if so, should it be made readily available?

In many ways, the morning-after pill is a remarkable advance over previously available methods of dealing with the prospect or reality of unwanted pregnancies. In the past, women had to wait weeks for the results of pregnancy tests and then weeks more until they were at a stage of development when surgery was possible (7). The morning-after pill has the ability to prevent women from getting pregnant in the first place, and it is not invasive, which makes medical complications far less likely. Still, critics point out that no studies have yet been conducted to reveal the long-term effects of the morning-after pill on the women who use it, especially those who use it more than once (1).

Sharon Camp, president and CEO of Women's Capital Corporation, which makes the morning-after pill, claims that 48% of American women have had at least one unplanned pregnancy, with higher rates for teenagers and women over 40 (8). Dr. Carole Ben-Maimon, president of Barr Research, claims that 15% of women who use condoms and 8% of women who take birth control become unintentionally pregnant, and that over 3 million unintended pregnancies occur each year, half of which terminate in abortions (1). She says it is estimated that half of these unintended pregnancies could be prevented with the morning-after pill (1). The United States abortion rate dropped 5% between 1996 and 2000 (7). Researchers at the Guttmacher Institute believe that much of this drop can already be attributed to increased use of emergency contraception (7). The morning-after pill is already available over-the-counter in 31 countries and 5 states (1). In the Netherlands, where it is available over-the-counter, the abortion rate for women ages 15 to 19 is about 4 abortions per 1,000 women, compared to nearly 30 abortions per 1,000 women in the U.S. (9).

The designers of the morning-after pill claim that when it is taken within 72 hours of unprotected sex, it reduces the chance of pregnancy by 89 percent (1). Research shows the treatment to be most effective when taken within 24 hours of intercourse (1). Experts from the World Health Organization say that a woman's chance of becoming pregnant from unprotected sex doubles if she delays taking the morning-after pill for only 12 hours, and increases with longer delays (4). Getting to a doctor for a prescription can be difficult, if not impossible, on weekends or without a previous appointment. This is one of the main reasons why experts believe the drug should be available over-the-counter instead of by prescription only (1).

Some argue that the morning-after pill will promote promiscuity because it alleviates the consequences of having sex before marriage (8). To begin with, the side effects of the morning-after pill include nausea, fatigue, headache, abdominal pain or cramps, dizziness, breast tenderness, diarrhea, and moodiness, hardly a charming combination (5). More importantly, a teenager interviewed for CBS News in June said, "I don't think that [the morning-after pill] causes people to be reckless any more than like airbags cause people to get into accidents. It's just something nice to have just in case something goes wrong, you know" (5). Her analogy is an apt one and illustrates the point that women who use the morning-after pill are, in general, trying to correct a personal mistake, not to accommodate a careless lifestyle.

In summary, for those who believe that life begins with conception and that terminating an unborn human life is always unacceptable, the morning-after pill is just another method of committing an immoral act; but for the rest, who do not consider all methods of terminating or preventing a pregnancy to have the same moral implications, the morning-after pill is a godsend. It corrects for the chance ineffectiveness of contraceptive methods like condoms and birth control without invasive surgery, and, in some cases or all, depending on one's perspective, without the moral quandary of terminating a pregnancy. The pill has the potential to reduce the number of surgical abortions and unwanted pregnancies in the United States dramatically, especially if it is provided over-the-counter. The abortion rate in countries like the Netherlands, which already provide it over-the-counter, is marginal. The pill is significantly more effective the sooner one takes it after intercourse, and the great advantage of making it available over-the-counter, as opposed to by prescription only, is that it would give a greater number of women faster and easier access to it. This accessibility is crucial for women to be able to make effective use of the pill. Luckily for the women of America, the FDA's advisory panel has recommended making the morning-after pill available over-the-counter in the United States as it is in other countries. One can only hope that the FDA will follow suit.

WWW Sources

1) Health: Panel backs over-the-counter "morning-after" pill. (CNN.com)

2)The Morning After Pill: What You Need to Know About Emergency Contraception (American Bioethics Advisory Commission 2002)

3) Pro-Life Activities: Life Insight.

4) Health: Pregnancy risk of "morning-after" pill (BBC News Online)

5) McKinley Health Center at University of Illinois Urbana-Champaign: Plan B Emergency Contraception (Morning-After Pills)

6) The Early Show: The Morning-After Pill, For Emergencies Only.

7) The Boston Globe: Women having earlier abortions; "morning-after" pill use rises.

8) CityPages.com: The Morning-After Pill Gets a Push.

9) Smn.com: There's got to be a morning after.

10) The American Heritage Dictionary of the English Language: Third Edition. New York: Houghton Mifflin Company, 1996.


Head Case
Name: Stefanie F
Date: 2003-12-19 01:42:43
Link to this Comment: 7556


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Most of the little girls I knew in my childhood liked to play dress-up, host tea parties, play with dolls, paint, and do other "normal" children's activities. When I was a little girl I enjoyed painting and hosting tea parties, and I spent many of my weekends and school vacations competing in both national and international martial arts tournaments and exhibitions. Martial arts taught me self-discipline, self-control, and self-awareness as a child. It kept me physically fit and made me more confident in my abilities. However, as I progressed through the ranks I spent more time training and much more time competing.

Once a practitioner reaches the level of Black Belt, all sparring matches become full contact, meaning blows to the head, neck, and below the waist are scored as hits rather than fouls by sporting rules. I reached the rank of Black Belt at age eight, meaning I participated in full contact sparring matches for roughly six years. In addition to competing as a martial artist, I was also an amateur boxer for two years, from ages 13 to 15. At the time, I didn't think about the consequences of the sport I had chosen. The daily punishment of taking one or two hard blows to the head didn't seem troublesome then. All competitors wore gloves in addition to protective headgear and mouthguards. In the roughly ten years in which I fought competitively, I sustained several concussions, only one of which involved a loss of consciousness. You may wonder, however: what is the clinical definition of this condition, and how are concussions diagnosed?

What is a Concussion?

When an injury to the brain is sustained, the brain bounces against the hard bone of the skull. The force of the hit against the skull may cause "tearing or twisting" of structures and blood vessels in the brain. This "tearing or twisting" deep within the brain tissue causes a breakdown in the normal flow of messages within the brain. This breakdown is the biological explanation of a concussion (1).

Oh No! Am I Concussed?

There are over 600,000 cases of concussion sustained in the United States alone each year. Symptoms include loss of consciousness, dizziness, nausea or vomiting, increased size of one pupil, loss of memory, severe headache, weakness in one or more extremities, and changes in behavior. These symptoms may last anywhere from a couple of hours to several weeks or months, depending on the seriousness of the injury, according to most physicians (1).

How is Concussion Severity Determined?

There are over sixteen concussion grading systems noted in the medical literature; however, the most widely used was developed by Dr. Robert C. Cantu in 1986. In 2001, he revised these guidelines, placing the emphasis on posttraumatic amnesia (PTA) and other post-concussion symptoms, as opposed to loss of consciousness, when determining concussion severity. The Cantu Scale goes from grade one to grade three. Grade one is a mild concussion in which there is no loss of consciousness, but PTA or post-concussion symptoms last anywhere from fifteen minutes to half an hour. A grade two concussion is moderate, with loss of consciousness for no more than a minute, or PTA or post-concussion symptoms lasting longer than thirty minutes but no more than 24 hours. The most severe concussion, grade three, involves loss of consciousness for more than a minute, or PTA or post-concussion symptoms lasting longer than seven days (2).
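The grading rules above amount to a simple set of thresholds, which can be sketched as a small hypothetical helper function (not part of any clinical tool; the cutoffs are taken only from the description above, and durations falling between 24 hours and 7 days, which that description leaves unassigned, default here to grade two):

```python
def cantu_grade(loc_seconds, symptom_hours):
    """Approximate Cantu concussion grade from the thresholds described above.

    loc_seconds   -- duration of loss of consciousness, in seconds
    symptom_hours -- duration of PTA / post-concussion symptoms, in hours
    """
    # Grade three: unconscious for more than a minute, or symptoms > 7 days.
    if loc_seconds > 60 or symptom_hours > 7 * 24:
        return 3
    # Grade two: any loss of consciousness up to a minute, or symptoms > 30 min.
    if loc_seconds > 0 or symptom_hours > 0.5:
        return 2
    # Grade one: no loss of consciousness, symptoms of roughly 15-30 minutes.
    return 1

print(cantu_grade(0, 0.25))   # no LOC, ~15 min of symptoms -> 1
print(cantu_grade(30, 1))     # brief LOC -> 2
print(cantu_grade(120, 1))    # LOC over a minute -> 3
```

This makes it easy to see why the 2001 revision matters: the grade is driven mostly by symptom duration, so a concussion with no loss of consciousness at all can still be graded three, exactly the situation the author describes below.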

Now, at age eighteen, I have been removed from martial arts and boxing for almost four years. In those four years I have sustained two additional concussions while playing basketball, both of grade one severity. About a month and a half ago I sustained a third grade one concussion, which has since been upgraded to grade three, as I am still experiencing post-concussion symptoms almost two months after the initial injury. The blow to my head in a routine drill, which was not even a hard hit, has also led to the end of my collegiate basketball career. Has my history of previous head trauma led to my current delay in recovery? Am I now more susceptible than other athletes to suffering a concussion? Most importantly, will I suffer long-term damage from the injuries sustained during my youth and high school athletic years?

When is it Okay to Play?

One of the questions most frequently asked of physicians and certified athletic trainers is: when is it okay to play? There is no set timetable for concussed athletes with regard to when they should return to the field, pitch, court, pool, or other venue of competition. Researchers at the University of Pittsburgh have developed new technology known as the Immediate Post-Concussion Assessment and Cognitive Testing system, or ImPACT. The purpose of the ImPACT technology is to test the memory, reaction times, and processing times of athletes. A baseline assessment obtained at the beginning of a season serves as a comparison for an athlete's performance post-concussion. The results allow physicians, coaches, and trainers a more objective method of deciding the status of athletes following injury (3). ImPACT is affordable technology that is available to athletes at all levels and is currently used by the National Football League and the National Hockey League, in addition to high school, club, and middle school teams. Strict adherence to revised concussion diagnosis guidelines, along with implementation of new diagnostic technology, can help keep impaired athletes from further brain damage by allowing for ample recovery time. "On-the-field amnesia," not loss of consciousness, is most important in diagnosing concussion severity and in deciding whether an athlete should be allowed to return to competition (4). With proper diagnosis and adequate recovery time, coaches, physicians, athletic trainers, and other health care professionals can prevent possible long-term damage.
Research by the team at the University of Pittsburgh has concluded that concussion is cumulative. Simply put, the more a person is hit in the head, the more slowly he or she is likely to recover, leading to a greater risk of another concussion. According to this research, I am indeed more susceptible to future concussions.

Concussion is one of the more elusive injuries in the sports medicine community. Diagnosis and treatment are hard to determine, as are the long-term effects of concussion. These unanswered questions leave me, a victim of multiple concussions, uncertain about my future. My neurologist is unable to assure me that future problems do not lie on the horizon, as the deeper, lasting impact of concussion is still a mystery.

References

1) Concussion, a guide to concussions for families provided by University of Missouri Health Care, Neuro-medicine

2) Concussion Grading Systems and Return-to-Play Guidelines: A Comparison, by Robert C. Cantu, M.D., a resource outlining the symptoms of concussions and how they are graded

3) Serious Effects of Mild Concussions, a sports medicine article discussing the concerns about the long-term effects of concussions

4) For Young Athletes with Concussion, On-the-Field Amnesia, Not Loss of Consciousness, Predictive of Post-Injury Symptom Severity, an article reporting the results of a study of concussed student athletes by the University of Pittsburgh Medical Center


'Golden Rice': who would have thought something s
Name: Abby Fritz
Date: 2003-12-19 10:59:05
Link to this Comment: 7557


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Heated debate over the bioengineering of a type of rice that has come to be called 'golden rice' has taken place over the past five years. Exploration of the possibilities that would follow the mapping of the rice genome began in response to the huge populations in developing countries that experience vitamin deficiencies, namely of vitamin A. When the biotech company Syngenta announced that it had mapped the rice genome, a series of activist groups spoke out against a project that was, many argued, politically and financially motivated. I was surprised to find how intense the debate over this topic has been. Developing a kind of rice containing a vitamin that is lacking among large populations seems like such a great idea, so why does so much controversy surround the project? There are many more disadvantages to the introduction of the new technology than one might anticipate. The following first explores the effects of vitamin A deficiency and then the arguments of the opposition and supporters' responses to them.

Vitamin A is an organic compound that is needed in small amounts in the human body; however, a deficiency in this micronutrient can lead to problems and illnesses (3). The vitamin is found naturally in many plant and animal foods, in the form of retinal in animals and carotene in plants (3). Vitamin A produces the retinal pigments that are very important for night vision; the vitamin is also important in maintaining the strength of epithelial tissues (5). Without proper amounts of vitamin A, the outer lining of the eyeball becomes dry and wrinkled, leading to redness and inflammation and potentially to blindness (3). Sources vary, but on average it is believed that as many as two million children die each year due to vitamin A deficiencies and that another 500,000 go blind (2).

It is because of these kinds of numbers that researchers have been searching for ways to bring more vitamin A into foods that are part of the diets of people in at-risk countries, especially in Southeast Asia. A genetically engineered rice, 'golden rice,' has been named one potential solution. Rice is a staple food in most of the countries that have been experiencing numerous health issues due to malnutrition, the greatest deficiencies being of vitamin A. Traditional rice, however, lacks vitamin A; this is largely due to a process called rice polishing that was introduced by the Green Revolution (4). The process involves removing the aleurone layer, because it causes the rice to turn rancid more quickly during storage (9). 'Golden rice' is meant to return vitamin A, in the form of its precursor beta-carotene, to rice so that the millions of people who rely on it as a major food source gain some nutritional value from it (2).

The concept sounds noble and logical enough; it sounds like a perfect solution to a widespread problem. Why not add vitamin A to rice? What could really be lost in the process? According to the numerous critics of 'golden rice' research and the recently completed rice genome map, nothing but problems will arise upon the integration of the rice into the diets of the malnourished people the entire project claims to want to help. One of the most substantial arguments critics turn to is that the addition of vitamin A to a staple food such as rice would be far from fixing the problem of malnutrition. People who are severely malnourished may not even be able to absorb the vitamin A ingested through rice if their overall nutritional status is not otherwise satisfactory (11). Beta-carotene, the precursor of vitamin A, is fat-soluble and therefore requires dietary fat for absorption (8). Thus the "digestion, absorption, and transport of b-carotene require a functional digestive tract, adequate protein and fat stores, and adequate energy, protein, and fat in the diet" (8). Because vitamin A deficiency is typically accompanied by deficiencies of many other micronutrients, the proper absorption of vitamin A is easily hindered among the target population of 'golden rice.' Those with diarrhea, which is common in developing countries, are also unable to take in the vitamin A from rice (10).

Let us imagine for a moment that the bodies of those individuals who have deficiencies could absorb the vitamin A in 'golden rice.' Critics also argue that experimenting with the possibilities of vitamin A in rice is trivial, because the amount a person would have to consume to take in even half of the necessary daily value of vitamin A is simply not practical. The average person would have to eat twelve or more pounds of rice to meet their daily vitamin needs (2). Realistically, three servings of a half pound of cooked 'golden rice' per day would provide only 10% of a person's daily vitamin A requirement, and less than 6% for a woman who was breast-feeding (10).
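These two figures are at least roughly consistent with each other, as a quick back-of-the-envelope check shows (a sketch using only the numbers cited above; the variable names are our own):

```python
# If 1.5 lb of cooked 'golden rice' per day supplies 10% of the daily
# vitamin A requirement, how many pounds would supply 100%?
daily_intake_lb = 3 * 0.5    # three half-pound servings per day
fraction_of_rda = 0.10       # share of the requirement those servings supply

pounds_for_full_rda = daily_intake_lb / fraction_of_rda
print(pounds_for_full_rda)   # prints 15.0, in line with "twelve or more pounds"
```

In other words, the "twelve or more pounds" claim and the "three servings give 10%" claim describe the same impracticality from two directions.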

The combination of the fat and other nutrients necessary to absorb vitamin A and the impracticality of digesting many pounds of rice a day has led to the suggestion that malnutrition is not merely a nutrition problem but also a social problem (12). Dr. Samson Tsou, Director General of the Asian Vegetable Research and Development Center (AVRDC), proposes that "income generation, healthy diet and proper education need to be improved simultaneously for sustainable development" (12). He, as well as numerous others, thinks that the priority should be to increase production of vegetables with high vitamin A content, as many leafy green vegetables and green and yellow fruits are rich in the vitamin (3). Nutrition experts have determined that "a pre-school child's daily requirement of vitamin A can be met with just two tablespoons of yellow sweet potatoes, half a cup of dark green leafy vegetables, or two-thirds of a medium-sized mango; and unlike golden rice, these vegetables supply other micronutrients as well" (10). It is very possible that even the distribution of massive doses of vitamin A every six months, in addition to educating people to eat green leafy vegetables and yellow fruits like papaya daily, could be more effective than the proposed benefits of 'golden rice' (5). Such doses have been given in the past in the amount of 200,000 I.U. every six months, based on the property of vitamin A that allows it to be stored in the liver and used as needed over a long period of time (5). These methods could be more effective as part of a plan to reduce vitamin A deficiency in the long term. In addition, encouraging people to maintain diversity in their diets keeps those diets well balanced.

Not only might the introduction of 'golden rice' as a "cure" for vitamin A deficiency reduce diversity among the foods eaten by an enormous population of the world, but this "prescriptive approach in which only a few varieties will contain the trait will further worsen genetic erosion," warns MASIPAG (the Farmer-Scientist Partnership for Development, Inc.) in the Philippines (12). Local groups such as this are reluctant to combat a socio-economic problem through an artificially produced solution, as they are still attempting to recover from the Green Revolution, which they have totally abandoned (12). They see maintaining, or in some cases re-introducing, biodiversity in many forms of sustainable agriculture as key, because they have seen it practiced successfully by tens of millions of farmers all over the world (9). These local groups also distrust artificial gene constructs, which have been known to be structurally unstable because they easily break up and join incorrectly with other parts of genetic material, resulting in new and unpredictable combinations (9). These unpredictable products raise huge questions of safety, especially as they have never existed in the many years of evolution (9).

As we have seen so far, the 'golden rice' project has been met with bitter controversy. Many critics are convinced that the entire program has been a publicity stunt and that its intentions were politically and economically motivated. Brian Tokar, a member of Biojustice, a group opposed to genetic engineering asserts that, "the purported benefits of golden rice are completely fabricated" (2). Some have even gone so far as to say that the entire project is "absurd" and that "it was a useless application, a drain on public finance and a threat to health and biodiversity" (9). They view it as a desperate effort to "salvage a morally as well as financially bankrupt agricultural biotech industry that obstructs the essential shift to sustainable agriculture that can truly improve the health and nutrition especially of the poor in the Third World" (9). The bioengineered 'golden rice' has come under severe scrutiny because it has been seen by so many critics as the "poster child" of the food biotechnology industry's extensive public relations campaign to convince the public that, "the benefits of genetically engineered agricultural products outweigh any safety, environmental, or social risks they might pose" (8).

Amid all of the skepticism and criticism, there remain adamant supporters of the benefits anticipated from the development of products such as 'golden rice.' They have responded to accusations of politically and financially driven ulterior motives by asking what the criticisms of organizations like Greenpeace reveal about those organizations' own goals. Greenpeace has been a staunch critic of the introduction of 'golden rice' because it was developed through the organization's "bête noire," transgenic engineering, and because it helps modern agribusiness (1). But supporters of its production feel they can just as easily point fingers at the organization, asserting that the GMO opposition has waged its own war of propaganda against the work of bioengineers. They argue that "they [GMO organizations] are only pretending to work for mankind," or "are only satisfying their own egos," or "are merely working for the profits of industry" (13). Ingo Potrykus, Professor Emeritus at the Institute of Plant Sciences, Swiss Federal Institute of Technology, has responded to these challenges, firmly stating:

"By their [Greenpeace and associated GMO opponents] singular logic, the success of 'golden rice' has to be prevented under all circumstances, irrespective of the damage to those for whose interest Greenpeace pretends to act...The GMO opposition has been doing everything in their power to prevent 'golden rice' from reaching subsistence farmers. This is because the GMO opposition has a hidden, political agenda. It is not so much the concern about the environment, or the health of the consumer, or the help for the poor and disadvantaged. It is a radical fight against a technology and for political success. This could be tolerated in rich countries where people have a luxurious life even without the new technology. However, it cannot be tolerated in poor countries, where the technology can make the difference between life and death or between health and severe illness" (13).

Potrykus also clearly points out that the GMO opposition insists on demanding that scientists take full responsibility for their actions, while seemingly blind to the fact that they are not taking responsibility for their own actions by hindering the advancement of a product such as 'golden rice' that could help those that they claim to support (13).

These and similar remarks by those who support 'golden rice' technology are very logical, and I have often thought the same way about radical environmentalist and humanitarian groups that thrive by bringing other organizations down to benefit their own agendas. I find that there are possible benefits to this development in biotechnology. One is that the rice genome is the smallest of the major cereals: six times smaller than that of corn and 37 times smaller than that of wheat (1). The production of bioengineered rice could pave the way for the possible, and perhaps more beneficial, production of more complex bioengineered products containing more of the important vitamins and nutrients that the body needs. In this respect, no matter what the motives of the biotech industry might really be, there is no denying that the technology has the potential of being helpful knowledge to have obtained.

At the same time, I am still uneasy about the integration of artificially engineered products with native or local species. After all, we have spoken all semester about the importance of maintaining diversity as a fundamental element in biological systems. Further, I agree with the arguments that nutrition experts have made about the need for more than a vitamin A-enhanced rice. Many other factors contribute to malnutrition; and if the people who suffer from it cannot even absorb the vitamin A from 'golden rice,' due to a lack of fat intake or of other necessary vitamins, then I see no need to push for its immediate introduction in developing countries. There are many alternative solutions to the problem of malnutrition that would quite possibly be more beneficial, especially in the long term. A promising option is to increase awareness among at-risk populations of the locally available foods that supply the widest range of the vitamins and nutrients imperative to a balanced diet. The complexity of the factors that lead to malnutrition supports the notion that the addition of a single nutrient to a food will not play a large enough role in remedying the problem. To conclude, I would suggest that to assert, as many opponents have, that the entire 'golden rice' project has been a useless application and a waste of money is extreme. I think that the knowledge gained by mapping the rice genome will have a use in the future and may well play a role in solving hunger issues. As far as solving the present problem of vitamin A deficiency, however, I feel that there are better, more economical options that will have much longer-lasting effects for those affected by it.


References

1)"Rice Genome Brings Hope, Controversy.", The controversy over genetically engineered rice.

2) "'Golden rice' touted as one of biotech's benefits" , Various criticisms of 'golden rice' production

3)"The Basics of Vitamin A.", Everything you need to know about Vitamin A.

4)"Natural versus engineered Vitamin A.", More specifics on vitamin A and how much the body needs.

5)"The Indian Scene.", Specifics on malnutrition in India.

6)"The Doctor's Debate.", Malnutrition's effects on children.

7)"Natural Foods and Beta-Carotene.", Specifics on beta-carotene and its absorption.

8)"Unproven and unwanted science: the debate.", The use of 'golden rice' as biotech industry's "poster child."

9)"An exercise on how not to do science.", Author of this article is adamant critic of 'golden rice.'

10)"Golden Rice and Vitamin A Deficiency.", This article addresses problems with Golden Rice

11) "Golden Rice: blind ambition?" Friends of the Earth International. Link Magazine: Issue 93, April/June 2003. More arguments against Golden Rice

12)"Grains of delusion: Golden rice seen from the ground.", Assessment of pros and cons of Golden rice.

13)"Golden Rice and Beyond.", This article contains arguments supporting Golden Rice research and the importance of technology transfer.


Lung Cancer: A Good Reason to Stop Smoking
Name: Flicka Mic
Date: 2003-12-19 15:52:12
Link to this Comment: 7561


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Lung cancer accounts for 15 percent of all cancer cases, and an estimated 170,000 people in the United States get lung cancer each year. (5) About 155,000 of those people die from the cancer. Recently, the rate of women affected by lung cancer has increased, while the rate of men affected has decreased. However, lung cancer remains the leading cancer killer of both women and men. (1) So, what is lung cancer? Lung cancer is the uncontrollable growth of abnormal cells in the lung. (5) There are two main types of lung cancer: non-small cell and small-cell lung cancer. Non-small cell lung cancer (NSCLC) is more common than small-cell lung cancer (SCLC) and occurs in about 80% of all lung cancer cases. (3)

Early lung cancer does not cause symptoms, so when the symptoms finally do show and the cancer is detected, it is already at an advanced stage. (1) Smoking has been shown to be the primary cause of lung cancer. About 87% of all cases occur in people who smoke. (1) However, not everyone who smokes gets lung cancer and not everyone who gets lung cancer smokes. One of the main symptoms of lung cancer is a chronic cough that lasts for more than two weeks. Another is constant chest pain. Other symptoms include wheezing, shortness of breath, coughing blood, hoarseness, and repeated pneumonia or bronchitis. (1) There are also less noticeable signs such as unexplained fever, weight loss, or appetite loss. (2)

As mentioned earlier, there are two types of lung cancer: NSCLC and SCLC. NSCLC can be divided into three subtypes. First, there is epidermoid carcinoma, which usually starts in the large breathing tubes and grows slowly. (5) There is also adenocarcinoma, which is found in the mucus glands and can vary in size and rate of growth. Finally, there is large cell carcinoma, which begins at the surface of the lung and grows very rapidly. (5) SCLC, also known as oat cell carcinoma, has no subdivisions. It starts in the large breathing tubes and grows fairly rapidly. This type of lung cancer is usually very large by the time it is diagnosed. (5)

There are also different stages of lung cancer. For NSCLC, there are four main stages. In Stage I A/B, the tumor is confined to the lung and has not reached the lymph nodes. (2) In Stage II A/B, the tumor has spread to the lymph nodes associated with the lung, or to the chest wall and the diaphragm. In Stage III A/B, the tumor has spread to the lymph nodes in the tracheal area or, in more advanced cases, to the opposite lung or the neck. Finally, Stage IV, the most severe stage, is when the tumor has spread beyond the chest. (2) SCLC, on the other hand, has only two stages of cancer growth. Limited is when the tumor is found only in one lung and the nearby lymph nodes. Extensive is when the tumor has spread beyond the lung into other organs.

The treatments for lung cancer depend on the type, location, and size of the cancer. (3) One option is to remove the tumor through surgery; however, it depends on the location of the tumor. A surgeon can perform a resection, which is a removal of part of the lung. The removal of the entire lobe of the lung is called a lobectomy. (3) In addition, there is a surgery called a pneumonectomy, which removes the entire lung. Each kind of surgery is performed according to how advanced the cancer is and how much of the lung has been affected by the cancer. (3)

Another option for treatment is chemotherapy. Chemotherapy is the use of anticancer drugs to kill cancer cells or to control the growth of cancer cells in the body. It can also be used after surgery if any cancer cells remain. Usually, chemotherapy is administered by injection through an IV or through a catheter; however, some anticancer drugs are administered as pills as well. (3) Chemotherapy can be used in advanced stages of cancer to relieve symptoms, and in all stages of SCLC. (1)

An alternative to chemotherapy is radiation. Radiation is the use of high-energy rays to kill cancer cells. However, the rays can only be directed at a specific area and therefore affect only the cancer cells in that area. (3) Radiation treatments can be used before surgery to shrink the tumor or afterwards to destroy any remaining cancer cells. There are two types of radiation. External radiation comes from a machine. (3) Internal radiation is when an implant of radioactive material is placed into or near the tumor. (3) Studies have shown that a combination of therapies can be more effective than any one treatment by itself.

Any person 60 years or older who has a history of smoking or is currently smoking should get tested for lung cancer. Screening tests are used to detect lung cancer early so that doctors may treat it before it spreads. One test is called the low-radiation-dose spiral computed tomography scan, or low-dose CT scan. (2) Recent trials done in the U.S. and Japan have shown that this type of screening can lead to early detection of lung cancer in people at high risk. They also found that this type of test worked significantly better than chest x-rays at detecting lung cancer. In a U.S. study, 1,000 people at high risk for lung cancer received the low-dose CT scan. It exposed malignant tumors four times as often as chest x-rays and detected Stage I tumors six times as often. (2) Therefore, this type of screening is becoming more popular for patients at high risk for lung cancer.

New research studies are being done to test new drugs or combinations of drugs for lung cancer. Other clinical trials test the combination of chemotherapy and radiation treatments. (2) The newest breakthroughs in lung cancer treatment are usually discovered through clinical trials. Before the Food and Drug Administration can approve a new drug for treatment, it must go through a series of phases. Phase I trials test how a drug should be administered to the patient, how often it should be administered, and in what dose. (4) These beginning trials use only a small group of patients. Phase II trials gather initial information about the effectiveness of the drug, which can then be tested against a specific type of lung cancer. Phase III trials compare the effectiveness of current drugs with the drug being tested. (4) These trials use a large number of patients from around the country. Each patient is randomly assigned a drug, either the new one or the standard one, and the effectiveness of each drug is then measured against the other. The last stage in clinical trials is Phase IV. These trials continue testing the drug after it has been approved by the FDA and is already available on the market. (4)

Recently, the International Adjuvant Lung Cancer Trial (IALT) did a study comparing the effect of surgery alone with surgery combined with chemotherapy in treating NSCLC. The study included 1,867 patients from 33 different countries. (4) Thirty-six percent of the patients had Stage I lung cancer, twenty-five percent had Stage II, and thirty-nine percent had Stage III. The researchers found that after two years, 61% of the people who had the combined treatment of surgery and chemotherapy were free of cancer, compared to 55% of the people who had had surgery alone. (4) After five years, 39% of the people who had had adjuvant chemotherapy were cancer free, compared to 34% of the people who had had only surgery. (4) The difference is small, but it is enough to help scientists evaluate which method of treatment will help patients the most.
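The comparison above rests on simple percentage arithmetic; as a minimal illustration (using only the figures quoted in this essay, not the original trial report), the absolute benefit at each follow-up point can be computed as:

```python
# Survival figures as quoted in this essay (percent cancer-free);
# illustrative arithmetic only, not data from the trial report itself.
two_year = {"surgery_plus_chemo": 61, "surgery_only": 55}
five_year = {"surgery_plus_chemo": 39, "surgery_only": 34}

# Absolute benefit = difference in percentage points between the two arms.
gap_2yr = two_year["surgery_plus_chemo"] - two_year["surgery_only"]
gap_5yr = five_year["surgery_plus_chemo"] - five_year["surgery_only"]

print(f"Absolute benefit at 2 years: {gap_2yr} percentage points")  # 6
print(f"Absolute benefit at 5 years: {gap_5yr} percentage points")  # 5
```

A gap of 5-6 percentage points is modest for an individual patient, but across the thousands of people diagnosed each year it represents a meaningful number of additional cancer-free survivors, which is why such small differences still guide treatment choices.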

In conclusion, new studies are being done every day to improve the treatment of lung cancer. However, it is still a fatal illness that kills most people if it is not detected early. The main way to avoid lung cancer is to stop smoking! Cigarettes and tobacco contain about 4,000 chemicals, many of which can lead directly to cancer. (1) The more a person smokes, the greater the possibility of lung cancer. However, if one stops smoking, the risk of getting lung cancer decreases each year. After ten years, the probability drops to about one-half to one-third the risk of people who continue to smoke. (1) In addition, secondhand smoke causes about 3,000 deaths from lung cancer each year. (1) Therefore, the best way to prevent yourself or those around you from getting lung cancer is to stop smoking!


References

1)Facts About Lung Cancer

2) Lung Cancer.org

3) Treatment for Lung Cancer

4)ALCASE Education

5) What Is Lung Cancer?


Big Questions: Conversations inside the Third Cult
Name: Su-Lyn Poo
Date: 2003-12-19 17:11:29
Link to this Comment: 7563

<mytitle>

Biology 103
2003 Second Paper
On Serendip

      In 1959, C. P. Snow introduced the idea of the "two cultures", the scientists and the literati, divided by a lack of communication that had been crystallized through academic specialization (1). Thirty years later, John Brockman unveiled the Third Culture as the new face of intellectual life, consisting of scientific thinkers who had ousted the traditional literary scholars in "rendering visible the deeper meanings of our lives, redefining who and what we are" (2). He has been criticized for his fragmented vision of intellectual culture, which affords no place to non-scientists in spite of the apparent inability of science to provide answers to the "big questions" that we ask (3). But are we defining these particular questions in a way that excludes science? If these are issues of truly universal significance, then no single discipline can claim monopoly over their interpretation: answers must draw from broader horizons.
     
      The scientific optimism of which Brockman boasts has been approached with much cynicism by humanist scholars. Much discomfort arises not from scientists' claims to general truths about the world, but from the assertion of many scientists that their work stops at the process of discovery: science has nothing to do with how politicians choose to apply their ideas (4). Humphrey (5) points out that it is a great cause of anxiety when those who generate knowledge disclaim all responsibility for how that knowledge is put to use, whether in the form of eugenics in the past, weapons of mass destruction in the present, or even possibly thought control in the future (5).
      Appleyard recognizes that science aspires to be a value-free pursuit of knowledge, but also that such pursuits are inevitably conducted in a value-laden world (4). If scientists refuse any role in shaping these values, then it is for the humanities and social sciences to help us understand the significance of scientific progress (6), whether it is through the way in which we define life, when confronted by abortion and cloning, or how increasingly closely-integrated communication networks have transformed human relations across the expanse of space and time. In this respect, Brockman's scientifically imperialistic conception of intellectual culture lacks the "questions of subjective, of spiritual and of social values" (3) that must lie at its heart.
      Furthermore, the criteria by which the sciences have established themselves and the standards they proudly uphold have been responsible for impressive advances, but as Taylor points out, "there are places where experiment and verification cannot go" (5). Given Brockman's emphasis on empirical facts, his understanding of "what it means to be human" (5) seems lacking. There is little room for the consideration of the moral dimension of the human experience. His words echo with what Leavis criticizes Snow for: an inadequate sense of human nature and human need (7). After all, according to Lanier, there are questions that must be addressed by any thinking person that lie outside of the established methods of science (5).
     
      If so, is it reasonable to expect science to provide a full worldview that can provide answers on the moral and spiritual levels? Hut's answer is hesitant: "Science just isn't far enough along to address that quest" (5). Instead, the scientific method should be used to sort through our inherited wisdoms, to separate that which is still useful from the dogmatic trappings of the past. Taylor maintains, therefore, that the promise of science lies not in "sweeping away other aspects of existence ... (but) respectfully deepening understanding of what it is to live and die as a human being and observing the universe from that perspective" (5).
      How is it, then, that Brockman can distinguish the achievements of the Third Culture, which he rightly predicts will affect everyone, from those of the humanities, described as the irrelevant and "marginal exploits of a quarrelsome mandarin class" (2)? Horgan reminds us that humanist scholars like Judith Butler, often derided for her work in deconstructing sexual identity, are "far more engaged with reality – our human reality – than are string theorists or inflationary cosmologists" (5). Ludwig Wittgenstein went so far as to argue that even when all the questions posed by science have been answered, the problems of human life will remain untouched (4). But is this true, or even a fair assessment of the purpose of science?
      What Brockman failed to make clear was that the sciences and the humanities have different goals, that the "deeper meanings" that the humanists seek are not necessarily the same that the scientists seek. Comprehension may arise from explanation, as offered by Pinker in his biological account of human nature (5). But we learn too from empathy (some may say more powerfully), as from Shakespeare's portrayal of Lear as the figure of fallen pride, repentant too late. The different approaches of sciences and the humanities obscure the relationships that do exist between what are often seen as separate cultures. Levine maintains that a kinship binds the two, that "science and literature reflect each other because they draw mutually on one culture, from the same sources", and yet they remain different enterprises, "(working) out in different languages the same project" (8).
      It is thus erroneous of Brockman to expect the goals of the humanities to align with those of the sciences, or to expect them to become empirically-based (Hauser, 5). By this same token, however, it is also unreasonable to expect the sciences to answer our big questions in the manner that we have posed them. One may argue that the point of science, given its specific methodological commitments, is to be held to a different set of questions, a different manner of asking and seeking answers. It is in bridging this gap that we see the power behind the enterprise of the Third Culture, which, contrary to Brockman's puzzling portrayal of it as predominantly scientific, draws a large number of its members from the humanities and social sciences. Smolin describes this community as characterized by a new epistemology, rooted in a pluralistic, relational approach to knowledge that gains from the rich exchanges between its diverse participants (5). Nagel's "view from nowhere" has proven unfruitful, and is here usurped by the view from everywhere.
     
      Perhaps nowhere is there a clearer record of the extent of this bridge-building than the World Question Center (9), an archive of questions and responses from the Third Culture. I provide a short list of some of the entries from 2002 as an indication of the breadth and depth of conversation. The contributors of these ten questions include professors of astrophysics, biology, classical studies, mathematics, physiology and social psychology, as well as science writers and laboratory directors.
* Can democracy survive complexity?
* Is it conceivable that the standard curriculum in science and math, crafted in 1893, will still be maintained in the 26,000 high schools of this great nation?
* Can there be a science of human potential and the good life?
* Why do we fear the wrong things?
* Will non-sustainable developments (i.e., atmospheric change, deforestation, fresh water use, etc.) become halted in pleasant ways of our choice, or in unpleasant ways not of our choice?
* Are space and time fundamental concepts or are they approximations to other, more subtle, ideas that still await our discovery?
* Could our lack of theoretical insight in some of the most basic questions in biology in general, and consciousness in particular, be related to us having missed a third aspect of reality, which upon discovery will be seen to always have been there, equally ordinary as space and time, but so far somehow overlooked in scientific descriptions?
* Do the benefits accruing to humankind (leaving aside questions of afterlife) from the belief and practice of organized religions outweigh the costs?
* Why bother? Or: Why do we go further and explore new stuff?
* Do 'folk concepts' of the mind have anything to do with what really happens in the brain?

      These pages stand as testament to the fact that science, in spite of its inability to answer some of the big questions that we have, is still willing and able to pose some of its own. It also presents with determined clarity the fact that the sciences and humanities (and everything in between) cannot and should not ask their big questions in isolation from or ignorance of each other. This is the hope of the world that the Third Culture bears: that we will begin to see holistic, encompassing answers rather than specialist treatments of a narrowly-defined topic.


References

(1) Snow, C. P. (1959). The two cultures and the scientific revolution. New York: Cambridge University Press.
(2) The Third Culture - An essay introducing the Third Culture.
(3) Durant, J. (1995, June 1). Off line: the third culture. The Guardian (London): pg 9. Retrieved December 19, 2003, from Lexis-Nexis database.
(4) Appleyard, B. (2003, November 30). Mugged by the science mafia. Sunday Times (London): Features; News Review 9. Retrieved December 19, 2003, from Lexis-Nexis database.
(5) The New Humanists: Science at the edge - Introductory essay by John Brockman and responses to it from members of the Third Culture.
(6) More on the Science Wars (2003, February 23). From the weblog of Steven Shaviro, Professor of English at University of Washington. Retrieved December 19, 2003.
(7) Leavis, F. R., and M. Yudkin. (1962). Two cultures? The significance of C. P. Snow. London: Chatto & Windus.
(8) Levine, G., and A. Rauch (Eds.). (1987). One culture : essays in science and literature. Madison, Wis.: University of Wisconsin Press.
(9) Edge: The World Question Center - An archive of responses from scientists, thinkers, etc. to the questions that John Brockman has posed.


The Needle Treatment
Name: Laura Wolf
Date: 2003-12-19 17:38:58
Link to this Comment: 7564

<mytitle>

Biology 103
2003 Second Paper
On Serendip

Acupuncture is an ancient Chinese method of "encouraging the body to promote natural healing and improve bodily function" (1) that dates back as far as 4,700 years. For the past 25 years it has been a popular form of alternative medicine in the U.S., and it is "a licensed and regulated HealthCare profession in about half the states in the U.S." (3). It is most often called upon for problems such as lower back pain, migraines, arthritis, and other non-fatal aches and pains. Some people say it works; others are still skeptical. Since this method does not seem to be based on "actual science," is it merely a placebo effect? Can a medical practice theorized nearly five millennia ago still prove to be valid?

When acupuncture was created, some of the medical concepts it employed were relatively new; there were not many falsified stories for it to build from. In fact, "acupuncture is said to have been theorized... by Shen Nung, the father of Chinese medicine, who also documented his theories on the heart, circulation, and pulse over 400 years before Europeans had any concept about them" (1). Since then, Europeans and Asians alike have encountered centuries of medical dilemmas and successes. Over time, hypotheses emerge and are either disproved or continue to live on as part of scientific discourse and medical practice. For this reason, most old-fashioned treatments no longer hold true when compared to methods cultivated within the great wealth of knowledge attributed to medicine today – not because we are smarter or more civilized now, but because the field of medicine has accumulated so much more experience and has improved its methods to be "less wrong" countless times. So, why has acupuncture not been bettered or disproved after all this time? Is it perhaps a perfect form of treatment? Probably not. But let us look more closely at the acupuncture treatment to understand its unlikely longevity in the medical world.

First, the patient must relax in order to prevent fainting or nausea, the most common side effects of acupuncture, which are often due to nervousness. The practitioner will insert fine, hair-like needles into the body "at specific points shown as effective in the treatment of specific health problems. These points have been mapped by the Chinese over a period of two thousand years" (2). The needles go about ¼ to one inch deep, depending on the age and size of the patient and the location of pain on the body (2). The insertion of needles is sometimes accompanied by "heat or electrical stimulation at these specific [acupuncture] points" (1).

The process is relatively painless when done correctly. The patient "should feel some cramping, heaviness, distention, tingling, or electric sensation... In Chinese, acupuncture is bu tong, painless. Some Western cultures may categorize these sensations as types of pain. In any case, if you experience any discomfort, it is usually mild" (2). However, if the patient experiences a burning sensation or sharp pain, or is too nervous and uncomfortable with the procedure, it is suggested that s/he let the physician know immediately. After researching this procedure and hearing over and over that it should be painless, the image of patients being used as human pin-cushions does not seem quite so frightening...

But why stick needles into someone? What is the point (no pun intended)? "The popular classical Chinese explanation on how acupuncture works states that channels of energy run in regular patterns through our body and over its surface. This energy force is known as Qi (pronounced Chee)" (1). The channels of Qi, as well as Xue (meaning blood) are said to "cover the body somewhat like the nerves and blood vessels do" (2). "The channels... are called Meridians, which are compared to rivers flowing to the body to irrigate and nourish the tissues" (1). Pain is created when these Meridians are blocked – like a dam in a river – causing backup and disharmony. The theory is that by stimulating certain points on the body, a natural flow of Qi and Xue will return and the pain will have been treated. "In this way, acupuncture regulates and restores the harmonious energetic balance of the body. In Chinese there is a saying, 'There is no pain if there is free flow; if there is pain, there is no free flow'" (2).

At first glance, for a person of the Western world, this explanation may sound more like fantasy than medical fact. So, what is another, perhaps more "scientific" explanation?
"Western medicine's view is that the placement of acupuncture needles at specific pain points releases endorphins and opioids, the body's natural painkillers, and perhaps immune system cells as well as neurotransmitters and neurohormones in the brain. Research has shown that glucose and other bloodstream chemicals become elevated after acupuncture" (3).
This seems to support the Chinese flow theory of Qi and Xue, or at least show that perhaps acupuncture is not simply a placebo. Something does happen to the body during this procedure, which is remarkable considering it was theorized 4,700 years ago.

Patients come out of the treatment with mixed results, however. It depends on the skill and training of the practitioner, and it also depends on the patient. Often two people will have the same injury and receive treatment on the same day at the same clinic, but one will come out feeling better while the other has experienced no change. Obviously this is not a perfect treatment, but when it accomplishes something it seems to only help, never hurt.

However, there are some dangers to acupuncture when one uses it as a replacement for new medical treatments without understanding the true purpose of acupuncture – it alleviates non-fatal bodily pains, not cancer or any organic diseases. As one scientist warns:
"When sick people become irrational and fanatical about acupuncture, or about any other non-conventional form of medical treatment, resulting in the delay in diagnosis and treatment of their illness by a physician, this could lead to unnecessary increase in the risk, complications and mortality. Some preventable deaths due to delayed diagnosis and treatment have been reported among patients who insisted on nothing else but on acupuncture or on alternative medicine in the treatment of their organic illnesses" (1).
In other words, acupuncture should not be relied upon as some sort of medical miracle. It is an alternative treatment that is sometimes prescribed for the right reasons.

A lot of mystery still surrounds acupuncture, especially from a Western point of view. It has been used for some seemingly unlikely purposes: to curb smoking or alcoholism, to aid with psychological problems, to treat kidney failure or asthma. In China, it has even been used in animals "to turn the fetus into a normal position in the womb" (4). Altogether, acupuncture has withstood the test of time, though not necessarily the test of criticism. The procedure seems to work sometimes, and so it has not been falsified. It seems beautiful, ancient, and creative, and so it is left alone as a non-conventional medical alternative. But more studies should be done on the subject, especially to prevent people from depending on it solely out of a desire to be all-natural. That is not reason enough to abandon centuries of experience and of scientists getting it "less wrong". Acupuncture is a possible treatment – but not a perfect one.

References

1) Heart to Heart with Philip S. Chua, M.D. an overview of acupuncture, its origins, and its current uses and misuses.

2)The layman's guide to acupuncture. A wonderfully descriptive site, but biased in favor of acupuncture. Does not include downsides.

3) Acupuncture for children includes some general information about acupuncture.

4)Animal acupuncture highlights a few uncommon uses for acupuncture.


Major Depression Across Atlantic: Diagnosis and T
Name: Mariya Sim
Date: 2003-12-20 01:04:32
Link to this Comment: 7572


<mytitle>

Biology 103
2003 Second Paper
On Serendip

In this day and age, depression is a catchword. It is applied to all imaginable situations, from grieving after the loss of a loved one to simple foul moods. Although such loose usage of the word is hardly warranted, the statistics of the World Health Organization suggest that there is some real basis behind it: about 4-5% of the world's population suffer from depression, and it is the reason behind about 60% of all suicides (1). The United States is ahead of the world average in this sad race: according to the National Institute of Mental Health, about 9.5% of the population (or about 18.8 million adults) experience a depressive disorder in any given year (2). With such prevalence, one can hardly wonder why the issues of diagnosing and treating depression are so urgent and controversial both in the United States and around the globe.

While the growing awareness of the reality of depression as an illness gradually removes the stigma from those suffering from it, the public's attitude, at least in America, seems to be shifting to the other extreme. Almost any sadness, change of mood, or less-than-happy-go-lucky behavior is suspected of being a herald of depression, regardless of the possible causes. On the one hand, people began to be less reserved about seeking psychological help (which is a good thing). On the other hand, both psychiatrists and primary care providers are sometimes too quick to give a verdict "depression" and even more prone to use antidepressants as a cure-all. Although the causes of depression are by no means well-defined (3), drugs that supposedly adequately treat it are publicized without reservation and often regarded as the only possible solution to the problem. This is especially true in America, partially because most new anti-depressants are developed, produced, and hence marketed here, partially because of the methods of diagnosis commonly used, and partially due to the time restrictions placed by the health insurance companies on health professionals.

Since this situation is beginning to raise concerns among both the general populace and physicians, it would be beneficial to compare and contrast the diagnosis and treatment of depression in America and in a country whose historical conditions shaped a different attitude towards this illness and towards the methods of dealing with it. Russia, akin to the U.S. in both size and population but sharply different in the organization of its medical system, is a suitable subject for such a study. Without focusing on minute details, one can undertake a brief overview of the situation in both the United States and Russia and develop a general, but by no means definitive, list of pros and cons for both systems. Although multiple variations of depression could be considered, this study will focus on major depression in order to keep the argument succinct.

Major depression, a combination of symptoms such as sadness, hopelessness, feelings of guilt, loss of interest in life, decreased energy, sleep and appetite disruptions, and difficulty concentrating that interfere with one's daily life (2), has a long history of being recognized as an illness in America. Before the 1960s, the methods of its diagnosis and treatment were largely left to the discretion of one's psychoanalyst. When the first tricyclic anti-depressants were developed in the 1960s (3), the psychoanalytic community was more than suspicious about their efficacy, often to the point of outright rejection (4). Medications were thought to provide temporary relief of the symptoms of depression, but the etiology of the illness was traced to deep psychic conflicts that could only be approached with the help of a psychoanalyst (4).

As research went on, various studies suggested that the reason for major depression lay in the disrupted functioning of certain neurotransmitters, which in turn disturbs communication processes in the brain (5). Tricyclic anti-depressants and monoamine oxidase inhibitors stabilize these processes, although the mechanisms of this stabilization are multiple and in many cases contradictory (3). These medications were still not widely used due to rather serious side effects, but with the arrival of the selective serotonin reuptake inhibitors (SSRIs) in the 1980s this ceased to be a major problem. The new medications, while providing the same relief, were safer and produced fewer side effects for most patients (6).

From the 1980s on, the use of anti-depressants to treat major depression has been widely accepted and encouraged. Primary care physicians, psychiatrists, and psychoanalysts prescribe anti-depressants in the majority of cases, regardless of whether counseling or psychoanalytic services are provided (4). The readily discernible reason for promoting anti-depressants is their apparent (and relatively fast) effectiveness; however, other factors may contribute to their often being the treatment of choice in the United States.

One such factor is the official system used for diagnosing major depression in the United States: the DSM-IV, The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition. It provides psychiatrists with a list of particular episodes and then outlines various mood disorders based upon this list. It also specifies the details of the behavior and states of patients with different disorders and provides information on the cycles associated with each (7). The DSM-IV is often praised for not attempting to define the causes of depression but rather striving to delineate useful categories based on the symptoms of the illness, thus providing a framework of terms and possible treatments while not assigning patients to any rigid categories (3). However, there is another reason for its popularity. Since an examination performed using this system results in very orderly forms with an explicit delineation of the possible diagnosis, it is a godsend to insurance companies and to researchers looking for consistency in their results (7). It can also provide an established and safe backdrop for psychiatrists not wanting to deal with the nuances of each individual case, much as the widespread acceptance of anti-depressants labels them a safe choice despite the still-unknown mechanisms by which they function.

Many psychiatrists find fault with both the DSM-IV diagnostic system and the unlimited, often indiscriminate use of anti-depressants for essentially the same reason: the lack of attention to the nuances of the individual case. Thus, Simon Sobo, the Chief of Psychiatry at New Milford Hospital, CT, notes that anti-depressants are often used simply to alleviate the symptoms of the illness, with "treatment" stopping short just as the desired effect is reached. Even if one puts aside the uncertain nature of the usefulness of anti-depressants and accepts them as a helpful tool, there remains the problem of the physician relying too much on the "fixing" power of the drug. Counseling, which should be used to acquaint the patient with methods for dealing with stressful situations in the future, to teach him useful modes of thinking, and to elucidate certain behavioral patterns of the brain, is overlooked. The patient, in effect, remains dependent upon the drug, which, given the fact that major depression usually recurs several times throughout a person's life, does not constitute an optimal treatment. Sobo also notes that using anti-depressants often rationalizes the illness in terms of purely physiological disorders and discourages the patient from facing personality or attitude issues, which play a crucial role in the process of recovery (8).

Similar arguments could be levied against the DSM-IV diagnostic method. However, the issue is more complicated here, since money comes to the forefront. U.S. medical insurance companies often allow as little as 15 minutes per physician visit and 30 minutes per psychiatrist visit. The lack of time necessarily results in doctors opting for faster, but not necessarily thorough, methods of diagnosis and treatment. Filling out questionnaires based on the DSM-IV and prescribing anti-depressants may well become their choices, because of their apparent credibility and convenience. The doctor only has to evaluate a patient, prescribe a drug, and then continue to administer routine 15-minute check-ups to ensure the effectiveness of the anti-depressant (8), (9).

Insurance companies also play an unsavory role in the misdiagnosis of depression, since many primary care physicians simply do not have the time (in the 15 minutes allowed) to do a thorough examination and detect the possibility of major depression. Given that major depression is often masked by somatic symptoms, many depressed patients never receive proper treatment for their actual illness (9). The same time-and-money problem often forces patients and physicians to discontinue treatment (both pharmacological and psychiatric) once the patient first responds to it but before a full remission is achieved. Such cessation results in recurrent episodes of depression, which could have been avoided had the patient had enough financial resources to continue therapy (6).

Despite all of the above problems, important steps are being taken by the U.S. medical and insurance systems to ensure that major depression is recognized and treated as an illness. There are resources for patients who lack money or insurance coverage for proper therapy; a thorough publicity campaign is being waged to acquaint people with this "disease of the century"; governmental and private funds are allocated each year to the continuing study of the causes and processes behind this illness. New drugs are developed and tested, and new networks of support are built. (3)

This is not the case in Russia, however. The concept of depression as an illness is relatively new there and has not gained widespread recognition, despite health professionals' attempts at popularizing it. This is understandable historically, since psychiatry in the Soviet Union dealt almost exclusively with cases that threatened the social structure, in other words, with patients who were potentially harmful to others. The other "branch" of psychiatry under the Communist regime was wholly dedicated to keeping dissident movements in check. The psychiatric wards were filled with perfectly healthy Soviet citizens who for one reason or another did not please the authorities; they were kept placated by means of heavy psychotropic drugs and electric shock. Needless to say, the word "psychiatrist" is still associated by some people with these "doctors," while others think that psychiatrists exist only for the clinically insane, who are a threat to society (10).

There are also cultural reasons for not recognizing major depression as a legitimate illness. Russians are a rather melancholic and pessimistic nation: for instance, it is in bad taste to answer "Great!" to the question "How are you?", and a person happily smiling without any special reason is considered a fool. This mood of universal sadness is darkened even more by the current economic situation, so much so that even severely depressed people often rationalize their condition and feelings by saying "And who is happy nowadays?" On the other hand, a person who openly admits to having depression or, worse, to going to a psychiatrist is considered at best a weak and lazy character and at worst a hypochondriac, a maniac, or both. In either case this person ceases to be socially accepted and is often blamed for his "imagined" sickness. (11)

This attitude towards depression is so widespread that sometimes even physicians are unaware of or skeptical about its being an illness. This leads to frequent misdiagnosis: about 80% of patients suffering from major depression are thought to be currently treated by general practitioners, while every fifth patient seeking treatment from a physician actually needs the help of a psychiatrist (1).

Of course, this also means that current knowledge about depression is largely borrowed from Western studies and, in particular, from American experience. Among Russian psychiatrists, the United States, with its social and medical guarantees for depressed individuals, its research programs, and especially its adequate attitude towards depressive disorders (at least by Russian standards), is highly regarded (14). Nevertheless, despite Russian psychiatrists' attempts to emulate American approaches to the diagnosis and treatment of major depression, certain particularities of the medical care and social security systems, as well as cultural differences, ensure considerable variance between prevalent Russian and prevalent American methods.

Thus, although all Russian psychiatric reviews stress the utmost importance and effectiveness of anti-depressants, economic conditions are such that their cost puts them simply out of reach for most patients. Even though most anti-depressants are currently on the Federal List of Essential Drugs and therefore should not only be available in all drug stores and hospitals but also be sold at discounted prices to qualified individuals, this regulation is rarely followed due to the lack of money in federal and state budgets (13). Psychiatrists are therefore often forced to treat their patients without anti-depressants or with the help of older drugs, which shifts the emphasis from pharmacotherapy to various counseling methods.

These methods are very similar to U.S. counseling methods. Psychodynamic therapy focuses on inner psychological conflicts, trying to help the patient recognize and identify the conflict and learn to resolve it constructively. Behavioral psychotherapy attempts to solve the most bothersome current problems of the patient and to alleviate certain negative behavioral symptoms: passivity, a monotonous life style, isolation from loved ones, inability to plan and make decisions, etc. Cognitive psychotherapy in Russian practice is a synthesis of the above two branches, combining work on the concrete difficulties in the patient's life with work on his or her behavioral patterns and inner conflicts. Cognitive psychotherapy also aims at breaking the patient's habit of negative, pessimistic thinking and at helping him or her establish a new way of looking at the self and the world. (12)

Unlike their Western colleagues, however, Russian psychotherapists have no time limits set on the course of therapy allowed to a patient. Medical care is still mostly state-owned, so each citizen is entitled to free medical help, including free psychiatric visits and counseling. This allows Russian psychologists to lead their patients on a gradual path to recovery, making sure that they reach complete remission before treatment ceases. Since the majority of their patients cannot afford anti-depressants, the doctors focus more on teaching the patient self-regulation, thus providing him or her with a framework applicable to all stressful, but not necessarily depression-inducing, situations (12).

The lack of pressure from insurance companies also allows both general physicians and psychotherapists to ensure that each patient receives an adequate examination, so that most possibilities of somatic disorders are ruled out before the patient is directed to a psychiatrist (1). This comparative freedom, as well as the specifically Russian situation of misconceptions and misinformation about depression, shapes the research interests of Russian psychiatrists, which often include masked depressions and borderline depressive disorders (14).

This interest in diagnostics also manifests itself in the Russian method of diagnosing major depression. While the official system for identifying and classifying depression is the World Health Organization's ICD-10, which is rather like the American DSM-IV, it is often, if not always, supplemented by the more individualized and carefully worded methods of such prominent Russian psychiatrists as Behterev, Topolyansky, and Strukova (14). These methods focus more closely on the patient's individual symptoms, on his or her attitude to life, and on his or her expectations (or the lack thereof) of the psychiatrist, of himself or herself, and of life. They also presume that the patient's family will be as involved and supportive as possible, since Russian culture assigns great significance to the social aspects and causes of depression (13).

Thus, the major strengths of the Russian system for diagnosing and treating depression are particular attention to detail, a focus on psychotherapy rather than pharmacotherapy, the allotment of enough time for thorough evaluation and treatment of the patient, and the involvement of the family in the recovery process. These services, however, are only available to patients already diagnosed, a relatively small number given the general unawareness of and even disdain towards major depression. Russian psychiatrists would benefit from emulating their American colleagues in their efforts to publicize the condition. They would also benefit from anti-depressants being more generally available; however, that is beyond their control.

American methods for treating depression could also be improved by learning from the Russians. Methods of diagnosis, particular attention to counseling, and, perhaps, an attempt to re-evaluate the universal effectiveness of anti-depressants could all safely be borrowed from across the Atlantic. Physicians and psychiatrists should lobby to increase the time allocated by insurance companies per doctor's visit, while continuing the good work of making the general population aware of the concerns and dangers of depression. Overall, the American system needs to be less money-and-time oriented and more individual-and-quality oriented, keeping in mind that giving drugs to a patient does not remedy the problem, but teaching him or her to cope with stress potentially will. However, one should not underestimate the gains of American doctors in the fields of awareness and social support for people with depression, which are unquestionably more advanced than their Russian counterparts. Both countries would benefit immensely from a productive dialogue and exchange of experience. Hopefully, such a dialogue will be started in the foreseeable future.


References

1)"Depression in Medical Practice." The Research Center for Mental Health of the Russian Academy of Medicine.

2)National Institute of Mental Health Home Page.

3)All About Depression, A Comprehensive Overview of Depression.

4)"Psychoanalysis and Pharmacotherapy - Incompatible or Synergistic?" By Leslie Knowlton. Psychiatric Times online.

5)An article on depression on the Continuing Medical Education, Inc. website.

6)"Applying Innovative Approaches for the Treatment of Depression." on Critical Breakthroughs, an online compilation of recent publications for practicing physicians and psychiatrists.

7) A comprehensive review of DSM-IV.

8)"A Reevaluation of the Relationship between Psychiatric Diagnosis and Chemical Imbalances." By Simon Sobo, M.D.

9)"Discovering Depression in Medical Patients." By Kurt Kroenke, M.D. In Annals of Internal Medicine online.

10)"I Have Depression." By Frumkina, R.

11)"What Is Depression?" By Egorova, Elena, M.D.

12)"What Do You Need To Know About Depression?" By Holmogorova et al. Russian Psychiatric Research Center.

13)"Masked Depressions." By Dmitriyeva, Tatyana. The Russian Academy of Medicine.

14)"Depression." By Petrov, M.D.


Trepanation, Spirituality and Loneliness
Name: Lara Kalli
Date: 2003-12-20 04:16:49
Link to this Comment: 7575


<mytitle>

Biology 103
2003 Second Paper
On Serendip

The search for a "higher level of consciousness" is one that seems to be as old as consciousness itself. Practices such as the ritualistic or religious consumption of peyote, ayahuasca, psilocybe mushrooms, or other naturally occurring hallucinogenic drugs, self-deprivation, and transcendental meditation are just a few of the countless ways in which mankind has sought to expand the limits of human experience; these practices are still a mainstay in many modern countercultures. They are also very well-known and well-documented. There exists, however, a radical surgical procedure, as old as the aforementioned practices but far less notorious among the general public, which purports to result in the same sort of enlightenment: trepanation, also known as trephination.

(Be prepared: the primary reason that trepanation has not received much attention from popular culture is likely that it is far more extreme than the other methods mentioned above.) What is trepanation? Strictly speaking, it is the practice of drilling, scraping, or in any other way creating a small hole in the skull down to, but not through, the dura mater, the thick, tough membrane that contains the brain. Archeological evidence tells us that it was performed by ancient cultures on every continent; the oldest trepanned skulls found so far date back to approximately 3000 B.C. In almost all cases, the evidence points to the trepanation having been performed with skill and a great deal of precision - it was clearly a procedure that had ritualistic import. (1)

What purpose could this operation possibly serve? In early documented incarnations, trepanation existed as a cure for mental illness - it was believed that mental illnesses were the result of demons living within the skull, and thus a hole was made in the skull through which these demons could escape. The more modern perspective on trepanation as a means of expanding consciousness began with Bart Hughes, whose text "The Mechanism of Brainbloodvolume ('BBV')" was published in 1962. (1) Hughes' theory can be summed up as follows: "...as we mature and age our skulls harden, restricting blood flow to the capillaries of the brain....children, especially babies with their "soft spot", had a clearer outlook on the world because their brains were free to receive more cerebral blood volume than...our adult brains with hermetically-sealed skulls." (1) There are many ways to increase brainbloodvolume (the self-explanatory term coined by Hughes, hereafter referred to as BBV) temporarily, such as standing on one's head, quickly moving from a hot to a cold bath, or consuming psychedelic drugs; however, according to Hughes, trepanation is the only way to increase BBV permanently. (2) The supposed result of this permanent increase in BBV is greater mental acuity and stamina, amplified sensory experience, relief from nameless anxiety - overall, a far improved sense of well-being. (3) In 1965, Hughes became the first person ever to successfully trepan himself. (4)

Since that time, the concept of trepanation as a route to spiritual awakening as championed by Hughes has been publicly lambasted by licensed medical professionals, who claim that all the positive changes noticed by those who have been trepanned can just as easily and plausibly be attributed to the placebo effect; Hughes himself has been denounced as dangerously insane, having at one point been forcibly detained in a hospital for three weeks. (1) However, he accrued a following despite the odds. Amanda Feilding and Joey Mellen are contented self-trepanned owners of an art gallery in London; they lecture all over Europe, showing the film that Mellen made of Feilding's self-trepanation, called Heartbeat in the Brain. (2) Furthermore, there exists an organization called the International Trepanation Advocacy Group, of which Bart Hughes is a member, devoted solely to promoting the benefits of the procedure. The group owns a facility in Mexico where volunteers can be trepanned.

Not all those who have been trepanned have the same glowing response to it, however. One individual (whose name was given neither by himself nor his interviewer) wrote a detailed account of his trepanation, which was performed by a close friend with several witnesses, and of his experiences during the month that immediately followed it. At first, he experienced all of the changes that the literature on the operation promised - increased alertness and sensory perception, a higher sense of well-being, and so on. However, his account ends with a rather abrupt about-face: he writes that, after some serious consideration, he realized that everything he experienced could be attributed either to the placebo effect or to the fact that he was already monitoring his feelings, thoughts, and perceptions extremely closely in order to observe the possible results of the operation. (3) He concludes with the following statement: "Trepanation has no more physiological effect than any other trauma. I believe it is possible to so thoroughly convince yourself you feel different that you will, but I don't believe there is any pronounced or otherwise verifiable physiological improvement.... I enjoyed life more afterward because of the simple fact that it was still happening and I didn't kill myself. This kind of renewed vigor could be created by any survival of a possibly near-death experience. I conclude it does not do what many hope it will." (3)

What is interesting to note about this individual's account is that, despite the negative conclusions at which he was forced to arrive, he wrote that he did not regret his trepanation. Why? Because, although it did not have the lasting physiological effects he had hoped for, he felt that it did help him to a sort of spiritual awakening, due to the fact that in the month following the procedure he was extremely focused on his thoughts and senses.

We have so far spoken a great deal about spirituality without explaining it, so at this point we should ask the question "What is spirituality?" Essentially, we can understand it as a state of mind in which one feels more connected with "soul" or some higher, deeper form of being; also it can be the practice or study which has that connectedness as its goal. Religion is an organized, formalized subset of spirituality; one can be spiritually-minded without being at all religious (one can also be religious without being spiritual, but that's a different matter). Spiritual awakening, supposedly, is the all-important "moment of realization" - though the term "moment" is used very loosely here, for many traditions hold that spiritual awakening is a life-long process. It is the point at which one finally becomes conscious of the connectedness of which we just spoke, the point at which the individual breaks through the barrier between his own bundle of thoughts, feelings and perceptions and the "higher level". "Expanded consciousness", "elevated level of being" - although true seekers of these sorts of states might argue that they are not precisely the same as spiritual awakening, for our purposes we can understand these kinds of terms as meaning something very similar.

But here we come to another question: "What is the self?" As we said above, the self is composed of a single bundle of thoughts and feelings and perceptions. However, we must mention further that that bundle comprises all that can ever be experienced and conceived of by the individual. All that we know and believe comes from that bundle, for it is all that we can access; it is un-transcend-able, in a manner of speaking. I may be able to comprehend another person's perceptions or thoughts or feelings because, when he speaks of them to me, I find that they are like my own on some level, or at least that they seem to be the results of similar experiences that I have had. However, I can never have another person's perceptions; I can never think his thoughts or feel his feelings. More significantly, by the same token, we can never see things from the perspective of a higher, objective entity. In this sense, therefore, we as individuals are completely isolated from one another and from whatever higher form of being there might be.

This is a very lonely state of affairs. If this knowledge were an integral part of everyday human consciousness, most would not be able to function.

Which brings me to my point: the individual who comes to this realization must act in whatever way possible so as to lessen or eradicate the pain that accompanies it in order to continue with his life. The ways in which one can accomplish this are infinite; the individual would select a course of action that is compatible with his overall disposition. Certain people who are so inclined, therefore, will choose to pursue what we have called spiritual awakening. One way to ease the pain of being an isolated unit is to maintain the belief that one's current isolation within the self does not necessarily entail that there is nothing that exists beyond the self; if one takes action to forcibly alter the self - that is to say, to forcibly alter that which one thinks, perceives, and feels - then one may have a chance at accessing that which is beyond.

Where does trepanation fit into all of this?

Depending on the individual, the realization that we are isolated units will come with varying degrees of intensity. For some, it may only take the form of a nagging doubt in the back of the mind; for others, however, it may appear as a glaring, inescapable reality. The more intense the realization, the more radical the reaction must necessarily be. I argue that the existence of the modern practice of trepanation as a route to spiritual awakening is excellent evidence of this fact. The individuals who have undergone trepanation are - I would venture to say without exception - individuals for whom the pursuit of higher consciousness has been a life-long endeavor; having found that other routes, such as the consumption of hallucinogens, were unsatisfactory, they eventually came upon trepanation. It was the most radical action available; in other words, it was an act of desperation. Those who choose to be trepanned are those to whom the realization described above is nearly intolerable; they will take any action, even one as extreme as the act of physically drilling a hole in the skull, if it is promised to them that the result will be a more spiritually awakened life - i.e. a life in which the pain of being alone has been lessened.

References

1. The International Trepanation Advocacy Group's website; contains, or at least gives access to, pretty much everything there is to know about the practice of trepanation

2. excerpt from the book Eccentric Lives & Peculiar Notions; contains information about Amanda Feilding and Joey Mellen

3. detailed firsthand account of an individual who was trepanned

4. an interview with Bart Hughes conducted by Joey Mellen; also contains a detailed explication of Hughes' theory


Hypochondria and Prozac: a pill for all ills?
Name: Brittany P
Date: 2003-12-20 14:57:48
Link to this Comment: 7576


<mytitle>

Biology 103
2003 Second Paper
On Serendip


Hypochondria and Prozac: A pill for all ills?

Brittany Pladek

***********************

Right now, my shoulder really, really aches. It's a dull, uncentralized sort of pain, and over the past few days, it has spread to my neck and upper arms. My wrists hurt too, especially when I twist them a certain way. I'm tired all of the time, and thirsty.

I'm not worried about these symptoms. Their cause is obvious. I've spent the last four days hunched over my computer until early morning, furiously typing reports for finals week (this one included). I get an average of four to five hours of sleep a night, and the rest of the time, only a constant stream of caffeinated beverages can keep me awake. My back/shoulder/neck pain is caused by my stance at the computer; my tiredness is a result of---what else?---lack of sleep; I'm thirsty because all I'm drinking is soda.

If I were a hypochondriac, though, I'd probably think I had cancer.

Don't laugh. Hypochondria, the attribution of benign symptoms such as backache and fatigue to serious illnesses, occurs in 1% of the population and 5% of America's medical outpatients (7). These people, while usually genuinely healthy, interpret every minor pain as indicative of something serious. They travel from doctor to doctor seeking treatment; if one doctor refuses to acknowledge their illness, or gives them a clean bill of health, they simply move on (7). This process can go on for anywhere from six months to years (6). The symptoms they feel are not delusions, nor are they purposely created fakes. Pretending to experience nonexistent symptoms is a behavior associated with a different disorder, Munchausen syndrome. Hypochondriacs' pain is very real. It's just not, as its sufferers assume, a sign of some fatal illness (7).

Actually, the story isn't even that simple. There are several different levels of hypochondria. For one, there are people who take normal aches and pains and diagnose themselves with something serious. For another, there are those who obsess unduly over an illness they already have (6). Then there are people who suspect they are riddled with several different ailments (1). All of these conditions have been lumped---correctly?---under the blanket term hypochondria.

This in itself is a problem. What *is* hypochondria, exactly? Rather, why does it occur? No one seems exactly sure. 20th-century doctors dubbed it a purely psychological illness. Freud, typically, considered it a sublimation of the libido that manifested itself as pain below the ribcage (6). Other psychologists assert that traumatic experiences can trigger the disorder: the death of a loved one, for example, might heighten a person's sensitivity to disease (1). Childhood behavior patterns might also play a role: if a parent rushed his child to the doctor every time the child felt ill, that child might grow into an oversuspicious adult (1). Lifestyle, too, may influence the disorder. One Norwegian doctor attributes its growth to an increase in affluence: Vikings, he says, didn't have time to worry about a headache. Likewise, one of his patients links hypochondria with seasonal affective disorder; Norway, dark for much of the year, is apparently a depressing place to live (2). And the list goes on. Because the symptoms associated with hypochondria aren't really "symptoms" at all---rather, they are benign pains with no connection to any disease---doctors have, for a long time, dismissed hypochondria as "all in the head."

Because of the disease's supposedly "psychological basis," treatment has for years been consigned to non-drug-based methods. These methods, popularly referred to as "therapy," have a high success rate. Psychiatrist Steven Locke studied a group of 114 hypochondriacs on their quest to heal themselves through group therapy (5). A year after the sessions concluded, they had cut their doctor visits in half (saving a whopping $1,008 each in bills). Dr. Ingvard Wilhelmsen, of Norway, runs a special clinic for hypochondriacs at his home hospital in Bergen that is so popular it has its own waiting list (2).

Now, however, ideas are beginning to change. Psychiatrist Brian Fallon, an expert on the disorder, recently concluded a long experiment on the causes, effects, and possible treatments of hypochondria. His findings radically contradict all previous data on hypochondria. "I am firmly convinced that hypochondria can happen biologically, from the link that's been observed with obsessive compulsive disorder (OCD)," he says (1). He describes this link as a pattern of similar behavior. Hypochondriacs, for example, obsessively and repetitively check their condition with different doctors; they cannot stop "obsessing" over their symptoms; they cannot be convinced that the problem is in their minds. On a hunch, Fallon treated one of his hypochondriac patients with Prozac, a drug commonly prescribed for OCD. The man improved dramatically, and Fallon expanded the treatment to the other hypochondriacs under his care. It seemed to work, so in 1993 he conducted a full-blown study of 25 patients over 12 weeks of treatment. 70% of them did extremely well, Fallon notes (6). He suspects that Prozac's effectiveness is due to its ability to boost levels of serotonin in the brain, a neurotransmitter whose deficit has been linked to many psychological disorders, including OCD (6). He is enthusiastic about extending the treatment further.

It is here that my problems begin. I dislike Fallon's race to employ drugs in treating hypochondriacs for three main reasons. One is medical, one is practical, and one is moral.

I'll deal with the medical first, because the problem is one Fallon fully acknowledges---and yet seems to ignore. This is, simply put, that Prozac is nowhere near as effective as his statistics claim. Several of his "cured" cases were actually patients on placebo. Fallon himself gives a detailed example of a woman whose recovery was so complete he was sure---until the test results came in---that Prozac had worked for her (1). "She got tremendously better, and started to dress well and look extremely happy," he notes with satisfaction (1). The story gets darker, however. The woman also experienced psychological "nocebo" effects: in other words, adverse reactions to the sugar pill. These included a vow to murder someone who was blackmailing her about an affair she was having (1). Other patients have done just as well on the placebo as on the real thing, suggesting that hypochondria is better treated through psychological therapy than pill-popping.

On the other hand, is this type of experiment (i.e., one that includes placebos) even a good idea? Fallon admits that "Some hypochondriacs seem to be enormously suggestible" (1). By the nature of their illness, hypochondriacs feel pain that has no real cause. When they are placed on placebos and subsequently develop "nocebo" effects (painful side effects), they are simply repeating the pattern they followed before treatment: attributing discomfort to a condition that doesn't really exist. The "nocebo" effect, therefore, would actually serve to worsen their condition.

Another medical problem stems from the elusiveness of the disorder. Fallon himself points out that there are several different types of hypochondria, which are grouped together more by symptom than by cause. However, he attempts to blanket-treat all of these differing conditions with a single pill. About a third of his patients are barely affected by the drug. Worrisomely, this group is, as Fallon puts it, "harder to treat. They feel they have a multitude of symptoms, and they worry that they have a serious disease" (1). Excuse my pickiness, but isn't "worrying that you have a serious disease" the basic definition of hypochondria? Are his other patients doing something different? Fallon says Prozac works, but even he is not quite sure how. Could only one type of hypochondria, in fact, be related to OCD and ergo treatable through medicine? If so, what about the other types---are they related physiological disorders, purely mental conditions, or something else altogether? If they're not specifically linked to OCD, is attempting to treat them with Prozac a good idea?

My second beef with Fallon's methods is a practical one. While Prozac may dull the anxiety and depression related to hypochondria in 70% of cases (1), I want to argue that it's not an effective long-term treatment. It's not going to cure hypochondriacs, because it actually makes their illness worse. Think about it logically. Hypochondriacs, when they jump from doctor to doctor, are seeking confirmation that they actually *do* have a serious physiological illness, one that can be treated through medicine (3). When they reach Fallon's office, they're told that they *are* sick, it *is* physiological, and it *can* be treated through medicine. Basically, it's giving them exactly what they want. It lets them quote, smugly, the proverbial epitaph on the proverbial gravestone: "I *told* you I was sick." The reaction of Fallon's blackmailed patient was similar. So overjoyed was she to be diagnosed that she vehemently refused to relinquish her pills when Fallon asked for them back (1). It wasn't medical withdrawal she was feeling---she was actually on a placebo. Rather, she was so desperate to hang onto the one real diagnosis she had received (and which she made herself believe was working) that she hoarded her sugar pills like a lion.

More importantly, pills aren't an effective long-term solution because they don't deal with the problem at the heart of the disease: the patient's recognition of his condition. Dr. Wilhelmsen explains the importance of this recognition. "The patient with hypochondria can realize that he has anxiety, and not a serious physical disease, and gradually reduce his anxiety. The patient is not healed when he realizes that he has health anxiety (just as a person can still be afraid of flying even though he knows it), but it is an important first step," he writes (2). While therapy helps patients take these steps, slowly allowing them to recognize their hyper-anxiety and reduce it, Prozac removes the patient's responsibility. He doesn't have to worry about actively fighting his condition; he can simply pop a pill and wait for recovery to kick in. Therapy, on the other hand, allows the patient to seize control of the disorder. Wilhelmsen describes his hospital's therapy program: "The program includes home work assignments which might be behavioral (less checking of the body, activation etc.) or cognitive work (registration of situations, thoughts, feelings and behavior)" (2). If medication is ever used (rarely), it is always in conjunction with therapy.

Hypochondria, despite Fallon's claims of an OCD connection, remains an essentially psychological disorder. It's only appropriate, then, that it be dealt with in the manner of other psychological disorders: through therapy. This not only allows patients to take an active role in their recovery; it also counters the depression and anxiety associated with their condition. Dr. Locke's experimental therapy sessions used this technique with astonishing success:

"The two-hour structured groups taught participants how to see a link between their experiences and their physical sensations. They also got training in meditation and how to break self-defeating behavior. In the year after the six-week group, each participant's medical costs were $1,008 less than patients who hadn't been in the groups, even after deducting the expense of the therapy, Locke says. Group participants made roughly half as many doctor visits as they'd had in the year before their treatment" (5).

Results like these strongly suggest that if patients feel they are working for a goal, they'll feel better about themselves. They'll feel better able to handle their own condition. In short, they'll just feel *better.*

My final, and biggest, problem with Fallon's treatment is a purely moral argument. I think it's a legitimate one, as it has been raised in connection with other psychological conditions, most notably ADD (Attention Deficit Disorder). It's the issue of over-prescription, of readily distributing pills as a cure-all. Remember the Ritalin controversy a few years back? Proponents of the drug argued that it cut down ADD in school-age children; detractors countered that ADD was being overdiagnosed and Ritalin given to every child who couldn't sit still in class. Although some of these kids had a legitimate disorder, for many of them the behavior was not biological at all: it was, detractors argued, a consequence of lax parenting and inept teaching. The same might be said for hypochondria. As mentioned before, the disorder comes in many forms, but only a few seem to be biologically based. Others are simply social conditions. Dr. Richard Friedman gives the example of hypochondria originating in a single, isolated incident: "for example, the common experience many have after a minor accident. Suddenly, the person becomes aware of 'new' physical symptoms that are assumed to be a result of the mishap when they may have been lurking in the background all along" (7). This type of hypochondria is the consequence of a single experience, not a genuine physiological disorder. It's not biological. Can it really be treated with a tube of pills?

Wait. That's not even the issue. Even if it could be fixed via Prozac, *should* it be? If Wilhelmsen is right and hypochondria is a consequence of affluence---if the Vikings did *not* have hypochondriacs---is it fair to try to treat the condition medically? The same goes for similar disorders, such as ADD. Of course there will be those who truly suffer a "physiological" version of these illnesses, but for the majority of sufferers, problems can be overcome through dedication and the right type of therapy. Why, then, is there such emphasis on finding a medical "solution" to the problem? This may sound harsh, but isn't this just irresponsible self-indulgence? The doctors who search for these miracle cures, and the patients who rely on them, simply come off as too lazy to commit themselves to a tougher (if more effective) treatment. Pills are great tools, but they should not be applied to every situation. They're just an easy, temporary fix for a long-term psychological problem. Therapy works. It may take longer, and require greater effort on the part of both physician and patient, but it works. Furthermore, it instills a useful self-confidence in patients who were, before, riddled with anxiety over their physical soundness. It teaches them one of the most important lessons recovering hypochondriacs can learn: you do *not* have to live your life in fear of any disorder. Including hypochondria.

In conclusion, right now, my neck, back, and shoulders hurt like hell. However, I'm not worried. That's what you get for staying up until 4 a.m. to finish biology essays.

But even if I was worried that it was something serious---in other words, if I *were* a hypochondriac---I would know that I wasn't at the mercy of my own phantom symptoms. I would know that, with some work and a lot of good therapy, I could overcome my condition. I would recognize my anxiety for what it was, and know that it was a mental rather than a physical problem, and therefore fixable.

Most importantly, I would know that I wouldn't have to rely on pills for my recovery.

That being said, I'm going to go pop a few aspirins.

***


References


1) Prozac works?, ... Dr. Fallon and prozac

2) The Norwegian Doctor, ... Dr. Wilhelmsen's idea about treatment

3) Some basics on hypochondria, ... more basics on hypochondria. Due to the updating of Columbia's records, this page may no longer exist. Sorry!

4) Hypochondria and the internet, ... can the internet actually make hypochondriacs worse?

5) The USA Today on hypochondriacs, ... basic information on hypochondria

6) Columbia University's health journal, ... due to updating of Columbia's records, this page no longer exists! I'm sorry!

7) The difficulty with treating hypochondriacs, ... one doctor's perspective

8) Dr. Wilhelmsen, Part 2, ... why treatment works.

***


Tearful Serenity: Crying Away the Stress
Name: Nomi Kaim
Date: 2003-12-23 21:32:31
Link to this Comment: 7579


<mytitle>

Biology 103
2003 Second Paper
On Serendip

Some days you've just had it. You've been talked at all day by people you couldn't care less about, the lady at the convenience store snapped at you, your friend invited herself over right when you had exactly one hour to write a paper, you got caught in a traffic jam going shopping, you're starting to seriously rethink your life career ... and now there's a thirty dollar parking ticket stuck on your windshield because that darn machine wasn't accepting quarters.

You burst into tears.

Tears, stupid tears! Always coming when you least want them. Now everyone on the street is looking at you and your eyes are so blurry you trip over the bumper and stumble into the street. What a klutz. How humiliating! Why do you always have to cry like this?

But everybody cries. For its capacity to signal physical or emotional distress, crying has left an indelible mark on the slate of human history. Where would art and poetry be without tears? In fact, where would we be? In truth, crying plays an essential role in our biology as well as our social and cultural experiences. We can't stop the tears from flowing, but we can investigate why they flow – and why crying might not be, after all, such a bad thing to do.

Tears are body excretions, just like sweat and mucus and urine. We don't usually like to think about body excretions, but when we do, we bear with them because we know they have important functions. Sweat removes excess salts from the body and cools us; mucus traps surrounding pathogens; urine and feces expel unneeded, toxic waste products that would harm the body if they remained within it. All three contribute to the body's self-regulatory or homeostatic nature, readjusting for balance. Tears, too, must serve a biological, homeostatic purpose. But what? In fact, there are three known answers to this question.

Scientists distinguish three kinds of tears, which differ from each other by function and also, probably, by composition. Basal tears actually form continuously. We don't experience these minute secretions as tears because they don't "ball up" as we are used to tears doing; instead, every time we blink, our eyelids spread the basal solution out over the surface of our eyeballs(1). Basal tears keep our eyes lubricated, important in preventing damage by air currents and bits of floating debris(2), (3).

Basal tears, like all tears, have numerous components. A little bit of mucus allows them to adhere to the eye surface without causing harm. The main part of a tear contains, predictably, water and salts (like sodium chloride and potassium chloride). The ratio of salt to water in tears is typically similar to that of the rest of the body, so there is no net change in salt concentration; nonetheless, if the body's salt concentration climbs too high, it will take advantage of the tear solution, loading it with extra salt. Tears also contain antibodies that defend against pathogenic microbes, and enzymes that help destroy any bacteria the eye encounters(1), (4). A thin layer of oil covers the tear's outside to keep it from falling out of the eye before its work has been done(5).

Our eyes produce irritant tears when hit by wind or sand (or insects or rocks). Irritant, or reflex, tears have the same constituents as basal tears, and work toward the same goal: protecting the eyes(1), (6). However, since they are designed to break down and eliminate eyeball-intruders like airborne dust, these tears tend to flow in greater amounts and probably contain a greater concentration of antibodies and enzymes that target micro-organisms. Thus, irritant tears are not just basal tears in greater quantity; different biological processes precede the excretion of the two types of solution.

The voluminous tears that so rapidly move us to frustration or pity are, of course, emotional tears(7). Secreted in moments of intense feeling – sometimes joy, but more often sorrow – these tears aren't there to cleanse the eyes of irritating microbes or debris. Yet they do serve a purpose; the function of emotional tears can be inferred from their constituents. Emotional tears contain much more (perhaps 25% more) of one important ingredient than basal or irritant tears do: proteins(8).

What do proteins do? Well, what can't they do? We know very well they can be involved in anything and everything. The proteins found in emotional tears are hormones that build up to very high levels when the body withstands emotional stress(9).

If the chemicals associated with stress did not discharge at all, they would build up to toxic levels that could weaken the body's immune system and other biological processes. But here, as in other areas, the body has its own mechanisms of coping. We secrete stress chemicals when we sweat and when we cry. Clearly, then, it is physically very healthy to cry, regardless of whether it feels awkward or embarrassing socially. The reason people frequently report feeling better after a well-placed cry is doubtless connected to the discharge of stress-related proteins(10); some of the proteins excreted in tears are even associated with the experience of physical pain, rendering weeping a physiologically pain-reducing process(8). Conversely, the state of clinical depression – in which many of the body's self-healing processes appear to "shut down," including, often, emotional tears – is most likely exacerbated by the tearless victim's inability to adequately discharge her pent-up stress. Psychologists consider weeping freely an important stage in the healing process. And although this notion may appear to be psychological in origin, involving the confrontation of one's own grief, it also holds physiologically: crying can reduce levels of stress hormones. Rejuvenating!

One major stress hormone released from the body via tears, prolactin, is found in much higher concentration in women's bodies than in men's. (This makes sense when you consider that the hormone is also implicated in the synthesis of breast milk.) (7), (10) Interestingly, prolactin appears not only to be secreted in tears but also to play a role in their formation. Levels of prolactin in the body correlate positively with frequency of emotional crying(5); as a whole, women cry more often than men (perhaps four times as often, according to one study) and also have a whole lot more prolactin (60% more) (8).

From an evolutionary perspective, why does it make sense for women to cry more often than men? Clearly, this phenomenon plays no minor role in our social interactions; culturally, greater female weepiness is taken for granted in perhaps every area of the world. But why should it be so biologically? Don't men get stressed out as often as women? Don't they need to release their tension just as much? Perhaps prolactin plays a double role, increasing a person's vulnerability to stress as well as her tendency to cry to discharge that stress. If this is the case, men may not be inclined to get as stressed out as women, who harbor more of the "stress-sensitive" protein prolactin. Thus, women's increased reliance on tears for stress release may be their bodies' way of maintaining homeostasis: they take in more stress, they pour out more. The fact that men's tear glands are, as a whole, structurally smaller than women's supports the notion that they are used less(10).

But this theory seems faulty. I have seen many men become just as overwhelmed as women by the fast-paced, demanding culture we live in. Prolactin is not the only stress hormone, after all. In all likelihood, men have higher concentrations of other stress-related hormones than women. It could be, instead, that men discharge their stress through different channels. Men tend to sweat more than women, and sweat contains many of the same chemicals as tears. Perhaps sweating reduces men's stress in a fashion comparable to women's crying. Men might also urinate more, another way to rid the body of built-up waste products. In fact, it makes sense evolutionarily that men would benefit from discharging stress chemicals in ways other than crying; heavy crying blurs the eyes and prevents vision, antithetical to tears' most basic eye-preserving function. Men, traditionally the hunters, would need to be able to see their prey to survive. But hunting has got to be stressful! Either the age-old trials and tribulations of hunting molded men into organisms less vulnerable to stress (which would support the original hypothesis involving prolactin as the stress hormone), or men have been releasing their stress through alternate pathways. Sweat, which does not usually impede vision, would be a functional alternative. (Men's evolutionary tendency for greater excretion efficiency than women is evidenced by the way they urinate. In the heat of the jungle, pursuing a woolly mammoth, it pays not to have to sit down to pee.)

If blurry eyes were a nuisance to prehistoric men, they could not have been a great boon for women, either. With their annoying propensity to cut off vision, how did emotional tears survive the test of time? None of the body's other natural processes involve the suppression of one of its basic senses. We need our eyes to see!

Perhaps, once again, our bodies take care of this problem implicitly. Tears, along with saliva, stomach acids and other secretions, are generally only produced when the body is in a state of quiescence or calm. These functions are under the control of the parasympathetic nervous system, a system that operates most efficiently when there's nothing too exciting going on to stimulate the body(11). After all, it would be unreasonable for your body to focus its energy on digesting your breakfast when you're being pursued by, say, a saber-toothed tiger! Same goes for emotional tears – and, indeed, for the feelings associated with them. We don't fully realize our fear and sadness until after the fact, when we get the chance to sit down, cry, and de-stress. Maybe, in the course of our evolution, our ancestors could afford to lose their vision to tears sometimes so long as they limited their crying to after the trauma. Nonetheless, it seems odd to me that a process which so blatantly interferes with one of the senses should survive until today. It's a good reminder that evolution is a work in progress!

One of the reasons emotional tears so fascinate people is that we don't see them in other species. All animals go to the bathroom; many animals sweat. All animals lubricate their eyes with basal tears. However, discounting a few unverified tales of weepy gorillas and elephants (which may well prove, someday, to be accurate), it seems humans are the only ones to cry(10). Why?

Emotional tears may be a piece of the equation that renders human beings biologically and culturally unique (or so we think...). We know our enlarged cerebrums and manual dexterity afford us thoughts and social liaisons as yet unwitnessed in other species. Our faces also hold the key to wonderful communication; we can smile, and we can cry. Smiling clearly developed chiefly for social purposes; crying, on the other hand, may have begun as a solely biological mechanism (for reducing stress) and then acquired an important social meaning. And so crying evolved for the dual purpose of physically dissipating tension and conveying profound emotion.

It may seem, when you cry in public, as though your body has failed you; a mistake, you think; should have been saved for later; sort of like peeing in your pants. It is more likely, however, that you have reached a level of stress that is detrimental to your health. You should let it out. It's okay to cry.

It is good to cry.

End Note:

Though emotional tears are likely unique to humans, all animals secrete irritant tears, and some secrete them in greater volumes than human beings. It is interesting to note that sea animals, including seabirds, rely most heavily on tears for removing salts from their bodies(10). Do these animals harbor especially high levels of salts because they originated in the ocean? But so did humans. Is it because they eat salty sea-foods? Or perhaps the vicious ocean winds cause increased irritant tearing in these animals, and they evolved to make good biological use of their bountiful tears?


References

1)Tears: Why Do We Cry?, readable school website describing the physiology of tears

2)Crocodilian Biology Database: Do crocodiles cry 'crocodile tears'?, some fun facts about crocodiles and crying

3)Science and the Creator: "My tears into Thy bottle", an interesting religious take on the enigma of human tears

4)Re: What are tears made from?, a little e-mailed message summarizing the composition and functions of basal tears – very brief

5)Tears, good source of information on the role of tears and crying patterns in men and women

6)Do Animals Shed Emotional Tears?, information and speculation on animals and tears

7)Why do we cry?, a few concise facts about tear composition and crying patterns

8) There is Healing in Your Tears, a personal tear story, some tear facts, and a bit of common-sense advice on crying

9)Tears, highly readable information and history of tear research, set in the context of child abuse

10)Sob story: why we cry, and how, interesting Boston Globe article covering a lot of details on tear composition and function

11)Sympathetic Innervation, complex information on the sympathetic and parasympathetic nervous systems






© by Serendip 1994 - Last Modified: Wednesday, 02-May-2018 11:57:22 CDT