Biology 103 Fall 2005 Papers Forum



CCortex: More Human Than Human?
Name: Nick Kreft
Date: 2005-12-10 14:33:35
Link to this Comment: 17370



Biology 103

2005 Final Paper

On Serendip

CCortex™ is a massive computer program being designed by Artificial Development (AD). Their aim with this program is to simulate the human brain, more specifically the human cortex and its relation to human peripheral systems. In order to do this, they are simulating, with computer technology, 20 billion neurons and 20 trillion neural connections. AD claims that their simulation reaches "a level of complexity that rivals the mammalian brain," and also that it is "the largest, most biologically realistic neural network ever built." (1) Such a claim necessarily raises issues relating to computer-based artificial intelligence: how accurate will this program be with relation to real human thought? How exactly does a computer attempt to emulate human thought? Is computer emulation of human thought even a desirable outcome?

In order to answer the first two questions, it is necessary to explain a bit more information about the inner workings of CCortex™. First of all, though AD claims that CCortex™ is very similar in process and "organization" to the human cortex, it is completely dissimilar in terms of size. It is composed of 1,000 processors contained in 500 separate network elements. (2) Thus, even were the CCortex™ system to accurately emulate the human cortex, its application would be limited because it is extremely expensive and space-consuming to reproduce.
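A quick back-of-the-envelope calculation gives a sense of the scale involved: 20 trillion connections spread over 20 billion neurons works out to an average of 1,000 connections per simulated neuron, and 20 billion neurons across 1,000 processors means each processor is responsible for roughly 20 million neurons. For comparison, real cortical neurons are commonly estimated to form several thousand synapses each, so even a simulation this large is sparse by biological standards.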



In terms of actual function, the CCortex™ system runs a personality emulation program that can "learn". The computer network is trained using a text-based, chat-like interface, through which the network can respond to a number of questions. The builders then reward the network according to its answers, and so the computer is taught using the classical conditioning method. (3)
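To make the training procedure concrete, here is a minimal sketch, in Python, of the kind of reward-driven question-and-answer loop described above. Everything in it is a hypothetical illustration; AD has not published CCortex's™ actual code or interface, and a real neural simulation would adjust connection weights rather than selecting among whole answers.

import random

# Hypothetical sketch of a reward-driven "chat training" loop.
# Nothing here reflects AD's actual CCortex implementation.
class TextLearner:
    def __init__(self, candidate_answers):
        # Every candidate answer starts out equally likely.
        self.weights = {a: 1.0 for a in candidate_answers}

    def respond(self, question):
        # Choose an answer with probability proportional to its weight.
        answers = list(self.weights)
        return random.choices(answers, [self.weights[a] for a in answers])[0]

    def reward(self, answer, amount):
        # The trainer's reward strengthens (or weakens) the chosen answer,
        # so rewarded responses become more likely over time.
        self.weights[answer] = max(0.1, self.weights[answer] + amount)

learner = TextLearner(["yes", "no", "maybe"])
for _ in range(100):
    answer = learner.respond("Is water wet?")
    learner.reward(answer, 1.0 if answer == "yes" else -0.2)

After enough trials, "yes" comes to dominate, which is the essence of the conditioning loop the builders describe: a response followed by reward is repeated.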



Does all of this add up to a reasonable, or even desirable, reproduction of the workings of the human cortex? I argue that it does not, but also that it cannot. Granted, this constitutes what, in all likelihood, could be the closest computer approximation of human thought and interaction. But what is 'close' in a case like this is very far away from an actual human brain. Even discounting the size issue alluded to earlier, there seem to be a number of differences in complexity that call into question the ultimate similarity between CCortex™ and the human brain.



The first of these is CCortex's™ manner of "learning" by this fairly basic Pavlovian response method. Classical conditioning is just one of the many ways humans learn. (4) In addition, classical conditioning normally only works under certain circumstances and in certain ways, such as for unconscious processes. (5) CCortex™ has been programmed to learn in only one way and, consequently, cannot learn in any way outside of its specifications. Humans, by contrast, can learn in many different ways depending on what it is they are learning; relying on classical conditioning alone narrows what CCortex™ can possibly learn. Also, different humans learn better through varying methods. Some learn better by reinforced visual stimuli, some learn better by reinforced audio stimuli, and so on. If another CCortex™ were built, unless it was specifically designed otherwise, it would learn by exactly the same method as the original. This is true no matter how many CCortex™ machines are built. Granted, the things that each machine "knows" will differ on a case-by-case basis, but their methods of learning will be identical. These are some of the things that ultimately separate CCortex™ from the human brain.



Even if CCortex™ could properly emulate human thinking and learning, I argue that this is not a desirable outcome. To start with, the field of psychology has been devoted to examining human intelligence and instinct for over a hundred years, so to a fairly large extent this work is already being done, and CCortex™ may be mostly redundant for these purposes. One thing CCortex™ offers that the human brain itself does not is the fact that it was built by humans. This could help us understand more exactly the nature of neural connections and the neural system in the brain. Even so, the understanding it offers is limited. AD is working with a simulation of neurons and a simulation of neural connections, and so the building of CCortex™ can, in this vein, only help us understand how to properly simulate the human neural system in one very specific way – through a computer program. There are most likely other possible simulations of the neural network that might also give us a firmer understanding of the brain, but these are not being explored here.



This entire discussion raises the issue of human learning and creativity. I have assumed that human thought and creativity are somehow individualized, and not merely a long and complicated response algorithm within the human brain. But could the latter be the case? Humans all act somewhat differently from one another, despite our general similarities. This difference can potentially be explained away, though, by noting that every human deals with slightly different circumstances than all other humans. Given these different circumstances, is it possible that difference can be explained entirely by a person reacting to their unique environmental conditions? I am tempted to disagree with this notion, if only because the entire field of genetics makes the very good point that environment interacts with innate differences (genes) within all of us to create the person we are. The question is, at its core, a good one to consider, though, and a worthwhile topic for further discussion.



One important aspect of CCortex™ that I have purposefully left out is AD's professed applications for this network. AD is not merely acting on a whim to try to absolutely emulate the human brain. Rather, they view CCortex™ as a tool primarily for international business purposes, with other applications in artificial intelligence and psychology. They make no pretense of being able to duplicate a human's thought process, and it does not seem as if they are even interested in doing so. They are trying, rather, to create a more accurate computer simulation of the human brain in order to learn more about both artificial intelligence and humans simultaneously. (1) I used CCortex™ primarily as a jumping-off point for a discussion of artificial intelligence in general, and so many of my points are applicable to a wider range of cases than just this one.



1) The AD Information Page on CCortex™.

2) Roland Piquepaille's Blog, contains some pictures of CCortex™.

3) Ian Yorston's Blog, links to an AD press release.

4) Wikipedia's Article on "Learning".

5) Lynda Abbot, Ph.D.'s page on Behavioral Conditioning.


Predator Plant: The Story of the Venus Fly Trap
Name: Sara Koff
Date: 2005-12-12 14:54:20
Link to this Comment: 17376



Biology 103

2005 Final Paper

On Serendip

Imagine you're a small insect buzzing about when a brightly colored plant catches your eye. You fly toward it and softly land on the sticky portion of the plant. Unfortunately, this was a fatal mistake. The plant begins to close around you, and no matter how you struggle, you are trapped. You have just become the latest meal for a carnivorous plant known as the Venus Fly Trap.
A plant that eats bugs seems like something out of a science fiction novel, but in fact carnivorous plants have existed for thousands of years, and there are over 500 different types of these plants. The most famous is the Venus Fly Trap, Dionaea muscipula (1).


Although the Venus flytrap has captivated people across the world, the plant is confined to a very small geographic area (1). In the wild, flytraps are found in a 700-mile region along the coast of North and South Carolina. Within this area, the plants are further limited to humid, wet, and sunny bogs and wetland areas (3).


Flytraps actually get most of their sustenance the way other plants do: through photosynthesis. The plant uses energy from the sun to convert carbon dioxide and water into sugar and oxygen. The sugar produced is then converted to energy in the form of ATP, through the same processes our bodies use to process carbohydrates (1). However, in addition to synthesizing glucose, plants also need to make amino acids, vitamins, and other cellular components to survive. In the bogs where the Venus flytrap is found, the soil is poor, and the minerals and nutrients necessary for a plant's survival are scarce. Most plants cannot live in this habitat because they have no way to make up for these missing nutrients. The Venus flytrap, however, has adapted to thrive in the acidic bog soil by using an alternate means to acquire elements like nitrogen. Insects are rich in nitrogen as well as other key nutrients missing from the soil, and so are a perfect means for the plant to gain these elements for survival (3).
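In balanced form, the photosynthesis reaction described above is:

6 CO2 + 6 H2O + light energy → C6H12O6 (glucose) + 6 O2

Cellular respiration, which produces the ATP mentioned above, runs essentially this reaction in reverse.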


Since insects are an important part of the flytrap's diet, the plant must have the ability to trap its prey. The leaves of the Venus flytrap open wide, and on them are short, stiff hairs called trigger or sensitive hairs. When anything touches two or more of these hairs enough to bend them, the two lobes of the leaves snap shut, trapping whatever is inside. The trap can shut in under a second. At first the trap only closes partially. This is thought to allow time for very small insects to escape, because they would not provide enough nutrients. If the object isn't food, the trap will reopen in about twelve hours and spit it out (2).


When the trap closes over food, the cilia, finger-like projections, keep larger insects inside. In a few minutes the trap will shut tightly and form an air-tight seal. Then glands on the leaf surface secrete several digestive enzymes that help to decompose the insect. Once the insect has been digested sufficiently, the leaf re-opens for another victim (2).


It is the act of trapping the insect that has fascinated biologists for years. How is it possible that a plant can react to the stimulus of touch? There have been many theories over the years about how and why the flytrap reacts in such a way. Even the most recent theory is not 100 percent clear and is only a collection of circumstantial evidence without direct links to demonstrate cause and effect. When the hairs are triggered, a change occurs in the electrical potential, and this change sends a signal to the lower cells of the midrib. The next few steps occur so quickly that biologists are unsure what happens first. There is a substantial increase of the growth hormone IAA in the cells of the midrib. The change in potential caused by the response of the trigger hairs causes hydrogen ions to move rapidly into the cell walls.
The next steps are merely assumptions, but the most accepted theory is that "a proton (H+) pump moves H+ ions out of the midrib cells and into the cell wall spaces between the cells" (4). Hydrogen ions naturally make the area of the cell wall more acidic. These ions are able to loosen the cell walls by dissolving the calcium that holds the cellulose together. This reaction causes the lower side of the midrib to become limp. As calcium moves out of the cell walls, the calcium concentration inside the cells increases and the cells absorb water (4).


Once the calcium enters the cell, there is a higher concentration of calcium, and a lower concentration of water, on the inside of the cell than on the outside. Water then enters the cells by osmosis. Since the cell walls have been loosened, they are able to expand as they take in water, and the cells grow (4).


This growth of the cells causes an expansion of the leaf and the closing action of the trap. This all occurs so quickly that the trap is able to shut in less than a second (1). The cells remain at this larger size, and the cellulose eventually increases to strengthen the walls. All of these steps serve merely to close the trap; in a few days it will have to re-open. Once the insect is digested, the cells on the upper surface of the midrib will grow, much more slowly, and the leaf will re-open. The plant is unable to grow so rapidly forever. That is why it is only able to close its trap about seven times during the life of a leaf (2).


The Venus flytrap in many ways remains a mystery. It is one of the most unusual plants on Earth, and it shows the power of adaptive evolution. The Venus flytrap is not nearly as dramatic as it is portrayed on TV and in films, but it is a bizarre organism worthy of our study.

1) http://science.howstuffworks.com/venus-flytrap6.htm


2) http://www.botany.org/bsa/misc/carn.html


3) http://www.botany.org/Carnivorous_Plants/


4) http://www.botany.org/Carnivorous_Plants/venus_flytrap.php


Breeding Back the Aurochs
Name: Magdalena
Date: 2005-12-13 00:29:44
Link to this Comment: 17383

Biology 103
2005 Final Paper
On Serendip

M. Michalak
Bio 103—Prof. Grobstein
Web Paper #3

Breeding Back the Aurochs

            Our discussion of genetics got me thinking about the process of breeding back animals to produce something similar to animals that have since gone extinct.  These experiments have been done with the aurochs, the quagga, and the tarpan.  I remember learning about these projects as a child and never hearing them brought up in a biology class, so I decided to take this opportunity to explore.  I decided to focus on the aurochs, since it seemed to be the only one of those three animals to have caused a controversy in terms of being reintroduced to the wild.  I wanted to learn how close the back breeding had gotten to recreating the aurochs, what the motivation had been for the back breeding, and what the controversy was in the first place.

            The aurochs (Bos primigenius) was the predecessor of modern cattle; it first appeared on earth around two million years ago in the area that is now India, spreading afterwards to Europe.  It is most commonly known through the many Paleolithic cave paintings(1) that exist of it.  These give modern scholars a fairly decent idea of what the aurochs might have looked like; compared with skeletons and remains frozen in permafrost, they allow a fairly accurate reconstruction of the aurochs' size, shape, and colour.  Basically, it was a big cow—nearly 2m at the shoulder, and built vaguely like the love-child of the Spanish fighting bull and the gaur (a massive wild ox of Asia), with a dash of Highland cow thrown in just to make it hardier.  It adapted to the Middle East, to continental Europe, and even to the cold British climate.  The abundant verdant vegetation in these northern lands was great for the aurochs, but unfortunately also great for the humans who hunted them for food and sport.

            Despite the early demise of its contemporaries (such as the mammoth and the saber-toothed tiger), the aurochs managed to survive for an impressively long time.  The last aurochs(2) is thought to have died in Poland in 1627(3), long after the species had died out everywhere else.  Poland's extensive old-growth forests provided a safe haven, and the country's very early forestry initiatives (started as early as the 11th or 12th century) attempted to protect the dwindling population.  Nevertheless, the aurochs finally died out, most likely from a combination of factors including over-hunting, decreased grazing land thanks to domesticated cattle, and a lack of genetic diversity leading to weakened stock.

            In 1920, the Heck brothers, Heinz and Lutz, embarked on an attempt to back-breed the aurochs from cattle with aurochs-like qualities.  The brothers worked separately, one in Berlin and one in Munich, though only the Munich program survived World War II.  The animals this back-breeding produced, known as Heck cattle(4), look similar(5) to the aurochs of Paleolithic times and are often seen in zoos nowadays.  They're generally smaller, though work is still being done to increase their size and weight since aurochs were thought to be around 1,000kg (half the size of a rhinoceros).  There have been successful programs to release Heck cattle back into the wild, most notably in the Netherlands where there is absolutely no human interference.  Programs still monitored by people exist in parts of Germany and France.

            The breeding programs were supported by the Nazi Party during World War II as part of Nazi propaganda to create an "Aryan" historical mythology.  This answers my question as to why there were attempts to breed them back; apparently only a glorious wild cow such as the aurochs was good enough for the supposed Aryan race.  After WWII, the emphasis shifted away from propaganda and toward wildland management.  Current programs are geared towards introducing Heck cattle back into the wild to refill the ecological niche that the aurochs would have occupied.  This is where the previously-mentioned controversy develops: nobody really knows what niche the aurochs filled.  Some people claim that the aurochs, like the endangered European Bison, roamed the plains and grasslands.  Others claim that the aurochs inhabited the forests and marshes.  Delving into this, I found out that in order for grasslands to be maintained, they need to be "mowed" and reseeded by grazers that do not feed only on grasses (the way domesticated cattle do); this keeps the population of shrubs and trees down and actually creates plains.  Domestic cattle aren't hardy enough to survive the harsh elements on their own, and European Bison are endangered and can't handle all the central European grasslands on their own.  Critics of the Heck cattle program state that emphasis should be placed on conserving the European Bison instead of attempting to introduce a new species into the mix.

            From what I was able to find, I think that the theory of the aurochs inhabiting the forests and marshlands is more likely than that of them inhabiting the grasslands, at least in Europe.  Their coloring wouldn't help them blend in on the open plain, and their relatively short fur would have needed the extra insulation that low, densely-grown forests would have provided in the winter.  Looking at pictures of Heck cattle, I don't really see much similarity; everything from the horn shape to the coat to the size is off, as well as the build in general.  Heck cattle just look like a combination of Angus and Highland cattle.  While I think that breeding back is an interesting exercise in some cases, I can see the release of Heck cattle into the wild causing more harm than good, specifically by encroaching on bison territory and taking over the bison's grazing lands.  Still, breeding back a breed that's hardy and introducing it into places that no longer have a large native grazer could be very beneficial in such places.  I think we need to understand a lot more about the aurochs, though, before we can even attempt to recreate them in any convincing fashion—and then, we'd have to ask ourselves what the motivations are.

1) http://users.aristotle.net/~swarmack/hodgraph/aurochs2.GIF

2) http://en.wikipedia.org/wiki/Image:Jaktorow_pomnik_tura.jpg

3) http://users.aristotle.net/~swarmack/aurochs.html

4) http://en.wikipedia.org/wiki/Heck_Cattle

5) http://extinctanimals.petermaas.nl/


The biology of a sprain
Name: Stephanie
Date: 2005-12-14 11:26:49
Link to this Comment: 17395

Biology 103
2005 Final Paper
On Serendip

I have recently had the frustrating experience of spraining my ankle. I understood the ideas of R.I.C.E.—rest, ice, compression, elevation—and I understood that my ankle hurt, was swollen for weeks, and had bruises in places that had not made direct contact with the rock that I slipped on. Worst of all, I understood that my ankle was weak and kept causing me to fall unexpectedly on it. But I did not understand why R.I.C.E. was supposed to help my ankle get better, what was causing the prolonged swelling and bruising, or what specifically was weakened and causing me to fall and re-injure it every week. Researching this topic was moderately frustrating: there are a lot of medical websites that "dumb down" the information to a point where I probably could have written them based on my own first aid knowledge, and there are a few, more helpful, medical websites that sent me on dictionary treasure hunts at every word. I hope to be able to shed some light on the physiological processes that occur during a sprain.

There are two types of common ankle injuries that can lead to sprains. One, called an inversion injury, is when the ankle rolls outward and the foot rolls inward. This is an injury to the lateral ligaments, which are the anterior talo-fibular ligament, the calcaneo-fibular ligament, and the posterior talo-fibular ligament (2). The other, which is far less common, is when the ankle rolls inward and the foot outward, called an eversion injury; this is an injury to the medial ligaments (1), or the deltoid ligament complex (2). Ligaments are made up of collagen tissue (3), which is also the type of tissue that makes up skin and bones (10). Sprains result when the ligaments are stretched farther than they normally would be. This overstretching may tear the ligament partially or completely (2).

Ankle sprains are divided into three categories, depending on their severity. A Grade I sprain is the least severe; no instability results, and although there might be microscopic damage to the ligament, no tearing occurs. In a Grade II sprain, the ligaments may be partially torn, but no significant instability results. A Grade III sprain is the most severe; the ligament is torn and there is significant resulting instability. The most severely (and most commonly) damaged ligament is the anterior talo-fibular ligament, while the least commonly and least severely damaged is the posterior talo-fibular ligament (2). To test whether the ligaments are completely torn, doctors may perform a stress test in which they put posteriorly-directed force on the tibia. If the anterior talo-fibular ligament is torn completely, the tibia will visibly shift backward at the ankle and will snap back when the force is removed (2). A complete tear may result in the need for surgery, but surgery is not recommended or useful in most cases of sprained ankles (3); there is controversy over whether a person with a sprained ankle should have surgery, and studies show that it should only be done if there is evidence of a complete tear of a ligament (8). It is also possible for the ligament to be torn away from the bone. This is known as avulsion. Walking would be impossible with this type of injury (6).

Not long after the injury, the ankle begins to swell up. Bruising may be more prolonged and may not occur at all in minor injuries (3). Bruising can be caused by blood vessels damaged in the impact, but can also indicate that a ligament is torn (5). I think that there could be other reasons for the bruising as well. Perhaps blood rushes to the area in order to help with healing and is then stuck there. Fibrosis also tends to occur following a sprain (3). Fibrosis is the formation of excessive fibrous tissue as a reparative or reactive process. This causes swelling in the area of the injury. Also contributing to the swelling is serous fluid released from the torn tissues of the ligament (4). I had to look up what "serous fluid" was in the dictionary and found that serum can refer either to watery fluid found in animal tissue or to a yellowish fluid found upon separating clotted blood. I could not find information about which serum accumulates following an ankle sprain. It would make sense for it to be related to blood clotting, as there is typically bruising with an ankle injury, but the type found in edema is the watery type of serum. This swelling could be constricting the blood flow as well, which may contribute to the bruising.

Immediately after an ankle injury, it is necessary to keep moving the ankle. The reason for this is that, as edema sets in, the ankle tends to stiffen into a slightly flexed, inverted position (4). I tried to find information about why it stiffens into this particular position, but had no luck. I assume that it is because of the position of the ligaments when they are loose. If this stiffening occurs, then rehabilitation will have to be put off until a better range of motion is regained (4). Most children are taught in elementary school health class that injuries require rest, ice, compression, and elevation. Rest, in the case of an ankle injury, helps to prevent further injury from direct contact with the ground while walking. Icing is important because it helps to reduce the swelling of the injured area. The optimal temperature for cooling the ankle is about 55 degrees Fahrenheit, and icing should be done every few hours until the swelling and edema, or excessive accumulation of serous fluid, have stabilized (4). (I discovered while looking up "edema" that plants, as well as people, can have edema, an accumulation of water in their organs. This makes me wonder if plants can be "injured" in a way similar to humans.) Compression helps to milk edema fluid away from the injured tissue, reducing swelling (4). Also, reducing the edema will prevent it from stiffening and causing a delay in rehabilitation. A compression wrap can help to milk away the edema while also holding the ankle in place to prevent further injuries (1). Elevation of six to ten inches above the heart aids in venous and lymphatic drainage (4). This implies to me that elevation is not actually reducing the swelling so much as aiding circulation while the swelling is constricting the blood flow through the veins.

In addition to the common treatment of R.I.C.E., there are some alternative treatments that may be effective for a sprained ankle. If the sprain is very severe, it might be helpful to raise calorie and protein intake to accommodate the needs of the healing process, but this is not recommended for most sprains (6). Also, a person with a sprained ankle might take proteolytic enzymes in order to reduce the inflammation as well as to promote tissue healing. There is controversy over whether this actually works, as some trials found that healing was faster in people taking these enzymes than in people who were not, while other trials found no significant effect (9). Another alternative remedy is horse chestnut, which contains aescin, an anti-inflammatory that also helps to reduce edema (9).

About 25% of people who sprain their ankle have long-term joint pain and weakness. The joint will become unstable if it does not heal correctly and will be easily reinjured (1). A common medical mistake is to immobilize the ankle too much or for too long a period of time (4). This makes sense, because immobilization does not allow for any strengthening of the injured ligaments. The collagen fibers heal fastest, and orient along the lines of force, when supported mobility occurs (3). This is why doctors suggest using a brace that allows mobility after a sprain.

Complications other than the usual symptoms may also arise from ankle sprains. The meniscoid body is a small capsule that can get pinched between bones in the ankle, resulting in synovitis, a swelling of the membrane around this capsule that causes persistent swelling and pain. Eventually this can become permanent, but it can be treated with injected corticosteroids. Inversion injuries can result in damage to the superficial peroneal nerve that crosses over the anterior talo-fibular ligament. Another complication of an ankle sprain may be reflex sympathetic dystrophy, which is painful swelling related to osteoporosis or a sudden constriction of blood vessels (angiospasm) secondary to the initial sprain. The edema is different from that of torn ligaments, and the telltale symptom is pain with multiple trigger points that shift from one site to another. There may also be changes in skin moisture and coloration. A more mysterious complication is sinus tarsi syndrome, which is simply chronic pain in the area of the sinus tarsi. People are often misdiagnosed as having this because the anterior talo-fibular ligament crosses over the sinus tarsi and it is difficult to distinguish the exact point of pain. Thus, it is important for the doctor to examine both ankles in order to compare tenderness. Sinus tarsi syndrome can be treated with an injection (7). Another, seemingly newer finding is that anterolateral impingement lesions can coexist with chronic lateral ankle sprains. It is believed that recurrent inversion stress is the cause of both the chronic instability and the lesions (8).

I can now make sense of the healing process that is occurring in my ankle and why it was so important for me to apply ice and compression in the days after I sprained it. I was surprised to find little in the way of concise biological information about sprains, torn ligaments, or even R.I.C.E. on the internet. As I said before, websites seemed to be either too simplified, including little information about the underlying biology of a sprain, or too difficult for an ordinary reader (like myself) to understand. Hopefully, in sharing my findings with the world, someone will be able to benefit from having biological and medical information integrated in a way that is understandable.


Web Sources
(1) Applying a compression wrap for a sprained ankle.
(2) Ankle sprains.
(3) Acute ankle sprains.
(4) Management of ankle sprains.
(5) AllRefer Health: Ankle Sprain Swelling.
(6) MotherNature.com: Sprains and Strains.
(7) The Merck Manual: Common foot and ankle disorders.
(8) Lateral Ankle Instability.
(9) Numark Pharmacists: Sprains and Strains.
(10) Extracellular Matrix.


The Risks of Hormone Replacement Therapy
Name: Gillian St
Date: 2005-12-15 02:27:36
Link to this Comment: 17402



Biology 103

2005 Final Paper

On Serendip


When women go through menopause, usually between the ages of 50 and 60, their bodies undergo drastic hormonal changes. The most important change is the drop in the level of estrogen, a hormone that is important for maintaining bone density and strength. Estrogen also helps increase "good" cholesterol in the bloodstream and rid the body of "bad" cholesterol. When estrogen levels decrease, women become increasingly at risk for osteoporosis and heart disease (1). Other symptoms of menopause include mood swings, sleep disorders, and a decrease in sex drive. In order to control and even reverse the effects of menopause, scientists developed "Hormone Replacement Therapy" (HRT), a treatment of either estrogen alone (if the woman has had a hysterectomy) or estrogen combined with progesterone (if the woman still has a uterus). Early studies showed that HRT could help decrease the risk of osteoporosis and heart disease in postmenopausal women by boosting their hormone levels (4). Many doctors prescribed HRT to their female patients to help control their menopausal symptoms; today, there are about 6 million women across the country who take some form of HRT (5).

In the early 1990's, however, a group of scientists started the Women's Health Initiative (WHI) to conduct more extensive research on the long-term effects of HRT. The WHI study was the first long-term, randomized, clinical trial of HRT; approximately 67,000 women nationwide participated in the study (5). Both estrogen alone as well as estrogen-progesterone combination treatments were studied. In July of 2002, the combination treatment studies were halted because it was decided that the health dangers outweighed the research benefits. In March of 2004, the estrogen treatments were stopped for the same reason (4). It had become clear that there were far more risks involved in HRT than earlier studies had shown.

The WHI found that there was indeed a significant decrease in the occurrence of osteoporosis-induced hip fractures (34% fewer) and other fractures (24% fewer). This was caused by the estrogen boost provided by HRT, because the results were the same in women who took the estrogen-only treatments and women who took the combined treatments. However, there were also significant increases in the occurrence of heart attacks (29% more among women taking the combined treatments) and strokes (42% more among women taking both types of treatments). Thus, HRT was shown to be successful in decreasing women's risk of osteoporosis but unsuccessful (even harmful) in affecting their risk of heart disease (4).
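It is worth noting that these percentages are relative, not absolute, risks. As a purely illustrative calculation (these are not the WHI's actual incidence figures): if 30 of every 10,000 untreated women suffered a heart attack over a given period, a 29% relative increase would correspond to about 39 of every 10,000 treated women (30 × 1.29 ≈ 39), i.e., roughly 9 additional heart attacks per 10,000 women.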

Not only did researchers see a notable increase in heart disease, but HRT also had a significant effect on several different types of cancer. WHI results showed a 26% increase in the development of breast cancer among women taking both types of HRT treatments. Estrogen promotes the proliferation of breast cells, which increases the opportunity for the genetic mutations that cause breast cancer; it also stimulates the growth of breast cells, including those which are cancerous. The link between high estrogen levels and breast cancer is also supported by the positive correlation between the occurrence of breast cancer and the length of a woman's exposure to estrogen due to natural causes (such as an early menstruation onset or late menopause). In addition, researchers have found a strong positive correlation between high bone density and increased occurrence of breast cancer; the high bone density would partially be a result of estrogen (2). Thus, estrogen seems to be the hormone at fault, and a boost in estrogen levels would logically lead to an increased risk of developing breast cancer. Another research group, called HABITS, studied women who had been treated for breast cancer prior to their exposure to HRT; there was a significant increase in the recurrence of breast cancer in these women as well (2).

Estrogen and combined treatments also affect two other types of cancer: endometrial and ovarian. Endometrial cancer, the most common type of uterine cancer, occurs in the lining of the uterus. Estrogen causes this lining to grow during the menstrual cycle, which increases the likelihood that it will develop cancerous cells. The rate of endometrial cancer among women in the WHI study who were taking the estrogen-only treatment was 6-8 times higher than among women who were not taking HRT. Progesterone, on the other hand, causes the uterine lining to decrease in size at the end of each menstrual cycle, which in an HRT treatment counteracts the effect of estrogen on the lining. Thus, women taking the combined estrogen and progesterone treatments did not have a heightened risk of developing endometrial cancer (4). A different study, conducted by the National Cancer Institute in 2002, showed an increase in the risk of developing ovarian cancer among women taking estrogen-only treatments. There was also a positive correlation between the length of the women's exposure to the estrogen and the number of occurrences of ovarian cancer; for example, women who had been exposed to estrogen supplements for 20 years were at about three times the risk of women who had been taking it for fewer than 5 years (4).

In summary, both types of treatments showed the expected decrease in osteoporosis-induced bone fractures. However, a number of highly dangerous side effects were noticed among the women in the study. Women taking the estrogen and progesterone combined treatments experienced significantly increased risks of invasive breast cancer, heart disease, strokes, and blood clots in the legs and lungs. Women taking the estrogen-only treatments showed increased rates of breast cancer and strokes and did not exhibit the expected decrease in the likelihood of heart disease (3).

Overall, researchers concluded that the risks of taking HRT treatments far outweighed the benefits. In accordance with new American Heart Association guidelines, most experts have stopped recommending combined-hormone treatments to women approaching menopause. Some doctors still recommend estrogen-only treatments (because this type of treatment seems to result in only a few of the long-term risks caused by combined treatments), but even estrogen-only treatments are prescribed in very conservative doses and for the shortest possible duration (3). Even for women who have already been taking HRT treatments for a considerable length of time, it's not too late; when the women in the WHI study stopped taking HRT, their risk of developing breast cancer and other cancers decreased. Alternatives to HRT have been developed in order to help women at risk for osteoporosis, such as Vitamin D and calcium supplements as well as medications (calcitonin, risedronate, etc.). However, women who are already at high risk to develop osteoporosis for other reasons, and who take HRT to decrease this risk, are warned that the benefits of HRT in maintaining their bone density and strength do disappear when the estrogen boost is removed from their system. Thus, some doctors may still recommend that these women keep taking HRT treatments (4). Most other women have either already opted to stop their HRT or are currently easing off the treatments due to these new research findings.

Web Sources:

1) Patient information: Alternatives to postmenopausal hormone therapy

2) Patient information: Postmenopausal hormone therapy and breast cancer

3) Postmenopausal Hormone Therapy and Cardiovascular Disease in Women

4) Medical Encyclopedia: Hormone replacement therapy (HRT)

5) Bad News About Hormone Replacement Therapy


Onchocerciasis: A River of Tears
Name: Kate Drisc
Date: 2005-12-15 21:37:17
Link to this Comment: 17414

Biology 103
2005 Final Paper
On Serendip

Onchocerciasis, commonly referred to as river blindness, is the world's second leading infectious cause of blindness. This disease is endemic in thirty-six countries, thirty of which are in sub-Saharan Africa; the rest are in Latin America and Yemen, on the Arabian Peninsula. A total of 18 million people are infected with the disease, with 99% of cases reported in Africa. Of those infected, over 6.5 million people suffer from severe itching or dermatitis, 800,000 are visually impaired, and over 270,000 are blind (1). While scientists and humanitarians worldwide have worked to develop drugs, surgical methods, and pesticide spraying techniques to control the virulence of this disease, there is currently no cure. Furthermore, some of the drugs that have been administered have caused side effects as harmful as the disease itself. However, the problems behind eradicating onchocerciasis seem to extend beyond the limitations of Western medicine to include weak infrastructure and a lack of leadership and education in the affected African countries.

Onchocerciasis is commonly referred to as river blindness because the blackfly that carries the disease breeds in fast-flowing rivers and streams in inter-tropical zones. The rich, fertile river valleys of sub-Saharan Africa are therefore perfect breeding grounds for the blackfly to flourish. In fact, in some West African communities, about 50% of the men over 40 years old are blind as a result of the disease (2). The life-cycle of the disease begins when a parasitized female blackfly takes a blood-meal from a human host. The host's skin is pulled apart by the fly's teeth and sliced by its mandibles. A pool of blood is pumped into the blackfly, saliva passes into the pool, and the infective Onchocerca larvae are transmitted from the blackfly into the host's skin.

The larvae then migrate to the subcutaneous tissue, which contains fat and connective tissue surrounding larger blood vessels and nerves. In the subcutaneous tissue, the larvae form thick, fibrous nodules called onchocercomas, where they spend one to three months maturing and where they can live for up to fourteen years (2). Within ten to twelve months after the initial infection, adult female worms start producing microfilariae. The microfilariae have pointed tails, elongated posterior nuclei, and paired anterior nuclei. Each female can produce between 1000 and 1900 microfilariae per day, with maximum production occurring during the first 5 years of the female worm's reproductive life (3).
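Taking the middle of that range, a single female worm producing about 1,500 microfilariae per day over her first five reproductive years would release on the order of 1,500 × 365 × 5 ≈ 2.7 million offspring, which gives a sense of why the infection is so difficult to clear.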

The eggs containing the microfilariae mature internally within the female worm and are then released from her body one at a time. These microfilariae then migrate from the nodules to the skin, where they wait to be taken up by a blackfly. When the infected host is bitten by another female fly, microfilariae are transferred from the host to the blackfly, where they develop into infective larvae, and the life-cycle of onchocerciasis continues (3).

In response to the migration of microfilariae to the skin, white blood cells release cytokines that damage the surrounding tissue and cause inflammation. This kills the microfilariae, but it is also the cause of the morbidity associated with the disease. When the microfilariae die, the infected person's skin can become swollen and thick, a condition often called lizard skin; the skin can also become lax as a result of the loss of elastic fibers, most prominently in the areas around the groin (the so-called 'hanging groin' effect). Lesions, intense itching, and skin depigmentation are other common side effects of the disease (2).

The most serious side effect of onchocerciasis is blindness, another consequence of the immune system's reaction to the microfilariae. The microfilariae migrate to the surface of the cornea, where they are killed by the immune system. In the damaged area, punctate keratitis (inflammation of the cornea) occurs, which will clear up if the inflammation decreases. However, if the infection is chronic, sclerosing keratitis can occur, making the affected area opaque. Over time, the entire cornea can become opaque, causing blindness (4).

Larvicide spraying is one effort that has been used to combat the transmission of onchocerciasis. This was first done by the Onchocerciasis Control Programme (OCP), a group that combined efforts with the World Health Organization and three other United Nations agencies in the early 1970's to assist the ailing countries. The OCP sprayed larvicide over fast-flowing rivers in eleven countries in West Africa to control blackfly populations. The goal of larvicide spraying was to kill the larvae of the blackfly before they became adults, thereby controlling the blackfly vector populations. Each week, breeding sites in rivers and streams were "bombed" with the aerial application of larvicide. While this operation was relatively successful in controlling the disease in the open savanna, it could not be used in rainforest areas because of the difficulties of aerial access (5).

In terms of drug distribution, Diethylcarbamazine (DEC) was administered to treat onchocerciasis in endemic areas. It was developed during the Second World War and was given to Australian and American soldiers fighting in the Pacific islands who were infected with lymphatic filariasis (caused by thread-like parasitic worms that damage the human lymphatic system). In many cases, the drug would slow and even stop the progression of filariasis. After the war, DEC was used to treat onchocerciasis and was found to have a temporary holding effect on the disease. However, this drug is no longer recommended for onchocerciasis patients because of reported serious side effects, including irreversible eye damage, rash, peeling of the skin, diarrhea, and kidney damage. The severe side effects are attributed to the sudden death of billions of microfilariae, a response known as the Mazzotti reaction (2).

Suramin is another drug that has been used to try to control the effects of onchocerciasis. Suramin is administered by intravenous injection, given in six incremental weekly doses. It is most effective in treating trypanosomiasis (sleeping sickness), attacking the parasites as they circulate in the bloodstream. When Suramin was administered to onchocerciasis patients, it was effective in killing the adult worms. However, because of its intrinsic toxicity, severe adverse effects were reported, including thrombocytopenia (an abnormally low number of platelets in the blood), neutropenia (an abnormally low number of neutrophils, a type of white blood cell), photophobia (excessive sensitivity to light), foreign-body sensation, edema (swelling of an organ or tissue due to accumulation of excess fluid), corneal deposits, and a high incidence of optic atrophy (6).

Nodulectomy is a surgical procedure that has been used to try to permanently eradicate the disease from the infected person. In a nodulectomy, the nodules which harbor female Onchocerca volvulus are surgically removed from the skin. These procedures are performed in hopes of interrupting the transmission of the disease by removing the parasites from the human population. Unfortunately, while nodulectomies do assist in reducing the intensity of the disease, new nodules continue to appear in the infected person as more larvae mature (7).

In 1987, Merck & Co. instituted the Mectizan Donation Program (MDP), a program that works together with ministries of health and other non-governmental development organizations to provide free Mectizan to those who need it in endemic areas. It is currently the most effective drug on the market for controlling and limiting the transmission and adverse side effects of onchocerciasis. Mectizan (ivermectin) is a tablet that must be administered once a year; it is derived from compounds produced by the bacterium Streptomyces avermitilis and causes nematode paralysis by impairing neuromuscular function. As a result, Mectizan is effective in paralyzing the microfilariae. However, the drug does not kill the adult worms, but only prevents them from producing additional offspring. While the drug is effective in minimizing the terrible side effects of the disease, it is by no means a cure, as it fails to fully eradicate the parasite from the infected person. Furthermore, severe adverse neurological reactions, including several deaths, have been reported in persons with a high intensity of Loa loa infection (a skin and eye disease caused by another nematode worm). This has seriously affected Mectizan treatment programs in areas potentially co-endemic for Loa loa, including much of Central Africa (2).

While the world has worked diligently to develop new medical treatments and other strategies to defeat onchocerciasis, the disease is still ravaging these underdeveloped areas. Although pesticide spraying, nodulectomies, and the distribution of Mectizan have controlled the spread of the disease, hundreds of thousands of Africans are still blind and suffering. Furthermore, it has been found that drugs such as Suramin and DEC can have toxic effects comparable to the disease itself. It is unbelievable to me that it is so difficult to battle a worm and a fly. It is certainly ironic that something so small, something that we view to be so insignificant, can have such deleterious effects upon countries that are already struggling to survive.

While the failure of Western science to produce a cure for onchocerciasis is certainly a major reason why many of these countries are struggling to overcome it, a closer examination illustrates that severe internal problems are also barriers that need to be conquered in order to destroy the disease. Teshome Gebre, an Ethiopian who has attended many public health conferences all over the United States, asserts that, "the [Ethiopian] economy goes from bad to worse...development resources are not utilized...there is poor management, poor government. We fight among ourselves instead of tackling our common problems" (8).

Gebre's statement leaves one to ponder the social and cultural barriers that are holding these underdeveloped countries back. The lack of education, unity, and stable leadership seems only to help the spread of disease. As a result, I think that the world should not just focus on scientific advances, but also on instituting programs to rebuild the infrastructure of these countries. Perhaps it is necessary to look beyond the scope of biology and science, which have so far failed to cure this disease, and delve further into rebuilding the cultural relations and infrastructures of these countries. The synergy of education, the reorganization of politics and leadership, and continued medical research could be a more effective means of permanently defeating this scourge and improving the general welfare of African citizens.

Sources

1) Division of Parasitic Disease website; provides information on the disease, its symptoms, transmission, and treatment options.

2) World Health Organization website; includes information on how the disease is caused, its transmission, and prevention and control methods.

3) Article by Jason F. Okulicz, MD, Physician, Department of Internal Medicine, Wilford Hall Medical Center; extensive disease information.

4) Encyclopedia article; information on the parasite's life cycle and the symptoms and treatment of the disease.

5) World Bank website; disease information and treatment options.

6) Optometry website; information on Suramin.

7) Research at the University of Tübingen; information on nodulectomies.

8) Carter Center website; provides disease information and describes what the organization does to help stop transmission. Also has an article from The Houston Chronicle.


As Kazakh as Apple Pie
Name: Zach Withe
Date: 2005-12-15 23:40:59
Link to this Comment: 17419

Biology 103
2005 Final Paper
On Serendip

Apples. We eat an awful lot of them – about 19 pounds per person per year [2]. They're America's second-favorite fruit, and a big business to boot: the annual American apple crop is worth about $1.8 billion [3]. On top of that, though, apples may be the first known successful instance of mass cloning. All in all, they're a very interesting sort of fruit.
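For a rough sense of scale: if a medium apple weighs about a third of a pound (an assumed figure, not one from the sources below), 19 pounds per person works out to roughly 55-60 apples a year, or about one apple a week.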

            Like anything else that reproduces sexually, apple trees are heterozygous – the offspring of an apple tree is likely to be very different from the parent tree. [1]  For human purposes, this is a bad thing. We want an apple to taste like an apple, or at least a Fuji apple to taste like a Fuji apple, a goal which would be difficult to meet if we had to wait 20 years from each tree's sprouting to know what kind of apple it would produce. Nevertheless, that's basically the process we once went through. In 1905, a USDA publication listed 14,000 varieties of apple grown in the United States. As one pomologist describes it, "They came in all shapes and guises, some with rough, sandpapery skin, others as misshapen as potatoes, and ranging from the size of a cherry to bigger than a grapefruit. Colors ran the entire spectrum with a wonderful impressionistic array of patterning—flushes, stripes, splashes, and dots." [1]

            In order to ensure more consistency of quality, however, apple growers turned more and more toward the process of grafting, a form of asexual reproduction which genetically amounts to cloning. A seed is allowed to sprout and develop a root system. Shortly after a stem begins to appear from the new seedling, the stem is cut just above the surface. A twig is then cut from the tree one wishes to reproduce, and attached to the cut surface of the new seedling. After a short time, the stem and the twig grow together. After that, they will continue to grow as if they had been joined from birth. [3]

            The resulting tree is an interesting specimen. Its DNA is actually different above and below the grafting point. The seedling DNA in the roots is not overwritten; rather, the roots never "know" that any cutting has taken place. The roots continue to do what they do, which is to grow and send water and minerals upward, not really "caring" where those materials are going. The aboveground portion of the tree, on the other hand, is totally unaware that its root structure is not its own. As long as it keeps getting water and minerals from somewhere, it just keeps doing what it does, which is to grow, branch, and eventually flower and produce fruit for reproduction. Note that in this process, the root DNA is thrown away with every generation – only the stem DNA, which is identical to its parent, reproduces.

            Grafting is a process known from ancient times, but little used until recently. First of all, it's time-consuming. If you're just trying to get the food you need to survive, poor-tasting, inconsistent apples from dozens of spontaneously sprouting apple seedlings are preferable to predictably tasty apples from one grafted tree. Additionally, however, early American apple growers had an idea that grafting was unnatural and would reduce the vigor of the apple crop over the long term [3]. With the advance of the economy and better-guaranteed food availability, however, the first objection dwindled in importance, and, pre-Mendel and pre-Darwin, the second objection couldn't compete with the appeal of bigger, tastier, hardier apples.

            Virtually the entirety of America's commercial apple crop today is graft-grown. Additionally, compared to the 14,000 varieties eaten in 1900, the number today is closer to 90, with just a few varieties (especially Red and Golden Delicious, Fuji, Gala, and Granny Smith) making up the bulk of the apples grown. [1, 2]  On a genetic level, the similarity is even more pronounced, as many of those 90 varieties are just hybrids of the others. The Fuji, for example, is a Red Delicious crossed with Thomas Jefferson's favorite apple, the Ralls Genet. People like the idea that anywhere you go, you can get an apple, and it will be the same as an apple anywhere else. It's the McDonald's factor. The difference, however, is that no biological systems rest on the diversity of fast-food restaurants. Genetic diversity in plants, by contrast, serves (or, rather, served, when we had it) an important purpose.

            Today's apple crops, genetically similar and genetically stagnant as they are, are extremely prone to pests and disease. [5]  They present a standing target to other organisms, which are constantly evolving to better attack them. In recent years, diseases like apple scab, cedar apple rust, and fireblight have appeared or become more prevalent [6], and growers have been forced to use more and more pesticides to keep the bugs off – about a dozen sprayings per season at this point. [5]  A severe epidemic, as seen with Dutch Elm Disease, could even wipe out the crop. Growers and scientists alike, for that reason, have begun to search for ways to broaden the gene pool.

            Just allowing more seedling reproduction would probably not do the trick, although boutique orchards have begun increased cultivation of heirloom and new varieties. [5]  At this point, it's estimated that 90% of the world's apples are descendants of two trees. [6]  While I'm unable to address the scientific merits of that claim, it appears that at least in spirit it's correct – there just isn't enough diversity in the American commercial stock to reinvigorate the crop in the near future. For that reason, scientists have begun to go to the source – Central Asia, specifically Kazakhstan, where it is believed apples originated. The wild apple groves around Almaty (Kazakhstan's largest city and former capital, whose name is connected to the Kazakh for "father of apples") have proven to be highly disease-resistant, and 15,000-20,000 of their seedlings have been planted for study in American research orchards. [8]

            One of the main dangers to the Kazakh solution is, of course, human shortsightedness. The Soviets cleared much of the Almaty apple forests, although that might be excused, since they had no idea the trees were anything other than just more apples. However, since the 1991 Soviet breakup, by which point the genetic importance of the forests was clear, there has been another spate of forest clearing, this time for luxury houses built with oil wealth. Since 1940, about 92% of the Almaty apple forests have been destroyed. [6]

            The story of the apple teaches us a few lessons about mucking with nature – what we can and can't do. Creating hybrids is simply guiding the natural process, selecting among the possible random pollen mates a plant might have. It trades good qualities that might have developed by random selection for good qualities that will develop by intentional selection, and trades the random bad qualities of the former for the unforeseen bad qualities of the latter. Either way, motion continues. Stopping evolutionary motion entirely, on the other hand, is profoundly unnatural. Sexual reproduction has become the norm among life on Earth for a reason – because varying lifeforms constantly "trying out" new features have a survival advantage over lifeforms that don't. It should come as no surprise, then, that having created a clone army, we now see those clones suffering from rare diseases en masse. It's a consequence we could have foreseen by examining the genealogy of any inbred family of European royalty.


Sources

1. Hensley, Tim; A Curious Tale: The Apple in North America, http://www.bbg.org/gar2/topics/kitchen/handbooks/apples/northamerica.html

2. Philips, Becky; Innovation, specialization grow with world apple market, http://www.wsutoday.wsu.edu/completestory.asp?StoryID=1256

3. White, Gerald B.; Cornell 2005 Fruit Outlook, http://72.14.203.104/search?q=cache:7ffXZHV-raQJ:aem.cornell.edu/outreach/outlook/2005/Chap8Fruit2005.pdf+american+apple+crop+value&hl=en

4. Asexual Reproduction, http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/A/AsexualReproduction.html

5. Banerji, Chitrita; Heirlooms for the Future, http://www.clf.org/general/index.asp?id=466

6. Levine, Steve; Kazakhstan's Eden of Apples May Also Be Their Salvation, http://www.mongabay.com/external/wild_apples.htm#1

7. Trivedi, Bijal P.; Quest for Better Apples Bears Fruit for U.S. Botanists, http://www.mongabay.com/external/wild_apples.htm#2

8. Frazer, Jennifer; Scientists Work to Preserve Apple Diversity, http://www.mongabay.com/external/wild_apples.htm#3

 


A Reason to Go Bananas? The Possible Extinction of
Name: Zachary Gr
Date: 2005-12-16 02:14:45
Link to this Comment: 17422

Biology 103
2005 Final Paper
On Serendip

The banana is by many accounts the world's most popular fruit. Despite its American connotation as a sweet, dessert-style fruit, the banana is a complicated and important food that comes in many varieties. Recently, in popular science magazines especially, there has been somewhat of a "banana scare." Some scientists have predicted that the popular Cavendish banana, which serves as the poster-banana for Americans, is going to be extinct in 10 years, with no replacement readily available((1)). This web paper will focus on the current debate surrounding the fate of the Cavendish banana.

The word "banana" is a general term that covers a large variety of species and hybrids in genus Musa of the family Musaceae. The word also is used for the fruit of the plant, which is technically a false berry. ((2)) Banana plants are not trees but rather herbs, with a stem that resembles a tree trunk. ((2)) Suckers grow out from the main plant, and it is on these suckers that the fruit clumps form. The eldest sucker replaces the main plant when the main plant dies. In this way, banana plants are able to reproduce indefinitely without genetic variation, as though the plant never really dies. ((2))

This fact concerning bananas is especially important in understanding the current "threat" to the Cavendish variety of banana. First, some information on the Cavendish: Cavendish bananas are the standard banana for much of the world. They are yellow, elongated tubes containing a sweet fleshy fruit which can be eaten raw (many banana types and plantains must be cooked before being consumed). The Cavendish was first found in Southeast Asia and brought over to the Caribbean in the early 20th century, being put into commercial production soon after. ((1)) The average American eats 26.2 pounds of Cavendish bananas every year, and 100 billion Cavendish bananas are consumed worldwide every year. ((3)) Every one of these fruits is identical, at least genetically, to its brothers and sisters. The Cavendish banana plant does not reproduce by mixing genes, but rather through the method of removing suckers and replanting them. Essentially, every Cavendish banana in the world is the same plant brought over to the Caribbean early in the last century. ((3)) The plant has no other means to reproduce; each fruit produced is 100% sterile. ((1))

Clearly, this can be a problem. As anyone with a basic understanding of the theory of natural selection knows, genetic diversity is the basis for the survival of any living organism. A series of clones, continuing on in an unchanging existence, will not be prepared for the onslaught of various challenges nature sets to her creations. That is why banana farmers are concerned for the Cavendish's future. Ironically, the Cavendish was adopted as the banana of choice because of a genetic strength: its resistance to Panama disease, or Banana Wilt, which devastated crops of an earlier world favorite, the 'Gros Michel.' ((4)) This does not mean, however, that Cavendish bananas are resistant to the many other pests and diseases that threaten banana production all over the world every day.

Cavendish bananas are already under attack from the 'Black Sigatoka' fungus, but farmers are able to control the threat with constant pesticide sprayings. ((4)) This process is expensive and inefficient, but effective. There has been recent talk of a greater threat: a new version of the Panama disease, known as "race-4," to which the Cavendish is particularly susceptible. Race-4 is such a threat because it is a soil-borne fungus resistant to all known fungicides. ((3)) Most of the bananas that reach American shores are grown in Latin America, which has not yet felt the effects of race-4—which is why many are concerned that it is only a matter of time before the fungus finds its way to the New World. ((5)) News articles quoting scientists have predicted that we will lose the Cavendish within 5-10 years.

However, there is hope. After the initial swell of concern for the Cavendish, other scientists spoke out, claiming that, while we should be concerned for the Cavendish's future, all is not lost. The UN's Food and Agriculture Organization (FAO), responding to the media's attention to the issue, advised that growers increase genetic mixing of the hundreds of other forms of bananas in order to prevent any kind of widespread extinction. ((2))((6)) The FAO claimed that the many small-scale farmers growing other varieties of bananas, ones not threatened by many of the diseases attacking the Cavendish, will be integral in providing bananas capable of replacing the Cavendish if the many threats, most notably race-4, wipe it out. As long as growers remain aware from now on of the dangers in the mass production of genetically non-diverse fruit, says the FAO, we will not run into a problem on a crisis level. ((6)) Other scientists claim that even if race-4 were to spread to Latin America, it could be controlled through highly advanced measures of containment and plantation placement. ((7))

Though some organizations, like INIBAP (the International Network for the Improvement of Banana and Plantain), claim that not enough work is being done to ensure the future of the banana as the 4th most economically important fruit source in the world, scientists are working toward solutions to these problems. ((8)) Some, as the FAO suggests, are working with the hundreds of other kinds of bananas to find a hybrid replacement. Others are working with the Cavendish itself, attempting to use biotechnology to manipulate its genetics in order to produce a more disease-resistant fruit. ((1)) Will the Cavendish suffer the same fate as the 'Gros Michel' did in the 1960's? The answer, it appears, is not important, as long as the world is open to some change. While efforts have not yet produced that 'perfect' banana, there are many promising beginnings, such as a sweet banana resistant to Black Sigatoka, thanks to some genetic material from radishes. ((1)) Variety, in any case, is a good thing. We may lose the Cavendish, but the banana will go on.

1) Article in Popular Science
2) Banana Information Page
3) Wikipedia's entry on bananas
4) Biotech's article on extinction
5) Article on the debate of the issue at PRWeb
6) FAO release
7) Article on the debate at EurekAlert
8) The INIBAP's homepage


Marathons: Going the Distance
Name: Brom Snyde
Date: 2005-12-16 13:14:12
Link to this Comment: 17440



Biology 103

2005 Final Paper

On Serendip


26.2 miles: every year hundreds of thousands of people on six continents attempt to run this distance. Running marathons has become a popular pastime in the United States; fuelled by the health and fitness movement sweeping the country, the number of entries continues to grow. The marathon is a test of endurance, and to be properly prepared for it one must understand the biology behind the marathon, because running twenty-six miles without that knowledge is a dangerous and potentially fatal proposition.
Having twice attempted to run a marathon (finishing the first one, dropping out of the second at mile 18), I have personally experienced "the wall." For me, hitting "the wall" meant that around mile 21 I started feeling as if my legs had lead weights chained to them, each step an excruciating and exhausting process. "The wall" takes different forms for each individual, sometimes resulting in being unable to feel one's feet, loss of muscle control, and an inability to perform some mental processes, such as forgetting what mile one is on or being unable to read a watch or calculate pace. (1) In biological terms, "the wall" marks the point at which the body has broken down and derived all the energy it can from the glycogen at its disposal. Glycogen is a polymerized form of glucose, a string of glucose molecules. (2) The maximum amount of glucose stored as glycogen for the average person is around 2,000 calories' worth. On average, most people burn around 100 calories per mile, so by mile 20 all of the available glucose stored as glycogen has been burned and the body is looking for another source of energy. At this juncture, fatty acids become the source of energy. The process of breaking fatty acids down into usable energy is less efficient than breaking down glucose; it requires more oxygen than glycogenolysis, thus making the heart work harder to deliver the necessary oxygen. (1) (3) As the heart works harder to pump oxygenated blood to the muscles screaming for it, lightheadedness may occur (the brain is not getting enough oxygen) and various muscle groups do not work as well due to the depletion of glycogen stores.
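The mile-20 figure falls straight out of the two averages quoted above. Below is a minimal Python sketch of that arithmetic, assuming the round numbers from this paragraph (2,000 calories of stored glycogen, 100 calories burned per mile); neither is a measured value for any particular runner.

    # Back-of-the-envelope "wall" estimate from the averages quoted above.
    # Both constants are this paragraph's round numbers, not measured values.
    GLYCOGEN_CALORIES = 2000   # approximate energy stored as glycogen
    CALORIES_PER_MILE = 100    # approximate energy cost of running one mile

    def predicted_wall_mile(glycogen=GLYCOGEN_CALORIES, per_mile=CALORIES_PER_MILE):
        """Mile at which glycogen runs out and fat becomes the main fuel."""
        return glycogen / per_mile

    print(predicted_wall_mile())  # 20.0 -- matching the mile-20 "wall"

A runner who burns more calories per mile, or who starts with smaller glycogen stores, would hit the wall correspondingly earlier, which is part of why "the wall" arrives at different miles for different people.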
There are ways to combat hitting "the wall." Many runners "carbo-load" (eat foods rich in carbohydrates in the days before a marathon) in an attempt to maximize the amount of glucose stored as glycogen in the muscles. "Carbo-loading" can be augmented by eating during marathons; most marathons have stations every few miles with power gels (rich in glucose), cookies, and candy. Another strategy involves using different sets of muscles, slightly varying the pace and stride length, thereby burning the glycogen in more muscles and allowing the glycogen supplies to last longer. As a runner quickens the pace, fast-twitch muscle fibers are utilized. These fibers have not been used as much as slow-twitch fibers during the race, so a runner who is able to use them has another source of glycogen, thus holding off the shift to breaking down fat a little longer. (1) (5) Training regimes attempt to acquaint the body with the transfer from breaking down glycogen to breaking down fat. Repeated exposure to this phenomenon helps runners prepare mentally for when it occurs in the race; forewarned is forearmed. If a runner is willing to put in the miles, between 100 and 140 a week, it can aid the body in adapting to breaking down fat when glycogen supplies are exhausted. (5)
Most runners, and particularly marathoners, fear dehydration. This fear causes many runners (thirteen percent, in a study of 488 runners in the Boston Marathon) to suffer from a condition called hyponatremia. Hyponatremia occurs when too much water is consumed, creating very diluted blood with very low blood sodium levels. In periods of intense exercise the kidneys cannot get rid of the excess water, so continuing to drink water results in the dilution of the blood and water moving into cells. If water is moving into cells, those cells swell. If this occurs in the brain, the brain cells will expand, pressing against the skull and compressing the brain stem, thereby preventing the performance of vital functions like breathing and possibly causing death. This problem has become more prevalent as the number of marathon entrants rises. Most of these new entrants are not world-class athletes, taking more than four hours to finish the race. The length of time they are running, the frequency of water stations, and the fear of dehydration all contribute to many suffering from hyponatremia. (1) (4)
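The dilution effect is easy to see with round numbers. The toy Python model below assumes roughly 42 liters of total body water and a normal serum sodium of about 140 mmol/L; these are illustrative textbook figures, not values from the Boston study cited above.

    # Toy dilution model of hyponatremia. The 42 L of body water and the
    # 140 mmol/L baseline are illustrative round numbers, not study data.
    def serum_sodium_after(excess_water_liters,
                           body_water_liters=42.0,
                           baseline_mmol_per_liter=140.0):
        """Sodium concentration after retaining excess water with no added sodium."""
        total_sodium = baseline_mmol_per_liter * body_water_liters
        return total_sodium / (body_water_liters + excess_water_liters)

    # Retaining about 3 L of unexcreted water already drops sodium below the
    # ~135 mmol/L threshold commonly used to define hyponatremia.
    print(round(serum_sodium_after(3.0), 1))  # 130.7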
There are factors beyond training and knowledge of running which can affect performance in events like the marathon. At the most elite levels of distance running, East Africans, particularly Kenyans and Ethiopians, dominate. The Kalenjins, a tribe in Kenya of 500,000, "win 40 percent of top international distance running honors...three times as many distance medals as athletes from any other nation in the world." (6) While environmental factors, like elevation (the higher the elevation, the less oxygen available, so one's body conditions itself to deal with situations where oxygen is less available, like during a race), affect the development of athletes, the Kalenjins' dominance points to factors beyond the environmental. If it were just a matter of training at higher elevation, American athletes could all train in Colorado and be competitive internationally. In laboratory tests where Kenyan runners were compared with Scandinavians, another group which at one point dominated distance running, the Kenyans' muscle fibers contained more mitochondria (where glucose combines with oxygen to produce energy) and more capillaries. (6) The greater number of capillaries means more oxygen is delivered to the mitochondria and more energy is produced. With more mitochondria and capillaries than the Scandinavians, the Kenyans are more efficient, each breath resulting in a greater production of energy. Anyone can have an increased number of capillaries and mitochondria, but this trait seems to appear more frequently in East African populations, particularly the Kalenjin tribe. While having the right genes does not make an elite distance runner, having the genetic predisposition for more efficient aerobic exercise, as the Kalenjins apparently do, certainly helps.
Biology helps us prepare for and succeed in running marathons, but there are factors beyond biology's current scope that affect one's ability to run a marathon. Where does the desire to put oneself through 26.2 miles of excruciating physical exertion come from? From a biological perspective, one is depleting one's body's energy to frighteningly low levels and weakening one's immune system for no discernible benefit. Why humans do things like run marathons, and whether there is a deep-seated, undiscovered biological need to do so, are questions biology must tackle as it moves into the 21st century.

1) Marathon and Beyond, Sara Latta, "Hitting 'the Wall'," 2003

2) Kimball's Biology Pages, "Glycogen"

3) "Marathon (sport)"

4) New York Times, Gina Kolata, "Study Cautions Runners to Limit Their Water Intake"

5) Boston Globe, Judy Foreman, "Is there a limit to how fast, long someone can run?" April 13, 2004

6) Run-Down, Jon Entine, "Shattering Racist Myths: The Science Behind Why Kenyans Dominate Distance Running"


Circumcision
Name: Iris Mejia
Date: 2005-12-16 16:57:36
Link to this Comment: 17443



Biology 103

2005 Final Paper

On Serendip


Male circumcision has been dated back to 2500 B.C. in Egypt [(1)]. It has been performed as a ritual in many cultures, notably in Judaism as a symbol of the covenant. Its spread into the English-speaking world around the 19th century, however, is attributed to the outrage directed against masturbation and the perceived, unproven diseases associated with masturbating. It is an extremely common procedure around the world. Most recently, discussion about male circumcision has centered on debates about the possible benefits and risks of the act. There remain significant differences in the way male and female circumcision are addressed, mainly due to the lack of benefits for female circumcision. With 60 to 70% of American newborn males being routinely circumcised [(1)], I will look at the procedure's prevalence, mainly the reasons why it may persist at such high rates, by examining its costs and benefits.
A discussion about circumcision is incomplete without knowing what it is. Male circumcision is the removal of the foreskin, which covers the glans of the penis and gradually retracts over the course of a child's development. The cutting can be done with numerous tools. Depending on the method used for the circumcision, different levels of complications may occur, which are cited as risks of the operation. If performed correctly, the rate of complication in America is estimated at between 0.2 and 2% [(2)], which is quite small compared to the incidence in the developing world. Other costs of male circumcision include pain, infection and bleeding, among a list of others, which can be easily treated and don't normally occur in operations performed by medically qualified individuals. Some complications are deemed severe, including laceration of the glans as well as death, but they are extremely uncommon. Opposition to circumcision also arises from the belief that it reduces sexual enjoyment in males, but one study actually found a slight increase in sexual dysfunction in men who had not been circumcised [(2)]. The most compelling reason against circumcision is its similarity to female circumcision, which tends to be looked at as mutilation. Mutilation in itself can be seen as positive or negative: yes, a part of the body is being removed, but circumstances may deem it necessary or beneficial.
Research shows that circumcision lowers the risk of certain diseases, including STDs. Chief among the STDs being investigated are HIV/AIDS and HPV. The International Agency for Research on Cancer found that "the odds that circumcised men had penile HPV infection were about 60% lower than the odds that uncircumcised men had this diagnosis" [(3)]. Circumcision also lowered the transmission of HPV to women, but mainly in men exhibiting risky sexual behavior. Research into a link between circumcision and AIDS began due to the puzzling difference in the rate of AIDS in America and Africa. Scientists looking for an explanation discovered that "uncircumcised men run a greater risk of becoming infected by AIDS" [(4)], which is consistent with the fact that a very large percentage of American males were circumcised. Other research showed that not being circumcised increased the risk of AIDS infection by 5 to 8 times [(3)]. An analysis that pooled data from 28 studies had similar findings, though it differed in finding that circumcision before the age of 12 decreased the risk of HIV while circumcision at 13 years or older did not [(5)]. The lower risk has been associated with the lack of foreskin, which typically provides a moist protective environment that would allow viruses to live longer, but the previous finding suggests it must be more than just the lack of foreskin.
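Since the HPV finding is stated in terms of odds, a short sketch of how an odds ratio is computed may help make "60% lower" concrete. The 2x2 counts in the Python example below are invented for illustration; they are not data from any of the studies cited.

    # What "about 60% lower odds" means, using a made-up 2x2 table.
    # These counts are hypothetical, not data from the cited studies.
    def odds_ratio(exposed_cases, exposed_noncases,
                   unexposed_cases, unexposed_noncases):
        """Odds of disease in the exposed group divided by odds in the unexposed."""
        odds_exposed = exposed_cases / exposed_noncases
        odds_unexposed = unexposed_cases / unexposed_noncases
        return odds_exposed / odds_unexposed

    # Hypothetical: 20 of 520 circumcised men infected vs. 45 of 505 uncircumcised.
    print(round(odds_ratio(20, 500, 45, 460), 2))  # 0.41 -- odds roughly 60% lower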
In the Jewish community, the near-absence of penile cancers led to a perceived correlation between not being circumcised and penile cancer. Circumcised individuals have generally not been found in any case of penile cancer; in five studies of the disease, "all penile cancers occurred in uncircumcised individuals" [(2)]. Due to the low likelihood of developing penile cancer, however, circumcision is not advocated for prevention. It is considered an option for males with phimosis, a condition that prevents the retraction of the foreskin in uncircumcised males and is considered a risk factor for penile cancer. Circumcision is indicated for loosening the foreskin, but this can also be achieved by self-loosening, in which the male performs foreskin-stretching exercises. Recent studies of penile cancers take into account the different forms of the disease in relation to circumcision. The results showed that less severe forms of the disease were found in circumcised men and that circumcision offers no protection against those forms [(6)].
A benefit of circumcision that has not met with contrary results is that it lowers the risk of urinary tract infections. The percentage of the decrease varies, but even granting a universal percentage, it can be argued that the benefit is not considerable, since the infection affects males in the early stages of life and is easily curable.
Studies on male circumcision attempt to show that the costs and benefits balance one another out, but for parents concerned with particular diseases, the benefits for those diseases may matter most, and hence they opt for circumcision. However, this assumes parents have been educated about the risks and benefits of circumcision, which doesn't seem to be the case. Many parents are influenced by their culture, including religious and societal reasons. As I mentioned earlier, circumcision is a part of Jewish custom. In certain cultures, males who have not been circumcised are treated differently. For example, among the Karembola of southern Madagascar, circumcision is an ancestral blessing, and during funerals uncircumcised men are marked [(7)]. A societal reason includes parents wanting their children to be accepted into their culture: in places where circumcision is "normal," many parents circumcise their children so that they will fit in.

1. Dritsas, Lawrence S. Science, Technology, & Human Values. Vol. 26, No. 2.
2. Hirji, Hassan. Male Circumcision: A Review of the Evidence. JMHG. Vol. 2, No. 1.
3. Lane, T. Circumcision May Lower Risk of Both Acquiring and Transmitting HPV. Perspectives on Sexual and Reproductive Health. Vol. 34, No. 4.
4. Marx, Jean L. Circumcision May Protect Against the AIDS Virus. Science. New Series, Vol. 245, No. 4917.
5. Hirozawa, A. In Sub-Saharan Africa, Circumcised Men Are Less Likely than Uncircumcised Men to Become Infected with HIV. International Family Planning Perspectives. Vol. 27, No. 2.
6. Benatar, Michael, and David Benatar. 2003. Between Prophylaxis and Child Abuse: The Ethics of Neonatal Male Circumcision. The American Journal of Bioethics. Vol. 3, No. 2.
7. Middleton, Karen. Circumcision, Death, and Strangers. Journal of Religion in Africa. Vol. 27, Fasc. 4.


My Mind Says What My Words Do Not Think
Name: Scott Shep
Date: 2005-12-18 22:10:17
Link to this Comment: 17449



Biology 103

2005 Final Paper

On Serendip


For thousands of years, philosophers, theologians, biologists, and more recently psychologists have tried to understand, conceptualize, and define what it means to be conscious. For some thinkers the conscious state is inextricable from language, and the blooming, buzzing confusion that is perception takes on the form of consciousness because of the infusing structure of language. Aphasia, a disorder in which some or all linguistic capabilities are lost, and split-brain disorders have exemplified the complex ontology of consciousness, because even cases which seem to isolate language functions in the brain from the other aspects of the conscious state do not properly do so. The modern trend of science to reduce biological phenomena to isolating schemas has been challenged by a post-modern movement which aims to re-align the goals of science toward discovering the manifold relationships between physical, biological, and psychological interpretations of consciousness. Although strict, modern biological approaches claim that language is a secondary aspect of consciousness, the post-modern approach understands that language cannot be stripped from consciousness, because doing so begs the question of what consciousness is. As disparate disciplines perpetuate their isolated growth, they will continue to improperly conceive of the mutual incorporation of consciousness and language.
Biology can benefit from isolating problems and phenomena, but it must always remember that this only describes a model of an organism—it does not describe the reality. Aphasia is an umbrella term for disorders of language production and reception, and it usually occurs in people who have had damage to the left side of the brain. Strokes, physical trauma, and brain tumors are among the most common causes of aphasia. (2) Each case of aphasia is different, but pathologists have come up with many differentiated categories in order to specify an aphasic's language abilities. The chief distinction between aphasia disorders is between the ability to understand language and the ability to produce it. The varying degree to which an aphasic can fluently speak sentences, mimic, tell stories, answer questions, and re-create aspects of syntax and grammar not only conveys the amount and types of damage the aphasic has incurred but also conveys how multi-faceted language is in the mind and brain.
The two major types of aphasia are Broca's aphasia and Wernicke's aphasia, where "Broca's aphasia results from damage to the front portion of the language dominant side of the brain", and "Wernicke's results from damage to the back portion of the language dominant side of the brain" (4). Agrammatic speech often follows from Broca's aphasia, meaning that a person has difficulty creating appropriate syntax and grammar. This does not mean, however, that they are unable to comprehend situations, emotional complexity, and narratives. In fact, Broca's aphasics often seem very troubled precisely because the damage is fairly limited to re-creating language rather than understanding it. Here is one example of a Broca's patient telling the story of Cinderella in words:
"Cinderella...poor...um 'dopted her...scrubbed floor, um, tidy...poor, um...'dopted...Si-sisters and mother...ball. Ball, prince um, shoe... Scrubbed and uh washed and un...tidy, uh, sisters and mother, prince, no, prince, yes. Cinderella hooked prince. (Laughs.) Um, um, shoes, um, twelve o'clock ball, finished" (4)

This patient was also able to answer other questions about the story in the same punctuated, agrammatic way. This capability showed some degree of comprehension, which is usually lacking from a patient with Wernicke's aphasia. This is an example from a person who is attempting to describe a picture of a child taking a cookie.
"can't tell you what that is, but I know what it is, but I don't now where it is. But I don't know what's under. I know it's you couldn't say it's ... I couldn't say what it is. I couldn't say what that is. This shu-- that should be right in here. That's very bad in there. Anyway, this one here, and that, and that's it. This is the getting in here and that's the getting around here, and that, and that's it. This is getting in here and that's the getting around here, this one and one with this one. And this one, and that's it, isn't it? I don't know what else you'd want" (4)

Wernicke's patients have a fluency to their speech, and they are often able to intone sentences more normally, but they lack the ability to comprehend information, so while the form of their speech may be more natural, the content is often nonsensical. Broca's patients are often aware on some internal level that they cannot make language work, but Wernicke's patients are often unaware that they are unintelligible and do not show signs of synthesizing information at higher levels of thinking (4).
In order to make sense of this very intriguing distinction between the two major types of aphasia, Michael Ullman has proposed a model that differentiates between declarative memory and procedural memory. Loss of declarative memory is connected to Wernicke's aphasia because, although these patients can still carry out the linguistic procedure (the unconscious ability to speak through grammar), their sentences do not hold meaningful content as those of Broca's aphasics do. Declarative memory is also called lexical memory because it refers to the ability to connect the arbitrary system of language with meaningful ideas, objects, and concepts. By distinguishing these two types of memory, one can conclude that the back part of the language section of the brain is more responsible for declarative memory, whereas the front part is more responsible for procedural memory. This is why damage to each of these parts of the head creates such different types of aphasia. As useful as this model is, it is most important to understand that this modernist move to palimpsest the mind onto the physical organ—the brain—does not ultimately explain what consciousness is. It is merely another set of results that must be combined with all others in order to get at the best idea of the whole picture—the whole, however, will always be more than the sum of its parts. (4)
In the evolution of language, it seems that verbal syntax came after a sort of protolanguage which allowed pre-human beings to communicate, but the movement from a protolanguage to language can never be fully understood. To compare the internal experiences of a consciousness with language and a consciousness with protolanguage is a paradox: to know how the two compare, one would have to stand in neither type of consciousness. (1) The relationship between consciousness and language is one that misunderstands itself in conceiving of the two ideas as separate to begin with. When psychology burgeoned from a study of the brain into a study of the mind, it continued to aspire to be as reputable a science as physics or mathematics—Newton's laws burned in the hearts of men and made them believe in atomistic truths. (5) As aphasia continues to be studied in an attempt to isolate language from consciousness, scientists must continue to realize that one can never understand a phenomenon by dissecting it and looking only at its parts. New studies will continue to classify and represent the ways in which language is connected to parts of the brain, but to explain consciousness and language as two separate entities is a doomed process that gets caught in itself the moment it begins.


Bibliography

1) Consciousness, Communication, Speech

2) Aphasia Description

3) Aphasia and Parrhesia: Code and Speech in the Neural Topographies of the Net

4) Mind and Brain

5) Postmodern Psychological Approach

6) Aphasia Fact Sheet

7) New Agenda for Studying Consciousness


Why Can't My Dog Hear Me?: A Study of Congenital D
Name: Lizzy de V
Date: 2005-12-20 20:50:20
Link to this Comment: 17454

Biology 103
2005 Final Paper
On Serendip

When I was ten years old, my parents bought me a three-month-old Cavalier King Charles Spaniel puppy whom I named Mickey. Mickey was a very happy little dog and was constantly doing things that everyone found extremely humorous. One of his funniest "tricks" took place every night when I practiced the violin. Mickey would come to the room in which I was practicing, lie down and howl with his nose toward the sky, like a wolf howling at the moon. This usually took place when I played on the E string, which carries the highest notes on the instrument. At first we thought that the music must be bothering Mickey, that the high-pitched noises were hurting his ears. But Mickey insisted on always being in the room where I practiced. He never seemed to be in pain, and never left. It was almost as if he was singing along with my performances of Vivaldi and Bach.

Around five years after we got Mickey, we began to notice that he frequently was not responsive to his name being called. We thought that perhaps he was just a bit defiant, but after various homemade tests, it seemed clear that he was losing his hearing. If his back was turned, Mickey rarely responded when pots and pans were banged together. He never sang along anymore when I practiced the violin. It seemed crazy that such a young dog would be going deaf, but the veterinarian was able to verify it. She covered Mickey's eyes and brought him into the back of her office, where various sick dogs barked and whined. He didn't appear to hear any of them.

Deafness in dogs can have many of the same causes as it does in humans. It can be genetic, or acquired in a number of ways. Drug toxicity (also called ototoxicity), the damaging effect of administering a drug or chemical, can directly or indirectly damage cochlear hair cells. This may result in hearing loss or total deafness. Aminoglycoside antibiotics (including gentamicin, kanamycin, neomycin, tobramycin and others), which are sometimes the only treatment for a life-threatening infection in dogs, are the drugs that most commonly cause ototoxicity. Ear-cleaning solutions containing chlorhexidine and other less common chemicals may also cause deafness; these solutions have since been taken off the market. Drug toxicity can also be vestibulotoxic, disturbing the dog's sense of balance, giving it a head tilt and sometimes causing it to walk in circles. (1) Like aminoglycoside antibiotics and chlorhexidine, general anesthesia may also cause deafness. While the mechanism has yet to be established, it may be that after receiving general anesthesia, the dog's body sends blood away from the cochlea to shield other critical organs, or that the positioning of the dog's jaw constricts the arterial supply and keeps it from reaching the cochlea. Deafness sometimes occurs after a dog has gone under anesthesia for an ear or teeth cleaning. (1)

Two additional possibilities for acquired hearing loss are noise trauma and ear infections. Depending on the volume of a noise, temporary or permanent hearing loss can result. The middle ear has small muscles which "reflexly contract to reduce sound transmission into the inner ear in response to loud sounds and prior to vocalization," which helps with sustained or continuous noise. Percussive noises (gun fire, explosions), though, are too quick for the middle ear muscles to protect the inner ear, and hair cells and support cells are disrupted. Infections of the middle ear (otitis media) or inner ear (otitis interna) can also cause deafness. Both types of infection can leave behind "crud" which blocks sound transmission. In the case of otitis media, the body can clear out this "crud" and hearing can gradually improve. Otitis interna, however, will result in permanent nerve deafness if it is not treated right away. (1)

More common than acquired deafness in dogs is inherited deafness, caused by an autosomal dominant, recessive or sex-linked gene. This deafness usually develops in the first few weeks of life, while the ear canal is still closed. It occurs when the blood supply to the cochlea degenerates and the nerve cells of the cochlea die. (2) Congenital deafness has been found in over 85 breeds of dog, with high prevalence in the Australian Cattle Dog, the Australian Shepherd, the Bull Terrier, the Catahoula Leopard Dog, the Dalmatian, the English Cocker Spaniel and the English Setter. (3) If you are familiar with dog breeds, you may notice that these breeds, along with most of the other breeds that suffer from congenital deafness, have some white pigmentation in their coats. It has been suggested that the degeneration of the blood supply to the cochlea is associated with the lack of pigment-producing cells (melanocytes) in the blood vessels. (2)

Inherited congenital deafness in dogs is associated with two pigmentation genes, the merle gene and the piebald gene. The merle (dapple) gene causes dogs to have coats with a mingled or patchwork combination of light and dark areas, and it is dominant: dogs carrying the merle gene will show the pigmentation pattern on their coats. When two dogs heterozygous for the merle gene are bred, 25% of their puppies will not have the merle pigmentation pattern but rather a solid white coat and blue irises. These puppies are often deaf and/or blind, and sterile; pigmentation has been disrupted and has produced deaf dogs. (2) The piebald and extreme white piebald pigment genes are less well understood. The Dalmatian, a breed with a 29.9% deafness prevalence (10), has the extreme white piebald pigment gene, which affects the amount and distribution of white areas on the body of the dog. The genetic pattern of deafness in Dalmatians has led to a great deal of confusion. Deaf puppies have resulted from hearing parents, so deafness does not appear to be autosomal dominant. Pairs of deaf Dalmatians have been bred and produced bilaterally hearing and unilaterally hearing puppies; if deafness were simply recessive, all of those puppies would have been deaf. It is, however, possible that there is a multi-gene cause for deafness in dogs with the piebald pigment genes, such as the existence of two different autosomal recessive deafness genes. (5)
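
The 25% figure for the heterozygous merle cross follows from basic Mendelian bookkeeping, as the short Python sketch below shows. It assumes a single locus with a dominant merle allele "M"; real coat-color genetics involves more loci, so this is purely illustrative.

    # Punnett-square count for the Mm x Mm merle cross described above.
    # Assumes a single locus with dominant allele 'M'; purely illustrative.
    from itertools import product
    from collections import Counter

    def cross(parent1="Mm", parent2="Mm"):
        """Fraction of offspring genotypes over all egg/sperm allele pairings."""
        counts = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
        total = sum(counts.values())
        return {genotype: n / total for genotype, n in counts.items()}

    print(cross())  # {'MM': 0.25, 'Mm': 0.5, 'mm': 0.25}
    # The 25% homozygous "MM" (double merle) puppies are the solid-white,
    # blue-eyed ones at high risk of deafness described above.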

If you are concerned that your dog may have lost its hearing, there are a few ways to test for deafness. While still in the litter, a deaf puppy may play or bite more aggressively than the others because it does not hear that the other puppies are yelping in pain. A deaf puppy may also not wake up at feeding time unless it is bumped by a littermate. Later in life, you will notice that a deaf dog does not respond when it is called or when a noise is made. When it is sleeping, far away, or not looking at you, it will not acknowledge that you are calling for it. Home tests for deafness can include rattling a can of coins or keys, squeaking a toy, turning on a vacuum cleaner or ringing the doorbell. The BAER test (Brainstem Auditory Evoked Response) is the only 100% reliable test for deafness in a dog. The test is not painful and can be performed on dogs at least six weeks of age. In the BAER test, a computer records the electrical activity of the brain in response to sound stimulation, measuring the same range of hearing as in human infants. It does not measure the full range of canine hearing, but it is able to determine whether a dog has hearing within the normal human range. (4)

Deafness in dogs cannot be cured or treated, but it can be dealt with. Because dogs that are bilaterally deaf can often startle easily and are difficult to train, many are euthanized as puppies. (6) This solution is cruel and absolutely unnecessary. Dogs can be trained to respond to hand signals for approval, punishment and activities such as eating and going outside. The greatest challenge is getting a deaf puppy's attention. Until a signal connected with "attention" is created, the deaf puppy has no "name". Technology can be of help in communicating with the deaf dog. Flashlights are useful in the evening hours, but are of limited use during daylight. During the day, a laser pointer can be used as a way of getting the dog's attention. (7) The possibility of developing hearing aids for dogs where residual auditory function remains has been researched. Dr. A.E. Marshall at Auburn University in Alabama placed human hearing aids in a collar-mounted container, and led a plastic tube from the aid that terminated in a foam plug placed in the ear canal. Smaller dogs tolerated the presence of a foam plug in the ear better than large breeds. These units, however, are generally not favored by veterinarians. (8)

In the case of my own dog, Mickey, my research has led me to believe that his deafness is genetic, not acquired. According to CavalierHealth.org, progressive hereditary deafness is prevalent in Cavalier King Charles Spaniels. Hearing loss generally begins during puppyhood and progresses until the dog is completely deaf, usually between the ages of three and five years, which is when we noticed Mickey could not hear. (9) I am personally shocked at the suggestion of many websites that a deaf puppy should be euthanized. None of my research indicates that a deaf dog has an inferior quality of life to that of a dog with full hearing. Mickey seems not to notice at all that he cannot hear. He goes along with his happy dog life, eating, sleeping and licking, and communicates with us as well as any other dog I have ever met. Just as is the case with humans, no animal should be selected for life over another according to its health. People all over the world are willing to love all dogs, blind, deaf, healthy and sick. Every person and animal should be allowed to love and to give love in return.

WWW Sources

1)Causes of Sudden Onset of Deafness, from the Louisiana State University website
2)Genetics of Deafness in Dogs, from the Louisiana State University website
3)Dog Breeds With Reported Congenital Deafness, from the Louisiana State University website
4)Frequently Asked Questions, from the Deaf Dog Education Action Fund website
5)Congenital Deafness and its Recognition, from the Louisiana State University website
6)What is Deafness?, from the Canine Inherited Disorders Database
7)Why the Deaf Dog Barks, from the Click and Treat website
8)What About Hearing Aids?, from the Louisiana State University website
9)Deafness is Hereditary and Progressive in Cavalier King Charles Spaniels, from the CavalierHealth.org website
10)Breed-Specific Deafness Prevalence In Dogs (percent), from the Louisiana State University website
11)How Well Do Dogs and Other Animals Hear?, from the Louisiana State University website


Did Dinosaurs Exist?
Drawing out Leviathan

Name: Norma Alts
Date: 2006-01-02 09:35:08
Link to this Comment: 17509



Biology 103

2005 Final Paper

On Serendip


How do we know that dinosaurs existed? As humans, we have (and, some would argue, are shaped by) cultural forces and economic interests. Is science contingent on inherently subjective perspectives? How can scientists remove themselves from their cultural ideologies and material interests to make unbiased claims about dinosaurs?


In Drawing out Leviathan, Keith Parsons attempts to mobilize the history of dinosaurs in science to support the validity of traditional scientific methods and discourses. Parsons portrays the "science wars" - the ongoing disputes about the subjectivity of science - as a battle between rationalist scientists and constructivists, whom Parsons claims are committed to one or both of two theses. First, all modes of knowledge are "necessarily relative and parochial" - paleontological methods or factual claims have no more objective validity than Greek mythology (Parsons 82). Second, even if rational standards exist (or are constructed), consensus is based on conflict and negotiation rather than on these standards. He describes his book as "another shot fired in the science wars," which he claims that "rational people have a duty to win" (Parsons xv). The point of this book review is not to determine who won (or should win) the science wars, but rather to summarize and critique Parsons' argument, and to extrapolate lessons about how to approach the constructivist-rationalist debate.


Despite his negative (and, at times, mocking) portrait of constructivists, Parsons does not dismiss all arguments about the influence of culture on science. Indeed, one of his examples, the Carnegie Museum's decision to put the head of the wrong kind of dinosaur on its prize skeleton, shows that the museum's desire for public acclaim led to a decision that scientific consensus would later refute. However, Parsons claims that "rationalists hold that in the long run... science can transcend ideology and politics and achieve the rigorous constraint of theory" by observing or interacting with nature (Parsons 81).


Central to Parsons' argument are four case studies of episodes in the history of dinosaurs in science, and an examination of the theories of constructivist Bruno Latour. In these, Parsons does an excellent job of engaging closely with relevant scientific theories, historical events and constructivist arguments. His contention that, despite the significant influence of social factors, "reason and evidence, the traditional 'scientific' factors, also modeled every step" of the aforementioned Carnegie episode is well documented, as is his argument that similar methods of rationality prevailed in all of David Raup's work. The latter case is significant because (Parsons argues) David Raup's "conversion" to accepting the argument of a group of scientists, including Luis and Walter Alvarez, that dinosaur extinction was caused by a large asteroid resembles a Kuhnian paradigm shift. Thomas Kuhn's theory, often invoked by constructivists, claims that scientific standards, methods and theoretical commitments periodically shift radically - and, in Parsons' words, that scientists "experience something like a religious conversion" (Parsons 52). Parsons uses his analysis of Raup to argue that scientists generally have a "wide array of broadly shared and deeply grounded standards, criteria, methods, techniques, data, etc." that allow them to make "fully rational decisions" about whether to accept or reject theories (Parsons 78, emphasis in original). In brief, Parsons contends that scientific rationality remains consistent over time.


However, Parsons would do well to say more about what rationality - the school of thought that he sets out to defend - means. In relation to Raup, whose work he argues was "traditionally rational," Parsons defines rational theory change as being "persuaded.... by evidence and arguments based on... broadly shared standards, criteria, methods, techniques, and so on," and claims that it is present "across theories and disciplines" (Parsons 59). But Parsons does not directly expand on what this shared notion and practice of science look like, leaving the reader to fill in the gaps. Given the universal nature of this definition, it can, presumably, be generalized to the rest of Parsons' argument about rationality. By asserting the omnipresence of rationality to challenge Kuhn's concept of shifts, Parsons does not engage with the development of scientific rationality, or allow that it can evolve or vary. He is skeptical of a universal Scientific Method, but sees rationality behind all scientific methods. This broad a claim deserves more explication.


Further, Parsons inappropriately allows scientific methodology to seep into his methods of analyzing paleontology. He gives examples of paleontologists uncovering more bones (and facts) and developing better methodological tools, and claims that constructivists cannot account for progress. Instead, Parsons contends, constructivists discount the standards used to evaluate progress as socially formed and meaningless. Since constructivism cannot "deliver the goods" of adequately accounting for progress, Parsons argues, it will "therefore be abandoned" (Parsons 133). But doesn't this conclusion assume that theories about science will be accepted or rejected rationally? Parsons seeks to prove that science operates rationally - isn't he prematurely concluding that not only science but also common ideas about science operate according to standards of rationality?


Parsons' definition of constructivists (outlined in the second paragraph of this review) risks homogenizing them. While Parsons outlines constructivists' two purported hypotheses clearly and specifically, he provides no real support for generalizing constructivists in this way. He claims that they see science as "sophisticated mythology, the folk beliefs of a tribe of scientists... no more or less true than Zande beliefs in witches" (Parsons ix). Surely some scholars fall between Parsons' (albeit vague) portrait of rationalists and constructivists who would take this extreme an approach. Parsons draws upon an excerpt from David Young's The Discovery of Evolution as the coda for his central chapter, a selection of which appears below:
"A sensible view of science must lie somewhere in between" the rationalist extreme of science as free from human influence and the constructivist extreme that "scientific knowledge is... no more than the expression of a particular social group": "a sensible view of scientific theory must lie somewhere between these two extremes" (Young in Parsons, 104).
Surely much of the interesting (not to mention important) work of the history and philosophy of sciences lies in exploring the terrain between these extremes, and engaging with these possibilities.


Surprisingly, given his attempts to discredit constructivists, Parsons concludes Drawing Out Leviathan with "possible grounds for rapprochement" that "might satisfy the intuitions of rationalists while accommodating the genuine insights of constructivists" (Parsons xx-xxi). Citing Richard Bernstein's interpretations of Kuhn, Parsons suggests that perhaps we can draw on constructivism to show that science cannot be reduced to an algorithmic formula, while realizing that we can use reason to decide what theories to test and to choose between scientific paradigms. Parsons hopes that this model will lead to the end of the science wars (although he does not express overwhelming optimism on this point). But if constructivism is really what Parsons claims it is - if it contends that reasonable standards do not exist and/or that scientific outcomes are always constructed - then this model would not be acceptable to rationalists. Indeed, Parsons' aim - "firing another shot in the science wars" - hardly lends itself to the notion of reconciliation (Parsons xv, emphasis added).


Parsons fears that the "two cultures" in the academy - constructivism and rationality, the literary and the scientific - will make achieving the "traditional goal of a liberal arts education, the formation of a whole person" much harder (Parsons 150). But isn't the process of examining and evaluating conflicting views at the heart of education? If rationalists and constructivists are separated into different divisions, however, it is certainly likely that many students will not put these parts of their educations together.


While this review has criticized Parsons, I do not mean to dismiss his work. His analyses of case studies and of particular constructivists' theories engage closely with the events and texts he examines, and his analysis is always careful and sometimes brilliant. Parsons is able to give a range of readers a sense of the nature of paleontology and of the debates between rationalists and constructivists. If Parsons' book, as he claims, is another shot in the science wars, and if, as I argue, this conflict can be a pedagogical process, then the book is a particularly powerful tool.


Work Cited: Parsons, K. Drawing Out Leviathan: Dinosaurs and the Science Wars. Bloomington: Indiana University Press, 2001.



