
Animal Models of the Brain: Ethical Considerations and Alternatives

By Gillian Starkey
Our decision to discuss animal research arose from several questions that we think are important to consider as scientists who have used, or might use, animal models at some point in our careers. First, we wondered who is in charge of regulating lab animals: who keeps track of the numbers, and who monitors how these animals are treated? Second, we wanted to investigate the written legal restrictions on the use of animals in scientific research. Where is the line drawn between what is acceptable to do to an animal and what isn't? We also observed that when we do animal research, both individually and as a community, we often tend to separate ourselves from thinking about animals as living, breathing (and possibly also thinking and feeling) beings. We wanted our presentation to inspire people to consider different perspectives on the ethics of using animals in scientific research. By exploring all sides of this issue, we can start to think about the larger picture of using non-human animals to test products and procedures that we eventually intend to apply to ourselves.
We also thought it was important to discuss the viability of using animal models of the brain from both optimistic and skeptical perspectives: What can these models tell us…and what can’t they tell us? Are we on our way toward developing more useful alternatives to animal models, and what might these be? Furthermore, we wanted to extend our discussion of ethics to include future models of the brain. Thus, we decided to frame the presentation topic as “Animal Models of the Brain: Ethical Considerations and Alternatives,” in order to encapsulate these questions and engender further class discussion.

An Introduction to Animal Research
The use of animal models in research is not a new scientific phenomenon -- quite the opposite. Galen was dissecting animals in the late second and early third centuries AD. Today, fruit flies, zebrafish, and frogs, all considered "model organisms," are used to study developmental and genetic phenomena. Other animals, particularly mice and rats, are commonly used by psychologists as models of fear, anxiety, and pain. At this point, the use of animals in scientific research is taken as a given. However, it wasn't until fairly recently that lab animals came under legal protection. Anti-cruelty laws began appearing in the United States in the early 1800s, culminating in the 1966 Animal Welfare Act, which established baseline standards for the acceptable treatment of animals regarding housing, transportation, and sanitation.
The Animal Welfare Act also mandated that the government track and record the number and types of animals used for scientific research in the United States. However, the last "thorough" count of this sort was done in 1986, and it was most likely a significant underestimate. More than twenty years have passed without an updated accounting of the total number of animals used for research purposes. Our opinion, with which the class resoundingly agreed, was that these are very valuable numbers that the U.S. government has failed to obtain adequately. It also seems that it would be quite easy to breed one's own animals for research and stay under the government's radar, which suggests that current legislation is insufficient to regulate animal research effectively.

Why Animals?
There are several reasons for using animals in research rather than humans. First, animals are easy to obtain and inexpensive to maintain. Second, animals are easy to regulate; researchers can control food and water intake, weight, and activity, which adds to the reliability of their findings. Additionally, animals breed more efficiently than humans: their gestation periods are shorter, and they usually give birth to several offspring rather than just one. Aside from these practical advantages, however, what is troubling about the tendency to automatically use animals for research is that it rests on an implicit assumption that humans are a "higher species." We are allowed to test highly experimental drugs and procedures on animals that we wouldn't try on humans; furthermore, we expect that medicines we take or products we use have been tested on animals before they were put on the market for humans. To be someone's "guinea pig" is to be their first line of experimentation and to bear all risk of mishap; the expression itself originated in the use of animals for scientific research. Why is there such a double standard for the treatment of animals versus humans? Where did this assumed hierarchy of importance among species (with humans, of course, on top) come from?

Just How Similar Are Animals and Humans?
Aside from the somewhat questionable logic behind why we use animals in research, there have been some serious failures of animal testing that demonstrate the limits of its usefulness. One such incident occurred in the 1950s with thalidomide, a drug tested in animals and subsequently prescribed to pregnant women to prevent morning sickness. Studies of thalidomide in mice, rats, cats, and dogs showed no toxicity or harmful effects. However, when approximately 10,000 babies were born with severe birth defects such as phocomelia (abnormal or stunted limb growth), scientists realized that animal testing had failed to reveal the drug's catastrophic teratogenic effects in humans. The researchers' quickness to generalize findings from animal studies to humans had produced disastrous results.
If this type of mistake is possible, what are animal models of the brain good for? Currently, animal testing is conducted as a precursor to clinical trials in humans. Yet there are obvious and significant differences between the brains of humans and those of non-human animals. Mice and rats have a much smaller neocortex-to-brainstem volume ratio. In primates, the neocortex is larger, but the proportions still differ significantly from those in humans. These basic neuroanatomical differences mean that a finding in an animal brain cannot automatically be generalized to human brains. An understanding of the limits of generalizability is essential for any researcher looking to extend findings from animal studies to humans. Moreover, as we discover further complexities in the human brain, animal models become an increasingly limited approximation.

Restrictions on Human Research
While legislation regarding research with animals may be relatively loose and unenforced, research with humans is much more tightly regulated. The U.S. government developed Institutional Review Boards (IRBs) in response to ethically unsound or questionable research (e.g., the Tuskegee Syphilis Study and Stanley Milgram's obedience experiments). IRBs are established at educational and research institutions and are charged with protecting the rights, welfare, and safety of people participating in human subjects research (HSR). Any proposed study must obtain IRB approval before research is initiated.
An IRB conducts a cost-benefit analysis of HSR proposals, weighing the risk to subjects against the potential scientific benefits. In order to gain IRB approval, a proposed study must involve minimal risk to participants, obtain informed consent from all participants, and maintain confidentiality. In addition, IRBs apply stricter guidelines to more vulnerable populations, including children, the elderly, prisoners, pregnant women, and people of "diminished comprehension." These participants must have a suitable proxy to give informed consent on their behalf, and the guidelines for risk and treatment are more restrictive.
In short, animal research is often unconvincing, and the differences between animal and human brains limit the generalizability of its findings in many ways. Human subjects research, however, is more tightly restricted, and the process takes longer because of the required IRB approval. An ideal research subject, therefore, would be a model that flawlessly mimics human brain functioning but does not require the protective restrictions placed on human research. Fortunately for scientists, this "perfect model" may be on the way.

The Future of Brain Models: The Blue Brain Project
In 2005, researchers at the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland began a collaboration with IBM. Their goal was to create a complete and comprehensive virtual model of the human brain, using the world’s most powerful supercomputers. Led by Henry Markram of EPFL, the team of researchers named this collaboration the “Blue Brain Project.”
The Blue Brain team is taking a bottom-up approach to developing this model: they have compiled decades' worth of research on cellular connectivity in order to assemble a massive network of microchips that functions together as a whole precisely the way the brain does. Each microchip represents one neuron and is programmed to respond to input and form connections to other microchips exactly the way that a specific type of neuron would. When data is fed into the system, it behaves as a group of neurons in the brain would behave. In addition, the Blue Brain computer network is connected to a visualization interface, which translates the circuitry into a three-dimensional visual model that looks like a cluster of neurons; this allows the Blue Brain team to see the virtual neurons' responses to input. Building up the network means adding more microchips and connecting them to the existing network in the way that the human brain is connected. In this way, according to Markram, the Blue Brain is about as precise as it can get: "There are a lot of models out there, but this is the only one that is totally biologically accurate" (Lehrer).
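To make the bottom-up idea more concrete, the following is a minimal sketch of such a simulation, in which each software object stands in for one neuron, receives input, and passes activity along its connections. This is purely illustrative and enormously simplified; it is not the Blue Brain team's software, and every name and parameter in it is invented for the example.

# Illustrative sketch only: a toy bottom-up network in which each unit
# stands in for one neuron, loosely echoing the one-microchip-per-neuron
# idea described above. The Blue Brain models are vastly more detailed.

import random

class Neuron:
    def __init__(self, threshold=1.0, decay=0.9):
        self.potential = 0.0        # current activation (arbitrary units)
        self.threshold = threshold  # level at which the unit "fires"
        self.decay = decay          # leak applied each time step
        self.targets = []           # (neuron, weight) pairs this unit projects to

    def connect(self, other, weight):
        self.targets.append((other, weight))

    def receive(self, amount):
        self.potential += amount

    def step(self):
        """Fire if over threshold, pass activity downstream, then leak."""
        fired = self.potential >= self.threshold
        if fired:
            for target, weight in self.targets:
                target.receive(weight)
            self.potential = 0.0
        else:
            self.potential *= self.decay
        return fired

# "Build up" the network unit by unit, wiring each new neuron to a few others.
random.seed(0)
network = [Neuron() for _ in range(50)]
for neuron in network:
    for target in random.sample(network, 5):
        if target is not neuron:
            neuron.connect(target, weight=random.uniform(0.1, 0.5))

# Feed input into a handful of neurons and watch activity propagate.
for step in range(20):
    for neuron in network[:5]:
        neuron.receive(0.4)  # external stimulation
    fired_count = sum(neuron.step() for neuron in network)
    print(f"step {step:2d}: {fired_count} neurons fired")

Scaling from a toy like this to a biologically faithful model would mean replacing each generic unit with a detailed simulation of a specific neuron type and wiring the units according to measured connectivity, which is where the project's enormous computational demands come from.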
This is an ambitious project, but the Blue Brain team has laid out a series of steps to scale up gradually toward their goal. In December 2006, they completed a model of one neocortical column (NCC) of a rat. The NCC is one of the most complex components of a mammalian brain, so this was no small feat; however, an entire rat brain comprises many NCCs. The Blue Brain team's next step is to model the entire brain of a rat. From there, they will scale up to a cat brain, which is bigger and therefore more complex; then they will model a monkey brain, and finally a human brain. Ultimately, the virtual brain should be able to simulate all of the functions that a real human brain can. Markram says, "If we build this brain right, it will do everything" (Lehrer).
Several potential problems may hinder the Blue Brain team's progress toward this lofty goal. The first is limited technological resources. Building one rat NCC required one of the world's most powerful supercomputers, and the human brain contains thousands of times as many NCCs as an entire rat brain. Markram has calculated that a model of the human brain would need 500 petabytes of storage in order to run. One petabyte is 10^15 bytes, so the virtual human brain would need 500 x 10^15 bytes, which is more than 200 times the amount of data stored on all of Google's servers. There is no guarantee that this much computing power will be available to the Blue Brain team when they need it. In addition to technological restrictions, the team faces limited electrical resources: Markram calculates that running the virtual human brain would cost $3 billion a year. At this point, that kind of energy supply is not available (nor is the money). Even so, Markram argues that, given the rate at which computers are being developed, the team should be able to finish the human brain model within the next ten years or so.
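As a quick back-of-the-envelope check of that storage figure (a sketch using only the numbers quoted above; the comparison to Google's servers is the article's, not computed here):

# Rough scale check of the storage estimate quoted above.
PETABYTE = 10**15                    # bytes in one petabyte
human_brain_model = 500 * PETABYTE   # Markram's 500-petabyte estimate
print(f"{human_brain_model:.0e} bytes")  # 5e+17 bytes, i.e. 5 x 10^17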
Many researchers are still skeptical of the Blue Brain team’s ability to reach this goal so soon, if at all. One such researcher is Christof Koch, who has created simpler and less precise models of the brain before. Koch argues that what is really going to hinder the Blue Brain team is the lack of adequate neurophysiological data. He says that building a comprehensive model of the brain requires an understanding of how specific brain functions relate to specific behaviors; scientists don’t yet have this understanding, which is why animal models are being used for brain research in the first place. In other words, Koch thinks the Blue Brain project will take longer than expected, because it requires information that we don’t yet have.
As feasible (or not) as the Blue Brain team's goal may be, the project raises some important questions and implications. One practical question is: once the Blue Brain is completed, what can we do with it? Several members of the class expressed doubts about the usefulness of translating the effects of drugs into software to test in a virtual environment. To echo Koch's criticism of the project, how will we know what data to feed into the system if we are uncertain about the exact neurophysiological mechanisms of the drugs themselves? It seems that the very knowledge we hope to gain from the Blue Brain is the knowledge we would need in order to build and use it.
Further questions concern Markram's claim that consciousness might arise from the completed Blue Brain. The team is modeling the brain from the bottom up, from its most basic elements to a large network, including every detail. If all the right parts are there, and they are networked in exactly the right way, consciousness should be a possibility. "Consciousness is just a massive amount of information being exchanged by trillions of brain cells. If you can precisely model that information, then I don't know why you wouldn't be able to generate a conscious mind," says Markram (Lehrer). There seems to be some vital difference between a network of neurons and a network of microchips, but what is it? If consciousness does arise from the Blue Brain (which raises a host of further questions about how we should define consciousness and how we would know the model has it), this would undermine Cartesian dualism by showing that the "mind" is just a byproduct of physiological activity. If the Blue Brain is determined not to have consciousness, what is it missing? What fundamental characteristic of neurons gives rise to consciousness that cannot be replicated in cold, hard, data-storing microchips?

Class Discussion Topics (For Further Thought)
The class discussion following our presentation gravitated toward the ethics surrounding animal research. We focused mainly not on what we think is acceptable, but on why. We lamented the lack of adequate statistics on the use of animals in research and generally agreed that obtaining accurate numbers should be a government priority. The class also speculated about why there is such a double standard in how humans and animals are treated in research. Why do humans assume we are a higher species, and that products need to be tested on animals before we use them? Why are there such discrepancies in the regulations for the treatment of humans and animals in research? There are even differences in how different types of animals are treated; for example, we would conduct experiments with fruit flies that we wouldn't dream of doing with primates. The line is presumably drawn at animals that are evolutionarily close to humans, but why have we adopted this hierarchy of species, with humans on top?
On a more productive note, what are our ethical obligations to animals? Many students noted that they have fewer "moral qualms" about conducting experiments on animals than on humans. And how do we balance these ethical obligations, whatever they may be, against the scientific benefit that questionable experiments could potentially yield? In the end, we agreed that it would benefit the scientific community to have some sort of "Scientific Code of Conduct" to use as a guideline for what is and is not acceptable. Extending these questions to future models of the brain: if we create a perfect model of the human brain, and it is determined to have consciousness, how can our moral obligations to it be any different from our moral obligations to other humans? Is it, then, really an "alternative" to human or animal research, as the Blue Brain team intended? I have come to think that it is the level of consciousness we perceive in another being that determines the ethical consideration we give to how we treat it.


References

Amen, A. (2008, March 24). There was a common theme in. Message posted to </exchange/node/2218>.

Graham-Rowe, D. (2007). A Working Brain Model. Technology Review. Retrieved March 4, 2008, from <http://www.technologyreview.com/Biotech/19767/>.

Grobstein, P. (2008, March 22). Scientific code of practice…. Message posted to </exchange/node/2218>.

Lehrer, J. (2008). Out of the Blue. Seed Magazine. Retrieved March 18, 2008, from <http://www.seedmagazine.com/news/2008/03/out_of_the_blue.php>.

Marck, D. (2008, March 22). Human v. Animal Morality. Message posted to </exchange/node/2218>.

Rabinowitz, E. (2008, March 24). Animal Research…. Message posted to </exchange/node/2218>.

Starkey, G. (2008, March 23). This is a huge problem that. Message posted to </exchange/node/2218>.

Thalidomide. Retrieved from <http://www.nlm.nih.gov/medlineplus/druginfo/medmaster/a699032.html>.

The Blue Brain Project. Retrieved from <http://bluebrain.epfl.ch/>.

United States Department of Health and Human Services: Office for Human Research Protections. Retrieved from <http://www.hhs.gov/ohrp/>.

Comments

Paul Grobstein: the need for a scientific code of conduct?

See The Need for a Scientific Code of Conduct, with this discussion linked to from there.