
Faculty Learning Community: Agenda and Notes (October 19, 2009)


The Faculty Learning Community
for Science and Math Education


Agenda and Discussion Notes
(October 19, 2009)

Howard Glasser and Anne Dalke

SUGGESTED READING:
Berliner, D. C. (2002). "Educational research: The hardest science of all." Educational Researcher, 31(8), 18-20.

Lunch served in Dorothy Vernon Room of Haffner Dining Hall

PARTICIPANTS:
Don Barber (Geology Department; Bryn Mawr College)
Anne Dalke (English Department; Bryn Mawr College)
Lynne Elkins (Geology Department; Bryn Mawr College)
Howard Glasser (Education Program; Bryn Mawr College)
Paul Grobstein (Biology Department; Bryn Mawr College)
Bill Huber (Computer Science Department; Haverford College)
Steve Lindell (Computer Science Department; Haverford College)

AGENDA (by Howard Glasser)
1.    Introductions and opening prompt
a.    What is science?

2.    Brief history of the study of learning
a.    Behavioral, cognitive, & other perspectives
b.    “Science” is watching: Is it the academic apex? The brass ring?

3.    Education as a Science?
a.    Campuses often have science faculty who are engaged in education but few, if any, education faculty who are engaged in science: Responses and reactions?
b.    Berliner piece

4.    Will our group generate a concrete outcome, interesting conversation, or both?
a.    Still to be decided, but research and writing in education often require IRB approval and consent forms
i.    Object of study

5.    Assessment

6.    Next meetings
a.    Monday, Nov 2: Anne and ???
b.    New member of dyad takes notes to post on Serendip

MEETING SUMMARY (by Anne Dalke)

Howard began by explaining that we’d selected Berliner’s 2002 essay, “Educational Research: The Hardest Science of All,” for us to read because

1) educational research, in general, is struggling with the ways in which it is using “science” to define and legitimate itself; there are tensions, internal to the field, about what that means;

2) this group, in particular, had begun discussing what form of scientific research we might be conducting here, and whether that might mean the need for us to get IRB approval before we go further in our discussions;

3) a curious comment had been made, in our last session, that while “science faculty are engaged in education, few education faculty are engaged in science”—striking, given that educational researchers view their work as science.

We wrote in response to a prompt -- what is science? -- before Howard walked us through a brief history of the field of education. Seen chronologically, there have been “three constitutive perspectives.” Begun as “philosophical reflections about thinking,” education tried, @ the start of the 20th century, to “position itself as a science.” Three significant stages in this process of conversion from humanities to science included

1) behaviorism: a belief that we could only study what was observable (this did not include thinking or reasoning, which “can’t be seen and are hard to imagine”); the focus here was on stimuli and responses, on “strengthening automaticity.”

But in response to an increasing sense that reasoning, thinking and understanding are real, and an increasing awareness that “how well students understand what is happening can benefit their later learning,”

2) cognitivism emerged, as a search for indirect evidence of “thinking taking place.” Students were re-conceptualized, in this phase of educational research, as active agents, rather than as “stimulus-response mechanics.” But in time, this study of knowing as “bits of information that students can learn to master” gave way in turn to

3) situative learning: a re-conception of knowledge not as “bits to archive,” but rather as an expression of an “interrelated system of individuals, tools,  and environment,” in which the aim of teachers is to develop students’ practice with all the interacting pieces (they might be tested, for example, on their ability to engage and convince others of something they themselves have learned).

All of these ways of thinking about education are still in play; much curriculum development, for example, still uses behavioral models.

This historical survey raised a series of related questions for and comments from us:

* Should the field of education be modeling itself on scientific methods and processes?

* How much of this modeling is actually based on “scientism” (an attention to method) rather than on what we understand, more broadly, as science (for example: most state testing does not aim to determine reasoning skills)? And how much does this focus “degrade the mission” of education?

* Another complication arises when policy makers and politicians get involved in defining what constitutes science.

* The U.S. crisis in science and math is seen either as 1) a technological failure or 2) a failure to achieve the elite status of financial and economic security.

* Why is education not called the “science of learning”? (Because most of the field actually involves the “science of teaching”; other branches look @ policy and administration.)

* Isn’t education really “more engineering” than science? More manipulation of objects (human beings) than an open-ended process of inquiry?

* Historically, education derives neither from engineering nor science, but from “problems of social organization”; the original theorizing came from philosophers and political scientists.

* Neither learning nor teaching was a central issue of education when it first came into existence. The original practices of education came from religious and spiritual domains; the goals were “edification” and the cultivation of “virtue.”

* The original intention of education, in other words, was to make wise people and good citizens (Thomas Jefferson was educated, for instance, @ the College of William and Mary, where a plaque speaks of  education into “letters and manners”).

* This history has an interesting parallel in psychology, which also attempted, over the course of the 20th century, to become increasingly “scientific” on two levels, both methodological and theoretical. In reaction against “introspectionism,” and out of an interest in data that were both public and replicable, psychology developed an “obsession with statistical measures.” It accordingly “excluded a whole realm of material” (such as anecdotes and personal impressions).

* We felt that the story Howard told is very much in accord w/ the account Berliner gives in his article: when educational theorists try to be “scientific” by “insisting on the purity of their data,” they are excluding a “whole realm of material.”

* Similar problems have developed in computer science research, where teaching stories are not considered publishable without quantifiable data.

* Debates have emerged also between the field of education and that of educational psychology: which occupies a “higher rung”? Which better defines the field of study?

* These are debates not only about methodology but also about goals and ends. For example, in medicine the “gold standard” of evidence has been defined by large-scale, double-blind clinical trials. Such studies are the best way to get information on what is useful, on average, to large groups. A “big enough” population sample will yield the comfort of statistical norms.

* Compare the paradigm offered by a number of medical journals, which supply reports about individual cases. Such qualitative research involves a systematic effort to correct large-scale studies aimed @ identifying the average. Might educational researchers look to that paradigm?

* Standardized tests, for example, are actually designed to produce a normal distribution. They are also totally irrelevant to improving learning.

* Such tests are not very useful if the objective, as in education, is not mass production, and the goals are locally specific, particular to urban, rural or suburban sites, to our own situation in the bi-co, or to large public universities.

* Working more locally would require a large paradigm shift: the model for teaching in large urban high schools, for instance, is currently derived from small college classes.

* Might educational research start with questions, rather than with methods?

* There might be a way out of the quandary of feeling caught between local and national goals: a practice usable for all students, despite their variation. Such a different approach might take as its objective “providing an environment in which every student will prosper.” This approach, which should bring all students to new positions advantageous to them as individuals, of course raises the problem of evaluation: a new methodology will be needed.

* Will it still qualify as “science”? Will it still be objective?

* The core of the controversy, as we see it, is the scope, extent, and practice of science. “No Child Left Behind” exhibits a very narrow view of science.

* Following a mistaken understanding of science, education has defined its task too narrowly. Education might well re-think its thinking, for example, about the social elements of classrooms, which are largely imagined now as distracting, uncontrolled variables. Rather than treating them as problems to get rid of, mightn’t we re-define them as virtues?  Rather than continually refining our instruments of measurement, to “get rid of noise,” mightn’t we recognize that the so-called noise is actually information?

* For example, rather than defining science as mastering what is external, try to use the classroom itself as a laboratory: run the experiment of trying to achieve a steady state by turning the heat up and down, and…?

* An irony here is that the increased importance placed on quantitative reasoning in college produces an educational environment that values quantitative measures of achievement in K-12 schools.

* Science classrooms might move towards increasingly broad objectives, of increasingly sophisticated inquiry.

* Students reporting back on summer internships say that the most valuable part of their experience was personal development: learning “persistence and motivation in the face of failure” (a pretty good description of doing science!).

We heard a story about learning to teach by teaching people how to paddle in white water. The objectives for the graduates were to learn what they needed to pursue that sport. In such a chaotic environment, they needed first to learn how to observe safety and manage fear. Since they were not in the academy, they didn’t have to fail; they could repeat the course; there was no pre-defined limit to their continued learning.

We found this a striking parallel to the classroom situations we had been discussing; perhaps our goal should be to enable our students to live better in the world, which is such an odd, unpredictable mix of order and chaos. The challenges for us all are always very situational. How can you “get people to be good @ unpredictable situations”? Is that an objective standard? One that is difficult to evaluate? Is it a matter of becoming the “master of an environment”? Or of learning how to flourish there?

Most students don’t actually have to learn how to be successful “rafters”; most of them will not go on to become professional scientists. What they all will need, however, is an enhanced skill in negotiating an uncertain world.

Contrast, however, the very different objective newly defined by Haverford College’s Faculty Committee on Academic Achievement (FCAE): a major goal of the College is now to introduce students to scholarly research earlier in their careers, in order to “produce scholars.”

Our discussion of the science of educational research turned then to the perusal of an IRB form. Why have these become de rigueur in educational studies? Is the motivation one of legal protection? Or is it part of education’s push to be more scientific? Might the two be closely linked? IRBs provide guidelines for research done with human subjects. They assure researchers that “nothing distinctively personal is going on,” and involve the effort to keep such discussion out of public conversation.  Given our conversation, they may be seen as a “symptom of the problem”: putting certain forms of inquiry outside the purview of science, and enforcing its “collectivist, impersonal character.”

We briefly discussed the potential scope of this group’s activities. Does it have a goal? A direction? Are we interested in conducting some sort of formalized study?

We concluded by returning to the question of what science is. Had our definitions changed over the course of our discussion? A few were offered:

* Science is an imaginative resistance to authority.
* Science is storytelling and story revision.
* Science is empirically based, socially created scepticism.
* “Science is a belief in the ignorance of authority.”
* “Science is individuals doing their damnedest with their minds, no holds barred.”
* Science is learning to successfully navigate an unfamiliar environment.

Steve agreed to provide the reading for our next session, on November 2, which he will co-lead with Anne.

Comments

Paul Grobstein

beyond the present in science/education/educational research

What struck me particularly from our conversation was how readily we (and others) accede to a curiously narrow view of science, education, and research in education. Science is equated with a body of knowledge and understandings. Education is equated with learning, the acquisition of knowledge and understandings. And research in education is then equated with assessing the efficacy of various methods of promoting the acquisition of knowledge and understandings. What's missing, in all three contexts, is any engagement with the idea of evolution, of change over time and the resulting creation of new things. Perhaps a shift in orientation from a concern about what has been/is to an enthusiasm for what might be could be useful in all three realms.

Science is, of course, at any given time a body of knowledge and understandings. But it is also continually evolving. From this perspective, what is important about science is not what the understandings are at any given time but the new possibilities that existing understandings open up for future understandings. Maybe that's a good way to think about education as well? It is not about "mastering" current knowledge/understandings for their own sake, but rather about acquiring the skill to use knowledge/understandings to open up new possibilities of understanding? It's not about "learning" but rather about acquiring greater facility in generating new understandings? Education is not about "elucidation," about explaining things, but about "edification," about becoming wiser?

If so, research in education shouldn't be narrowly focused on "learning," but rather more broadly concerned with exploring how people become better able to use observations and existing understandings to generate new understandings.  My guess is that the current focus on research on "learning," and the associated tendency to define research in terms of particular methods, reflects a significant failure of imagination, of the ability to move beyond current understandings.  Let's be adventurous, take some risks, see what new things we can imagine not only in science and in education but in educational research as well?

I like Bill's white water rafting education as a metaphor for this. But let's imagine an infinitely long and very variable river. Sure, there are understandings that can be conveyed by people who have successfully negotiated parts of the river and, yes, there is some "objective" measure of success of both teachers and students: how far down the river students can get without overturning after interacting with teachers, in comparison to before. But there is no absolute measure of success nor any presumption that any given student will successfully negotiate any stretch of the river in the same way. What has to be learned is not how to deal with what has been experienced by someone but rather how to deal with the as yet not dealt with. That seems to me an apt characterization of science, a reasonable characterization of the task of education, and an opening for seriously interesting and practically meaningful educational research that will require moving well beyond a reliance on existing methods.

 

Anne Dalke

"Not quite like the natural sciences"

There was a piece in the NYTimes Books section last week which seemed, quite interestingly, to echo our most recent conversation. "Field Study: Just How Relevant Is Political Science?" "concedes that political science is not quite like the natural sciences. First, the subjects under study 'can argue back'....But.... it uses the same rigorous mechanisms to evaluate observations as any other science." The essay also references the "Perestroika" movement, described a few years ago by Sandy Schram, of our Graduate School of Social Work and Social Research, as "working to promote methodological pluralism in the field of political science: this is an 'increasingly mainstreamed' attempt to move beyond conventional 'scientistic, objective, rational choice' analysis of isolated issues in order to situate them in their historical and political contexts."

Anne Dalke

Just a note....

...having completed the notes for our last discussion. We might start, next time, w/ a couple of threads left hanging this time. It was said, for example, that the field of education is less "the science of learning" than "the science of teaching." Do we actually think that teaching is a science? An art? If the latter: might the study of education still be a science, which studies an art…?
 

whuber

Agenda

The agenda, especially at items 3a, 3b, and 4a, seems to orient this learning community towards conducting scientific research in education. That's an interesting topic, and a controversial one, as Berliner's article (and the others that accompany it) attests. But isn't this peripheral to most faculty interests? As applied practitioners, we would more naturally be oriented towards (a) improving learning among our students, (b) augmenting our own teaching skills, and (c) creating "infrastructure" (organizations, expectations, communities, outlooks, philosophies) that positively influences STEM education at all levels in society. As such, it seems our interest in the debate over how education research should be conducted would not focus on its methodology; rather, our conversation would gravitate quickly to issues that are, for us, more basic: how can we tell when a claim (about the effectiveness of an educational treatment) is valid? How can we assess the effects of our own efforts to improve educational outcomes? Can we go beyond personal, anecdotal evidence? Can scientific principles (if not scientific methodology) help us with this?
