
October 27, 2003

Elena Bernal (Institutional Research)
"College Rankings: What Factors Drive Them?"

Summary
Prepared by Anne Dalke
Additions, revisions, extensions are encouraged in the Forum
Participants

Elena gave a very thorough presentation of the methodology, impact, and visibility of the U.S. News College Rankings. Since they first appeared in 1988, both the methodologies used and Bryn Mawr's place in the hierarchy have been fairly consistent. Elena reviewed the comparative ranking histories of Bryn Mawr, the Sisters, and the Tri-Co; explained the current methodology, including the most critical factors and the trends in the subrankings; described Bryn Mawr's current standing on those critical factors, relative to our peers; and turned finally to the thorny issues raised by the methodology U.S. News uses.

The largest single factor in the rankings is "peer assessment" (Bryn Mawr is asked to "grade" other national liberal arts colleges, just as they are asked to "grade" us). Other factors include faculty resources, selectivity in admissions, and percentage of alumni giving; a particularly interesting factor is the "value-added" aspect of the figure for graduation and retention rates (in which the actual rate is subtracted from the predicted rate). Retention rates play out differently at single-sex schools; women's colleges are systematically low in retention.
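(A rough sketch, purely for illustration: the factor names, weights, and sign convention below are hypothetical stand-ins suggested by the summary above, not the actual U.S. News formula or any reported data.)

    # Hypothetical illustration only: invented factor names and weights,
    # not the actual U.S. News formula or Bryn Mawr's reported data.
    WEIGHTS = {
        "peer_assessment": 0.25,
        "graduation_and_retention": 0.20,
        "faculty_resources": 0.20,
        "selectivity": 0.15,
        "financial_resources": 0.10,
        "alumni_giving": 0.05,
        "value_added": 0.05,
    }

    def value_added(predicted_grad_rate, actual_grad_rate):
        # "Value added" as described in the talk: the actual graduation
        # rate is subtracted from the predicted rate.
        return predicted_grad_rate - actual_grad_rate

    def composite_score(indicators):
        # Weighted sum of indicators, each assumed already scaled 0-100.
        return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)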

The issues raised by the methodology used by U.S. News include their limited knowledge of higher education; the validity of the factors they measure (and the relative weight given to each of those factors); the fact that the data are self-reported; and the fact that the relative weights of the various factors can be altered and (more importantly) are guided by convenience, that is, by what numbers are available rather than by an intuition of what numbers would be most useful and relevant. Finally, no disclaimers about any of the above are provided to consumers.

There are also serious questions (as Ralph Kuncl showed in a study he conducted a few years ago) about reliability and statistical variance: a school's ranking can change by plus or minus five places with no measurable change in the actual data being reported. A number of different scenarios can affect the rankings, such as changes in the methodology used by U.S. News, as well as a variety of statistical procedures. (For instance, a school can be "tied" with a large number of schools one year, then drop precipitously the next, when the tie is broken.) Also quite striking is the use of "nullifying values" across reported data: if, for instance, fewer than 35% of an entering class report their rank in their high school graduating class, U.S. News will assign a "penalty" rather than use the data provided by the college (because they assume the college is attempting to hide the percentage of its students who come from the top 10% of their high school class?). There are also questions about the relationship between ranking and applications; students have a range of reasons for selecting the colleges they do, and Elena has been unable to find any correlation between the rankings and the choices made by students.
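(Again purely illustrative, with invented scores rather than any reported data: the short sketch below shows how breaking a large tie can move a school several places even though its underlying score barely changes.)

    # Illustration only: invented scores, not reported data.
    def rank(scores):
        # Competition ranking: a school's rank is one more than the number
        # of schools with a strictly higher score, so ties share the best
        # available rank.
        values = list(scores.values())
        return {school: 1 + sum(v > s for v in values)
                for school, s in scores.items()}

    year_one = {"A": 92, "B": 88, "C": 88, "D": 88, "E": 88, "F": 88, "G": 85}
    year_two = dict(year_one, C=87.9)   # a 0.1-point dip breaks the tie

    print(rank(year_one)["C"])   # 2 (tied with B, D, E, and F)
    print(rank(year_two)["C"])   # 6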

A number of hard questions were asked during discussion. Do the "self-selected" students who come to Bryn Mawr "really know what they are getting into"? How much does the desire of Bryn Mawr faculty to "weed out" students contribute to the low satisfaction with the first-year experience here? Might we cultivate a "more inclusive" mentality, rather than identifying the students we like and teaching to them? Mention was made of the recent article in the Atlantic Monthly, "What Makes a College Good?", which highlights a new survey, the National Survey of Student Engagement, that tries to "get at" how good a job colleges do in educating their students.

There was debate, as we ended, about whether we "really shouldn't look at these rankings": they seem an objective representation of data focused on good educational outcomes. But once we understand the way they are formulated, what should we do? Insist on having more control over how the information we provide is used? (We contribute to their money-making venture; should we not have more say in how we are represented?) Or is our increased participation in the activity an endorsement of values to which we do not subscribe? (Do we subscribe to these values? When we grade our students, we are ranking them along a single axis of value....) If we do not appear in the rankings, how will first-generation college students find us? Will we lose prestige? Should we be asking U.S. News to do our advertising for us? If we do, have we any right to complain about how they do so? Should we follow the example of the institutions that have chosen to opt out of the whole procedure? This conversation continued online, where you are invited to contribute to it further.

It will resume (from a very different angle!) next Monday, when Radcliffe Edmonds of the Classics Department will discuss "The Theology of Arithmetic."


