Tuesday, September 20, 2005

"Worst Practices" in Studying Colleges/Universities

Ballardvale Research has posted this PDF, entitled "Online Crisis Management: 30 Top Colleges/Universities Respond to Katrina." Though I have no doubt of the good intentions of the researchers, this strikes me as a prime example of how poor methodology yields irrelevant data.

The title of the piece is a bit misleading, because it isn't a study of 30 top schools ... rather, it is a study of what the researchers think are THE 30 top schools. And from whence cometh this data? Why, from U.S. News & World Report! For those of you not in the academic world, let me explain that the U.S. News & World Report rankings are generally considered a troublesome joke. They are a joke in that they are generally divorced from reality. For example, one of the top 100 schools in their 2006 rankings nearly lost its accreditation last year. Here's a simple formula: if you are on accreditation probation, you are not a top school. In fact, you are likely one of the worst schools in the country (or at least in your accrediting region).

We would laugh at the annual rankings, but the problem is that most parents naturally don't have the insight into the world of academics to know the good schools from the poor ones. Good students get pushed by their parents toward schools ranking high on the list. The end result is that schools fight to get near the top of the list while knowing all the while that the list is of little intrinsic value [I would link a bunch of articles in the Chronicle of Higher Ed here, but all the ones I want require a subscription. So, take my word for it ... this is a common perception among academics].

This study, then, uses the U.S. News rankings uncritically ... something that no one involved in research should ever do. This poor choice is compounded by the regional bias of the rankings ... none of the schools are near the hurricane-affected area. The report thus focuses on the efforts of schools that are unlikely to have a large number of students affected in any way.

The report singles out Grinnell, Harvey Mudd, and Caltech for "muted" responses, ignoring the obvious fact that these schools are in Iowa and California. The authors appear to sense that they are stretching when they write, "Caltech, with 19,000 living alumni (and presumably some living along the Gulf Coast)..." My own presumption is that the vast majority of alumni from a school in California are living on the West Coast, not the Gulf Coast. Let us assume, though, that fully half of Caltech's alumni live on the Gulf Coast. Except for students and academics, who out there thought, "Gosh, that Hurricane Katrina is awful. You know, I should really check my alma mater's website to see what I can do"? My guess is that most alumni would check in with the International Red Cross or some other relief organization. Heck, I'm an academic, and I didn't bother checking to see what my undergrad school in Indiana or my grad school in Michigan was doing. What reasonable person would bother?

Up until now, I've been criticizing this particular study, but that is not my overall point. What I really want to demonstrate is that methodology is important, not some esoteric concern only for those involved in theoretical work. Because of poor methodology, the report is not wrong ... worse, it is irrelevant. A relevant report might study the responses of colleges and universities in Texas, Louisiana, Mississippi, Alabama, Georgia, and Florida, states that actually have a high number of affected students. Unless I miss my academic geography, the Ballardvale report studies not a single school from the affected region. If they want to do work that is relevant, they should adjust their methodology. A report on relevant schools ... now THAT would be interesting.

h/t Volokh.

3 comments:

  1. Anonymous, 10:08 AM

    As the author of the report, let me clarify some points. In your post you note that you felt the report didn't study the relevant schools -- in your view, it should have been about the schools in the Gulf Coast region. For the purpose of my report -- which was to study Best Practices in Web site design and structure -- it is precisely those schools I had to avoid.

    The report focused on the top 30 U.S. colleges/universities as defined by U.S. News (none of which are in the region, as you note) for three reasons. First, these institutions are touted as being the best – having the best students, faculty, and infrastructure. If anyone has the processes and resources to respond in the best way, it should be these institutions. Second, they weren’t impacted by Hurricane Katrina – so they had no logistical hurdles to overcome. Their servers weren’t down, their people were available, etc. Third, because their responses were more muted, how they behaved was more interesting. For example, Tulane knew it had to respond – the university was in crisis mode. The affected colleges were expected to respond in Best Practice mode; and if they didn't, it wouldn't be easy to figure out whether their failure was due to logistical problems.

    Therefore, how did these more disinterested institutions redesign their Web sites to talk about a crisis to their constituents, given that they had everything going for them? While a story about Gulf Coast universities might be more interesting from a human-interest point of view, those institutions were not the best laboratory for examining differing responses to Web site design issues in a controlled environment.

    While your argument that alumni wouldn't think of checking their college's Web site for news seems to make sense in the abstract, it wasn't borne out by my experience. I attended Williams College, and a lot of Williams alumni did go to the college Web site looking for information about their friends and former classmates.

    Finally, while academics may grind their teeth about the U.S. News rankings, they are nevertheless accepted by the consumers of the service (parents and students) as a useful benchmark. It's my experience that there is little difference between schools ranked 1 and 2, but certainly some difference between a school ranked 1 and another ranked 50. I chose to use the U.S. News rankings because they are well-known and served as a useful proxy for schools that have large endowments, solid infrastructure, and relatively sophisticated Web sites.

  2. Thanks for your comments. I'm still not clear, though, on why you felt that the actions of the disinterested are more interesting. I'm wondering (and maybe it is not polite to ask) whether the implication you are getting at when you write, "First, these institutions are touted as being the best – having the best students, faculty, and infrastructure. If anyone has the processes and resources to respond in the best way, it should be these institutions," is that these schools aren't what they are cracked up to be? Or perhaps you are implying that aggressive action by unaffected high-profile schools is a cheap PR tactic?

    I guess the reason I find it so confusing is that, on the one hand, the viewpoint behind the practices your study criticizes was "Katrina didn't affect us," but your explanation is that you only wanted to study schools for which that viewpoint was true. Why would you criticize schools for having an accurate view? Is the study ultimately treating Katrina as a dry run for when crises happen in their own back yards?

    So, to sum up my confusing and confused questions, if this methodology shows you exactly what you wanted to know ... was this ultimately a question about web design, and not about the institutions themselves?

  3. Anonymous, 9:26 AM

    "Finally, while academics may grind their teeth about the U.S. News rankings, they are nevertheless accepted by the consumers of the service (parents and students) as a useful benchmark."

    Are they really? Has anyone done a study to see what parents and students think about the U.S. News rankings?

    And really, whether they are "accepted" as a useful benchmark is irrelevant to the "truth" of their "benchmarkness."

    To use another silly example: The Associated Press puts out a preseason football ranking of teams that haven't played yet. And people accept this as a useful benchmark. Yet as we saw in the first three weeks of the season, a lot of those rankings were pure crap.

    Such may not be *totally* the case with the U.S. News rankings, but enough so that they are worth questioning.

    And besides, why not a random sample? Or even a list drawn from the Carnegie classifications?

    But more to the point of the actual study, the conclusion states (paraphrased) that "universities should use best practices," but doesn't tell us why. What *effects* were seen by universities that used best practices? Did their constituencies even *notice* the efforts? Did the constituencies see these added efforts as valuable to their experience? These are not questions that can be answered by a quick survey of Web sites; they require more substantive methodologies like survey research or focus groups.

    And really, connecting those two dots is where the study fails.

    Bryan S.
    Arguing with signposts
