The Good Schools, the Bad Ones and the Ugly Ways We Rank Them
There is a certain tragic irony in the fact that a crackdown on U.S. colleges with low graduation rates was announced during the very week when the Times Higher Education Supplement published its annual list of the world’s best universities. The coincidence provokes the thought that there is too much of this going on.
By “this” I mean the constant effort in higher education to sort the best from the not-best, and the best among the best from the best, and the best among the best among the best from the best among the best. (Try saying that twice quickly.) There are so many rankings now, using so many different methodologies. The Times Higher Education Supplement list -- the THE list, as it is awkwardly known -- emphasizes research done at the school. The U.S. News & World Report rankings, which also came out this month, weigh 15 factors, including alumni giving, graduation rate and admission selectivity. There are lists of the best party schools, the schools with the best teachers, the schools with the best dormitories.
Others have said it, and so will I. Such rankings are basically silly. They yield little useful information to would-be consumers of education. At the very top of the THE list, Oxford University has supplanted Caltech, which had a five-year run at the top. Does this mean that you should redirect your budding physicist to England?
Even when you think you’re getting information, you may not be. U.S. News has created, among other things, a list of law schools ranked by the percentage of graduates who wind up earning a clerkship with a federal judge. But what good does this do? Even an applicant whose heart is already set on a clerkship would find such a list of small utility. If school A sends a higher proportion of its graduates into the clerkship ranks than does school B, can the applicant be sure that it will be easier to impress a judge if she attends school A? Not unless she has a lot of other facts, too -- for example, the rate at which the students at each school apply for clerkships -- facts U.S. News does not supply and has no easy way to discover.
The many lists, crude and opaque as they might be, have largely swamped the traditional idea of searching in a personalized way not for the best school but for the school that best fits each applicant. Moreover, the existence of the lists -- and the fact that students and their parents pay so much attention to them -- encourages gaming. A former president of Northeastern University, who led the school from 1996 to 2006, admitted making a “systematic effort” to improve its U.S. News ranking. He broke no rules. He simply chose to emphasize things that would move the school higher. This decision was driven not by an assessment of the needs of pedagogy, but by a recognition of market realities.
Other schools have been accused of outright lying -- excuse me, fudging the numbers -- to improve their rankings. Several law school graduates have sued the institutions that granted their degrees, claiming that they were deceived about employment opportunities. But judges have been unsympathetic. The New York Times referred to one recent lawsuit, won by the defendant school at trial, as “the first -- and perhaps last -- such case to reach the courtroom.” In short, the courts are unlikely to fix all of this for us.
Which brings us to the potential crackdown on schools at what we might call the other end of the list. According to the Wall Street Journal, the member organizations that make up the Council of Regional Accrediting Commissions have decided to take a closer look at schools that graduate fewer than 25 percent of their students in four years. I understand the impulse, but the tool of measurement is crude.
Experts on higher education have been predicting for years that the traditional four-year model is going to die, except at a handful of highly selective institutions. Maybe so. But it’s not obvious that we can tell from the graduation rate which schools to euthanize, or even which ones deserve re-examination. A college might well have a low graduation rate because it serves a marginal population -- young people for whom, to be blunt, going to college is a risk. If we punish schools with low graduation rates, we may send the message that these kids shouldn’t be in higher education at all. (And if you’re saying, “Well, that’s true,” you might be right -- but it would be better if we could debate the point openly.)
More important, graduation rate is too crude a measure, and too easily gamed. If the accreditors set the level for closer investigation at a given graduation rate, they will simply give colleges, even poor ones, an incentive to graduate more students. They might wind up passing almost everybody. As my Bloomberg View colleague Matt Levine recently noted, you get what you measure. He had banking in mind. But the notion holds just as true for colleges and universities struggling to move up the rankings list. We shouldn’t be surprised when the same idea occurs to the schools at the bottom.
Efforts to create lists of the “worst” colleges have run afoul of something like this. When the Washington Monthly decided to rank the worst colleges, it quickly realized that it was sweeping into its net a number of historically black colleges and universities. It then found a way of adjusting the rankings that made the historically black schools vanish from the list.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
To contact the author of this story:
Stephen L. Carter at email@example.com
To contact the editor responsible for this story:
Stacey Shick at firstname.lastname@example.org