The college rankings are out, and the ridicule is in. Will Oremus at Slate has released his own ranking of the rankings, and U.S. News and World Report took a tumble to just behind “Your father’s strong opinions based on his memories of where he and his buddies went to school 32 years ago,” and the USA Today (football) Coaches Poll. The methodology, “which is in fact almost entirely arbitrary,” is bulletproof.

Valerie Strauss at the Washington Post asks what value the rankings provide and answers: “Not much, if any.” John Tierney at The Atlantic says “my best advice is simply to ignore the U.S. News rankings,” because while “for parents and prospective students who know almost nothing about America’s colleges and universities, the ranking provides a rough guide to the institutional landscape of American higher education,” he finds that “using the U.S. News rankings for any more exacting purpose is about as good for you as eating potato chips and Gummy Bears for dinner. With maple syrup.”

The rankings’ critics make many substantive critiques: Malcolm Gladwell’s assertion that “Who comes out on top, in any ranking system, is really about who is doing the ranking”; a National Opinion Research Center finding that the numbers “lack any defensible empirical or theoretical basis”; the argument that the rankings fuel building booms that inflate costs instead of controlling them; and the simple charge, as Tierney puts it, that “they encourage colleges and universities to game the system.”

As the tiny Great Books college St. John’s puts it in its explanation of why it does not participate in such rankings:

The kinds of data used to rank schools in the U.S. News and World Report survey are not indications of educational excellence. Some results highlight competitiveness, particularly in admissions. Examples are the acceptance to rejection ratio among applicants, average SAT scores, and class rank. Endowment per undergraduate, faculty salaries, alumni giving are indications of fiscal status, not necessarily of quality of education. So-called reputation rankings—in which college presidents, deans, and admissions officers rate other schools—are also misleading; they may overlook a fine but little-known college, and even if they do point out a good one, they do not tell you for whom that school is a good choice and why.

More worrisome for some of us, though, than any particular methodological error, wrong weighting, or unintended consequence are the homogenizing forces that the ranking project necessarily empowers. A linear ranking chops the variety of human experience into discrete chunks and sorts them into lockstep tiers of conformity utterly at odds with any reasonable experience of education.

While we deride the practitioners of numerology in small Indian villages, we assign these numbers the power to bestow prestige upon some of our most powerful societal institutions, and to measurably guide the streams of millions of students into their appropriate castes.

As John Tierney puts it, giving these rankings anything more than cursory and casual attention just isn’t healthy.