The world is full of strata

Thu 28 Jun 2007 07:06 AM

As Greg noted recently, there are no real measures of scholarly impact for philosophy journals. The blog Brains links to a recent effort by the European Science Foundation to provide such a measure. (I encountered the Brains entry via Brian Leiter's blog.) Various journals in philosophy and science studies are ranked A, B, and C. These lists are meant to represent the exposure and stature of the journals.

The lists are available as PDFs: philosophy and HPS.

The ESF FAQ offers several caveats. These are not intended to be rankings of journal quality; a C-ranked journal might still be quite influential within a region or scholarly niche. The rankings may be used to judge programs or institutions, but they should not be used to judge individual scholars.

One wonders whether people will heed these caveats, however. Especially to an American, A, B, and C look like grades of quality. (Although I know that students are given numbers instead of letter grades in parts of Europe, I'm not sure whether letter grades are an exclusively American affectation.) Regardless, there is a tendency to over-interpret a ranking when no other ranking is available.

As an analogy, consider Leiter's Philosophical Gourmet Report. It is an influential ranking of graduate departments, but it is specifically a ranking of the research stature of the faculty within such departments. Nevertheless, it is used much more broadly than that-- largely because there is no comparable instrument for explicitly comparing graduate programs or philosophy departments.

The methodology of the Gourmet Report has been revised in recent iterations, and I will grant for the sake of discussion that it is now a decent instrument for measuring what it claims to measure. However, its influence was waxing even before its methodology had been honed. And even an accurate instrument can be used incorrectly. Consider some examples. (1) The tendency to take the rankings as judgments of department quality may lead job candidates to treat any ranked department as better than an unranked department. A job at a first-rate liberal arts college might still have much to recommend it over a job at a school near the bottom of the list, but liberal arts colleges are not even eligible for the list. (2) It is an all-too-common fallacy to judge a philosopher by the prestige of their institution rather than by their own work. This does not require explicit rankings, but it is perhaps abetted by them.

Leiter offers such caveats, of course-- just as the FAQ for the ESF journal ratings explains that they are not ratings of quality. Yet a straightforward rating is an appealing thing. Once we've got one, especially when there is no other instrument at hand, it is tempting to use it too widely.

Once you've got a hammer, the world is full of nails. Once you have a ranking instrument, the world is full of strata.