Juan reads a paper part 1: The blame.

University rankings are frequently misused by the public, but is it their fault or the fault of the ranking creators? Join me as I explore a paper that could answer this question.

Some universities love to boast about their positions in university rankings, almost as if they were part of a football championship. However, these rankings were never intended to be used this way. This is common knowledge within the science and technology studies community, but the causes are open for debate. For example, who is to blame for the misuse? Are the producers of the rankings guilty of negligence, or are the consumers of the rankings guilty of a long-shot mentality? I was just debating this issue with my colleague Thomas Franssen, who argued that the producers have a responsibility in how their rankings are used, while I argued that they do not. To support his argument, he suggested that I read the paper Rankings and Reactivity: How Public Measures Recreate Social Worlds. In my other blog post, Juan reads a paper part 2: The experience, I described my experience of reading this paper; here I will give a short overview of my impressions after reading it, because I believe that this paper is quite relevant for anyone interested in the misuse of university rankings.

The paper proposes to frame the misuse of rankings through the concept of Reactivity, which, in the field of sociology, refers to the phenomenon whereby measuring an object of study also changes that object. I believe that Reactivity is useful for thinking about the consequences of misusing a ranking. The paper identifies two mechanisms through which Reactivity manifests:

  • Self-fulfilling prophecy: A false assessment of a university causes that assessment to become true. This mechanism can happen in four ways:
    • The Effects of Rankings on External Audiences: Students think that the university is good, therefore good students enroll, and therefore the university becomes good.
    • The Influences of Prior Rankings on Survey Responses: The evaluators of a university know that the university was evaluated positively in the past, so in the absence of relevant knowledge they evaluate it positively again.
    • Distributing Resources by Ranking: Positively evaluated universities receive more money, therefore they become better universities.
    • Realizing Embedded Assumptions: The universities want to perform better in rankings, therefore they focus on improving the attributes that the ranking measures.
  • Commensuration: An assessment of universities changes how the public assesses universities. This mechanism can happen in three ways:
    • Simplifying Information: The ranking measures only a few attributes of the universities, therefore the public thinks that only these attributes matter.
    • Commensuration Unites and Distinguishes Relations: The ranking groups different types of universities together, leading the public to think that these universities are of the same type. For example, if you make a ranking of law schools, the users of the ranking would naturally think that the schools are comparable, when in reality they have totally different focuses (e.g. penal, business).
    • Commensuration Invites Reflection on What Numbers Represent: The attributes that the ranking measures are supposed to represent something that they don't actually represent. For example, the number of papers published is supposed to represent productivity, but it ignores other forms of productivity (more information on this example can be found in this other paper published by our institute).

With these new concepts I can think and express myself more clearly about the misuse of university rankings. I now have a new position in my debate with Thomas: I believe that the producers of rankings are responsible for Simplifying Information, and if they make more rankings that measure different attributes of the universities, then people can make their own judgments about which university is better according to the attributes they value the most. For example, I am currently working with my colleague Rodrigo Costas on a paper about analyzing the researchers of universities (i.e. their workforce): how much the researchers of a university collaborate with each other, or how much the university's production depends on its most productive researchers. We expect that these new perspectives will provide more contextualized insights into how universities perform.

My takeaway message for you is to remember the concept of Reactivity: the next time you think about university rankings, it will give you a clearer vision, and you might be more critical.
