Rank Outsiders

16th October 2023

Can a new ranking reverse fragmentation in higher education?

Last month saw the latest Times Higher Education (THE) World University Rankings 2024 published to the usual fanfare of marketing (from highly ranked universities) and criticism (from disdainful academics and commentators). Few things get educators as riled up and divided as university rankings, the definitive wedge issue of academia. But this year, the rankings have been published in the shadow of alternative voices from the BRICS countries, voices that could change the face of university rankings globally.

What are university rankings for? Originally, they were conceived as a way to provide a systematic list of institutions based on certain criteria that would be of value to potential students and their parents as they make one of the biggest decisions of their lives. As the father of a 19-year-old, I have just gone through this painful process, with conversations going something like this:

CHILD: I want to go to X University

ME: But what about Y University – the grades are lower for admission, it’s a great place to live, and the course offers a wide variety of options in your third year?

CHILD: I want to go to X University. It’s better than Y. It has prestige.

ME: How do you know?

CHILD: It says in those rankings.

ME: But they are based on irrelevant criteria, cover research more than undergraduate studies, and are completely disowned by the academic community!

CHILD: (shrugs) I still want to go to X University

Needless to say, there was no discussion of alternative ways to weigh up universities’ relative merits, but had there been, it might have been useful to reflect on events in July 2023, when a meeting of education ministers from the BRICS countries – Brazil, Russia, India, China and South Africa – declared war on existing university rankings and committed to developing a new one.

Their objections focused on the cost of participating in such rankings and the effect they have on research culture in universities. While a timeframe and further details have yet to be announced, it has long been felt by many universities in Global South countries that playing the rankings game was not worthwhile, as it sacrificed research norms such as collaboration and sharing. Furthermore, many academics can get drawn into trying to publish research in certain journals in order to ‘score’ more highly for their institution – journals that may not typically publish the methodologies they or their research cultures would normally use. We have, then, systems in place where rankings – for both universities and the journals their academics publish in – dominate the higher education agenda.

Criticism

University rankings have not been around as long as some may think. The US News & World Report first published its US college rankings in the early 1980s, followed in the early 2000s by Times Higher Education (THE). Several other rankings have sprung up in the meantime at international and national levels, meaning that prospective students have never had more data and rankings to support their decisions.

Providing more data to improve decision-making is usually a good thing, but the flip side is that such is the power of the rankings that universities are tempted to chase ranking points rather than focus on their core mission. In his book Breaking Ranks, former university administrator Colin Diver charts the rise of rankings and how they can persuade applicants to zero in on pedigree and prestige, while inducing higher education institutions (HEIs) to go for short-term gains. Not only does this rig the system, Diver argues; it also reduces diversity and intellectual rigor in US colleges.

Looking at this problem more globally, a panel discussion entitled ‘University Rankings: Accept, Amend or Avoid?’ was convened at the STI Conference in Leiden in The Netherlands (https://www.sti2023.org/) in October 2023. In establishing the panel discussion, the conference detailed events that had led up to the inclusion of this topic at the highly regarded conference on science, technology and innovation indicators. In the prior 12 months alone, these included: the creation of an international coalition of stakeholders with a commitment to avoid the use of university rankings in the assessment of researchers; a new initiative for HEIs called ‘More Than Our Rank’; the Harnessing the Metric Tide review of indicators, infrastructures and priorities for UK responsible research assessment; and Yale University withdrawing from the US News & World Report Law Rankings, followed by several medical schools doing something similar.

Speaking on the STI Conference panel, UK-based research assessment expert Dr Lizzie Gadd commented on the move by the BRICS education ministers: “The BRICS states are expressing their dissatisfaction with the well-known university rankings (THE, QS, etc.) due to these favouring the Global North. However, their chosen response of developing an alternative ranking based on qualitative inputs will only be effective if it displaces the existing dominant rankings in those regions. This is unlikely given previous efforts in this direction have not had this effect.”

Case study

So, to return to the point made by the BRICS countries and many other commentators: taken together, the dominance of the English language in research journals, Western-dominated university rankings, Western research paradigms and Western-based publishers work against authors from Global South countries, creating a form of hegemony that has been difficult to break down for decades. But how do these phenomena manifest themselves?

Back in 2010, I published an article with one of the Editors I worked with in academic publishing on the impact of the research assessment programs in the UK (REF), Australia (ERA) and New Zealand (PBRF). We interviewed academics from all three countries and asked them if they felt their research choices were influenced by the systems they worked under. Sure enough, we found that they were, with impacts felt by lower-ranked journals, which lost submissions to higher-ranked titles.

In other words, attempts to rank or score universities on their research lead some of those universities, and the individual researchers affected, to fundamentally change their approach. More insidious is how research itself comes to be defined by a relatively narrow band of publications, both as the venues academics are expected to publish in and as the arbiters of what counts as ‘top’ research. For business schools, inclusion in the Financial Times Top 100 is marketing nirvana, with the potential to increase the number of MBA students (and therefore revenue) as well as the prestige of their institution. This ranking is derived in small part from the FT50, a well-established list of business and management journals that has hardly changed for decades.

The result? Not only is this limiting for academics who are tasked with publishing in those journals as part of their commitments to their universities, it also limits what is regarded as the best research in a given area. In a study I co-authored on impact assessment, we used an AI tool to identify the business and management journals that included the most research relating to the UN Sustainable Development Goals (SDGs). Of the 50 journals with the most related content, just one was also in the FT50.

Identifying progress

So can these fractures, caused and maintained by Global North-dominated university rankings, be healed? There are green shoots that, if able to flourish, could help turn things around. The San Francisco Declaration on Research Assessment (DORA) has gained significant traction among stakeholders in the last decade, and the work of Dr Gadd and others at the STI Conference has inspired many universities to turn their backs on rankings in favor of more balanced assessment methods. For example, at the end of September the University of Utrecht declared it had withdrawn from the THE rankings, citing the stress such rankings place on competition rather than collaboration, the difficulty of scoring the quality of an institution as complex as a university, and the use of some questionable methodology.

Progress is also being made through the representation of more impactful data points in research platforms. For example, with data on nearly 140 million publications, Digital Science’s Dimensions enables access to a wide range of research outputs from outside the Global North, as well as content not in the English language. With translation becoming easier thanks to advances in AI, access to non-English content opens up a wealth of opportunities for researchers the world over.

In addition, using the Dimensions database, researchers can identify how studies relate to the SDGs using a specific filter, or rank articles by their influence outside academia with Altmetric. Other platforms are also adding wider functionality, which means citations – and specifically the Impact Factor from Web of Science – are no longer the only means of filtering research outputs.

Practical uses for this functionality include a recent report from THE, Prince Sultan University and Digital Science in which, for the first time, Global South-oriented data was used in analyzing impact, alongside the research integrity data now included in the Dimensions database. The analysis in the report showed a significant gap between higher- and lower-income nations in SDG-focused research. However, it also showed that SDG research in lower-income countries has grown over the past 15 years or so, with some increases in collaboration within those regions.

University rankings providers are, then, listening to the need for wider representation in their data, with THE now also providing its Impact Rankings, the latest Top 20 of which includes three Global South universities and only one from the US. Other rankings providers are also widening the scope of what they evaluate, so while methodological questions may persist about what can or cannot be ranked effectively, at least the focus of this activity is not squarely on prestige and performance.

Paradigm shift?

These developments help move the dial away from the dominance rankings have had over many university agendas, but they may not be enough to engender a paradigm shift away from defining a ‘good’ university as one that simply satisfies a narrow set of criteria. What may be required is a concerted effort from funders, researchers, policymakers and universities themselves to follow a different path, one that instead celebrates the diversity of global research and of different higher education approaches.

We saw in an earlier piece in the Fragmentation campaign by Dr Briony Fane how collaborative pharmaceutical research is increasingly focusing on the Global South, where the need for medicines and other interventions is greatest. For all sorts of reasons, a fragmented research world has bad outcomes for huge swathes of the global population. Similarly, the fragmentation created by university rankings has impacted much of the developing world, which is why the BRICS countries have been moved to try to do something about it. Another ranking may not be the way to go about it, as it will do little to reduce fragmentation and its effects. Universities across the world need to collaborate more, in research and in meeting global challenges, to bridge the divides between them, not compete for more meaningless points on a ranking.
