Don’t trust the varsity rankings
Published on: Sunday, June 18, 2017

By Dr Sean Matthews
OVER the past few months I’ve been working with IDEAS on a project entitled “Autonomy and Accountability in Higher Education”.

We’ve looked at the way universities are managed, funded and designed.

One striking thing about the modern university is the importance of university rankings: however different the context, culture or history, it is clear that university managers and government policy makers all have an eye on the annual rankings. Every year when the rankings are published there is a burst of controversy.

Who is up? Who is down? Are we in the Top 100? How many of our local institutions are doing well?

Rankings catch our interest because they are simple and clear. They tell parents and prospective students which are the best universities – where to invest their fees and years of their lives.

Rankings tell employers which universities produce good graduates. They tell policy wonks and managers whether their strategies and directives are paying off. They tell us which universities, even which departments, are performing well... and which are slacking. Rankings tell politicians whether the massive state investment in higher education is delivering results, whether we are building world-class universities and producing high-quality, world-class graduates.

But here’s the thing. Rankings are a scam. They don’t do any of those things. They can’t do any of those things.

They’re subjective, misleading, and profoundly damaging. When we use rankings to determine education policy, set performance targets for academics (from vice-chancellors to the newest lecturers), or even just to choose a programme of study, we are putting our trust in a horribly flawed enterprise. We are surrendering the autonomy to choose the priorities and direction for our children, our students, our universities, our country.

World university rankings began only in 2003 and 2004, when Shanghai Jiao Tong University published its Academic Ranking of World Universities (ARWU) and the UK's Times Higher Education Supplement followed with the THE World Rankings. But right from the start the rankers had a fundamental problem.

What data could they use to calculate their rankings? If there isn't even universal agreement on how to define "student" or "member of staff", how is it possible to move to more sophisticated categories of value?

How can you compare Oxford with Universiti Sains Islam Malaysia, MIT with UTAR?

The only readily available, quantifiable and more-or-less verifiable data at institutional level, in fact, does relate to numbers of staff (and students), perhaps their qualifications, maybe the amount of research grants won, and of course the number of publications and patents to which they’ve contributed. And so these indicators have always been the basis and core of rankings.

The rankers realised immediately that such a dataset was hardly a convincing basis for cross-border comparison – for ranking – of the world's universities. So they introduced a further variable: reputation.

They send questionnaires to academics all over the world and ask them what they think are the top universities.

And every year the top 10 or 20 are more or less the same. Think about it: ask anyone about the “top” football teams in the UK, you’ll always hear Manchester United, Liverpool, Arsenal, Chelsea. Maybe Manchester City.

You’ll never hear Leicester City (except perhaps in Leicester), and yet they won the Premiership last year.

Recent analyses of major rankings agencies argue that, in effect, rankings only look at research.

This is because there is simply no data available which might allow us to compare other important things that are central to a university’s mission. Things like teaching quality, student experience, local and regional impact, graduate destinations and employability, or even facilities.

The flaws with the rankings don’t stop there. Even once they know how many publications have come out of an institution, how many patents were submitted, how much research cash was accrued, and how well that place is thought of by a few academics around the world, our rankers still have to give “weightings” to all those elements. The results are wholly arbitrary, and surprisingly volatile.
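The effect of arbitrary weightings can be seen in a toy sketch (all institution names, indicators and figures below are invented for illustration, not drawn from any real ranking): the very same two institutions swap places the moment the weights change.

```python
# Toy illustration of how arbitrary weightings flip a ranking.
# All indicator names and scores are invented, not real ranking data.

def composite_score(scores, weights):
    """Weighted sum of indicator scores (weights sum to 1)."""
    return sum(scores[k] * weights[k] for k in weights)

# Invented indicator scores out of 100 for two hypothetical universities.
uni_a = {"research": 90, "reputation": 60}
uni_b = {"research": 70, "reputation": 85}

# Scheme 1: research-heavy weighting.
w1 = {"research": 0.7, "reputation": 0.3}
# Scheme 2: reputation-heavy weighting.
w2 = {"research": 0.3, "reputation": 0.7}

print(composite_score(uni_a, w1), composite_score(uni_b, w1))  # 81.0 74.5 -> A "beats" B
print(composite_score(uni_a, w2), composite_score(uni_b, w2))  # 69.0 80.5 -> B "beats" A
```

Nothing about either university has changed between the two lines of output; only the rankers' choice of weights has – which is the sense in which the resulting league table is arbitrary.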

The rankings have an appallingly damaging impact on state and university policy and strategy.

Measures become targets, unintended consequences proliferate, incentives become more and more perverse, at all levels.

Rankings agencies are, in the end, commercial entities. They sell a single product – rankings.

If the product becomes boring – the rankings are stable, the results predictable – we’ll lose interest.

So although it is ludicrous to imagine that much really changes from one year to the next (unlike in the Premiership), each year the rankers hype the new "league table" and we all hold our breath...

In 2008, there was a financial crash. Investments that were meant to be rock-solid, AAA-rated, proved to be valueless. Banks and investment houses failed or had to be bailed out. Recession and depression followed.

Many of us wondered how such a thing could happen. And when we looked closely, we found that the ratings agencies – the very bodies that advised us where to make safe investments, that advised policy wonks and managers whether their strategies were paying off, that advised politicians about the strength and security of the economy – had done no such thing. The global economy has not yet recovered.

The university rankings agencies are to higher education what the ratings agencies were to the global financial system. We know how it ends. We have been warned.

Dr Sean Matthews, University of Nottingham Malaysia
