The pitfalls of global university rankings: Numbers don’t lie, but context matters
What makes a university stand out? Is it the amount of research it conducts? Is it the world-famous faculty it hires? Or is it the employment rate among its graduates? Questions such as these cannot simply be answered by weighing statistics. Yes, numbers never lie, but we often don’t know which numbers to prioritise.
The purpose of this article is to persuade you to look beyond mere comparisons and to understand that any ranking that lists every university in the world, irrespective of each institution’s economic circumstances and capabilities, is not a fair ranking at all.
The companies involved
Among the many global rankings, the Quacquarelli Symonds (QS) World University Rankings, the Times Higher Education (THE) World University Rankings, and the Academic Ranking of World Universities (ARWU, or Shanghai Ranking) are the most widely accepted.
Although the three use different statistical methods to reach their findings, the variables they take into account are broadly similar, and so their merits come paired with similar concerns.
The variables themselves need to be understood. Research output, for instance, is the variable most commonly prioritised across all the rankings. ARWU assigns a 40% weight to research contribution in its analysis.
However, ARWU only counts high-level peer-reviewed papers and does not venture into small or local publications. Its data come from publicly available, verified sources such as Clarivate Analytics (Web of Science), the Nobel Foundation, and the Fields Medal Committee.
This obviously leaves out countries such as Bangladesh, because very few academics in Bangladeshi universities contribute to these high-level publications, if at all. The other variables, such as the quality of education and the quality of faculty, are likewise based on the research performance of alumni and faculty members.
QS is not so different: 50% of its weighted score is given to research. It uses a survey methodology to gauge research quality, asking academics which institutions they consider the top-ranked ones. Some may view this as a fairer model, but it carries its own bias, with academics inclined to favour their own institutions to please mentors and validate their organisations. Citations per faculty member are also considered, with a 20% weighting.
Similarly, THE treats research quality and research environment as separate variables, which together carry close to 60% of the weighting.
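To see what such weightings imply in practice, consider a minimal illustrative calculation of a composite score. The pillar names, weights, and scores below loosely echo the figures cited above, but they are assumptions made for demonstration, not any ranking body’s exact formula.

```python
# Illustrative composite score. The weights roughly mirror the research-heavy
# pattern described above; they are assumed values, not an official methodology.
weights = {
    "research_quality": 0.30,
    "research_environment": 0.29,
    "teaching": 0.295,
    "international_outlook": 0.075,
    "industry_income": 0.04,
}

def composite(scores):
    """Weighted sum of pillar scores, each assumed to be on a 0-100 scale."""
    return sum(weights[pillar] * scores.get(pillar, 0) for pillar in weights)

# A hypothetical university that does well everywhere except the research pillars:
scores = {
    "research_quality": 20,
    "research_environment": 25,
    "teaching": 85,
    "international_outlook": 70,
    "industry_income": 60,
}
print(round(composite(scores), 1))  # prints 46.0 (out of 100)
```

Even with strong teaching and internationalisation scores, the weak research pillars keep this hypothetical institution below half the maximum score, which is the position most Bangladeshi universities would likely find themselves in under such a formula.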
Stripped of jargon, this means that, in order to rank higher globally, any university from Bangladesh must focus almost exclusively on research. Herein lie the underlying issues.
The culture of research
Contrary to popular belief, research does not only mean collecting real-world data and writing up nuanced findings for a journal.
Yet conventional thinking in Bangladesh has made it so that any STEM student pursuing an undergraduate thesis is pushed to fit real-world observations to an existing hypothesis, so that the work can be turned into a contribution to a journal and published as a paper.
Research, in the truest sense, is very simple: the ability to generate knowledge, by whatever means. And generating knowledge requires proper incentive structures, both financial and administrative.
For a nation mired in economic turmoil, short on infrastructure, and lacking any real connection with the private market, research understandably struggles to become a priority.
Funding research can even seem like a luxury: it takes immense resources to pay scientists and support students, and considerable willpower to see through work that may not bring the expected results.
This is where a strong private market comes into play. In developed nations such as the US, private industry is a major contributor to university research, hiring teams of professors and students whose work benefits the company and, in turn, the whole education apparatus.
Whether it is testing the efficacy of a new drug, building design portfolios for advanced robotics, or developing policy for a social welfare project, academics are treated as experts who can lead the way, and the company benefits from being educated on how things work in the real world.
However, what happens when there is no market to support the universities? Countries such as Bangladesh still lag behind in intellectual property laws, proper regulatory policies, and innovative markets. The companies operating in this economy are mostly conventional manufacturing and service businesses, and they seldom have the capacity to fund universities to develop new products and services on their behalf.
Even when companies need such research, as those in the pharmaceutical industry do, they are more incentivised to build in-house R&D facilities or to depend on third-party institutions such as USAID, ICDDR,B, or Oxfam. The rise of such third-party research organisations is a by-product of universities being rejected as trusted research partners by the small segment of industry that can afford research at all.
This problem is not unique to Bangladesh; almost all of South Asia suffers from it, in contrast to the major universities of East Asia and China.
As a consequence, one might ask: which comes first, the market that is able to fund and develop the research culture in universities or the educational culture that can gradually build a research-based market economy?
It is a question similar to the famed chicken-and-egg paradox. Arguing over it may prove useless, but there is no doubt that each fuels the other’s proper development.
To break this deadlock, the government can play a role, and it has already proven its worth in incentivising and cultivating a research culture from within. The Bangladesh Rice Research Institute (BRRI), the Atomic Energy Institute, and many other government bodies are now known for contributing to innovation in Bangladeshi agriculture.
BRRI 27, 28, 107, and 110 are all domestically developed rice varieties bred by BRRI scientists to cope with Bangladesh’s changing climate. From flood-tolerant to saline-soil-resilient, these innovations are tailored to Bangladesh’s needs, and they were made possible by the government incentivising agricultural universities to work exclusively on research and creating jobs in bodies such as BRRI to absorb graduates after they complete their education.
For an underdeveloped country without a coherent, economy-centred policy framework, however, such targeted policies are few and far between, and their scarcity directly shapes the culture of education.
Judging our universities fairly
Since a strong research base in our universities depends on so many factors, it is evident that expecting high-quality research simply by injecting funds into universities will never work.
It requires a collective economic model that not only incentivises graduate-level study but also persuades students to stay in the country and work in innovative markets. There are, however, other variables at play in determining the rankings.
The QS ranking system, for example, uses employer reputation and employment outcomes as variables in determining its ranking.
These, too, are problematic determinants for judging the universities of Bangladesh. Local employers may be under-represented in global employer surveys, and tracking graduate outcomes systematically is resource-intensive. Some institutions may not publish enough papers, or may lack the structures required to meet inclusion criteria, and so may not be ranked at all or may only be grouped broadly.
How employer impact and employment outcomes are measured is not clearly outlined on the QS website. QS uses its database of over 82,000 “impactful graduates” (in business, politics, higher education, charity, and so on) to gauge how many of an institution’s alumni have made significant societal or professional advances. The employment outcome itself is defined as the percentage of graduates who go into paid (non-voluntary) full-time or part-time work within 15 months of finishing their degree; voluntary or unpaid work, further study, disability, travel, and caring responsibilities are excluded.
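To make that definition concrete, here is a rough sketch of how such an employment rate might be computed from graduate survey data. The categories and counts are entirely hypothetical, and treating the excluded categories as removed from the denominator is an assumption, since QS’s actual data and calculation pipeline are proprietary.

```python
# Hypothetical graduate-outcome records 15 months after degree completion.
# Category names and counts are illustrative, not QS data.
outcomes = {
    "paid_full_time": 540,
    "paid_part_time": 110,
    "voluntary_or_unpaid": 40,
    "further_study": 180,
    "travel_or_caring": 55,
    "unemployed": 75,
}

# Per the definition quoted above, only paid (non-voluntary) work counts as
# employment. We assume the excluded categories are also dropped from the
# denominator -- an assumption, not a documented QS rule.
excluded = {"voluntary_or_unpaid", "further_study", "travel_or_caring"}

employed = outcomes["paid_full_time"] + outcomes["paid_part_time"]
eligible = sum(count for key, count in outcomes.items() if key not in excluded)

rate = 100 * employed / eligible
print(f"Graduate employment rate: {rate:.1f}%")  # ~89.7% for these figures
```

The point is not the arithmetic but the inputs: every number above presupposes a clean, verified record of where graduates ended up, which is precisely what is hard to come by in Bangladesh.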
The database QS uses is proprietary, not public; it is therefore not clear whether employment with South Asian small and medium-sized enterprises is counted at all.
In Bangladesh, many business houses and small industries are not transparent about their size and stature, often for tax evasion and other reasons. QS might not deem such companies significant enough by global-market standards and may therefore leave them out.
QS also depends on institutions supplying their own alumni data, and Bangladeshi universities still do not maintain a collective database of their graduates. What data some of them hold come from unofficial sources and are not reliable enough for QS to consider.
This casts considerable doubt on QS’s estimates of employer reputation and employment outcomes.
The need to develop our own system
Given the loopholes in these systems, and the disparities among the countries whose universities are being measured, it becomes obvious that comparing a Bangladeshi university with one from the US is illogical and does not provide the full picture.
In terms of employment, employer preference, and skill level, Bangladeshi students in general cannot be far behind their Western counterparts. Glimpses of this appear whenever Bangladeshi students take part in global STEM competitions, competing despite all the disadvantages they face at home.
However, the top few of our institutions cannot serve as a sample for assessing the system as a whole. We need to bear this in mind even when comparing our universities with Harvard, Yale, or Oxford: a business graduate from Harvard is expected to join Goldman Sachs at a high salary in the global market, while a business graduate from our country may join a competitive company within our borders, but at a salary that looks low by international standards.
This precise discrepancy is not something QS can capture, which is why employability should not be measured on a single scale across nations competing from vastly different starting points.
That is why, in order to judge our universities fairly, in terms of our own economic capabilities and against global competitors such as India, Vietnam, or South Korea, we need to develop our own system.
It should take SMEs, small businesses, and other non-conventional employment options into consideration in order to judge a graduate’s post-graduation success accurately. This extends to understanding the employability a particular university confers as well.
Local company norms and culture do not resemble global ones. Private-sector employees reading this article after a ten-hour shift will easily recognise the need to evaluate employability against the requirements local companies actually demand.
Overall, the old proverb that ‘numbers do not lie’ may very well be true, but not in every context, because for numbers to make sense, it is we who must understand the premises that give them credibility. Therefore, any ranking that judges universities across the world without considering the economic, cultural, and political contexts surrounding those institutions is not a ranking that can claim credibility.
It is not the fault of the numbers; the fault lies with those of us who calculate them without proper regard.