While there is a definite need for some ranking - if only to spur a healthy measure of inter-university emulation - in my experience much of what drives these rankings is PR rather than actual achievement or quality in teaching and research. There is also a hard-to-avoid slant towards English-speaking universities.
As a case in point, I have studied at, or been staff or a visiting scientist at, five universities: Melbourne University, Monash University, INPG in Grenoble, KIT in Karlsruhe, and the Free University in Brussels. These five universities rank rather differently in worldwide surveys, yet I found essentially no differences between them; the differences lay far more in what the students themselves brought to the table - what a student wanted to gain from their time and experience there, and the level of (intense) effort and focus they were prepared to bring to bear. All five universities had both stellar and occasionally somewhat less good faculty. World-renowned physicist Charles Joachain taught at the relatively lower-ranking Free University of Brussels, for instance.
Further underscoring the unreliability of such rankings, different universities score and rank differently depending upon the particular survey - as a case in point, in some key categories INPG Grenoble ranks either above or below Ecole Polytechnique at Paris/Palaiseau depending on the survey. Emphatically, visibility and ranking seem to be strongly influenced by each university's PR policies and its use of the English language in external communications (which may explain why Ecole Polytechnique de Lausanne consistently ranks significantly higher than Polytechnique Paris).
The system is imperfect, yet it is necessary. Some of the imperfections cannot be easily addressed, because addressing them could lead to negative consequences: (1) trying to remove the language bias could lead some to make less effort to use the lingua franca of science, and to a new fragmentation of science; (2) it could lead to less inter-university rivalry and emulation. So we may be stuck with a necessarily imperfect, yet useful, system.
The main purposes of universities are teaching, research, and technology transfer. To these must be added instrumental goals such as efficiency, financial and economic balance, and organizational wellbeing. Each of these dimensions must be measured and evaluated according to different parameters; it is therefore impossible to synthesize the performance of a university into a single value. For that reason, university rankings can be very dangerous for decision makers both inside and outside the university.
Most students have only vague and uncertain views on university rankings. I think university rankings have no importance in the local choices of future students. Comparing universities - as H Chris Ransford put it - is difficult to do. For me, this question is merely an interesting theoretical game.
Academic ranking and benchmarking of universities (nationally as well as internationally) is important, as it gives an independent way of judging a university, and it is good for the health of the university system. University ranking is not everything, but without it, most of the effort to improve our universities may come to nothing. However, it is necessary to distinguish different types of evaluation. For some, evaluation is reduced to science, and science is reduced to teaching and research. But surely science is more than teaching and research.
Rankings are just a guideline in choosing a good place to study. Most international students look for a highly ranked university, since it assures them that support and services are at the top level. However, the quality of supervisors, especially for research students, should also be taken into account.
There is no universal agreement on university rankings. The rankings are mere parameters or indicators built on varied bases. When a university is ranked badly by several different ranking methods, its administration must move to rectify the shortcomings with concrete steps. Rectification ought not to be directed only at the "weakest" link (I mean the academic staff) but also at the "strongest" link (i.e. the administration staff). A management that thinks of the university as its kingdom or backyard garden must go. When things are straightened out at the top, the students will eventually reap the advantages.
At present, we speak globally of value-based education. Therefore, we need not worry about university rankings. The criteria behind the measurements are quite often questioned. Instead, we should consider what the university has achieved in changing society through its students.
Those rankings are, at best, a big mistake in my opinion. We should instead have surveys of universities, as there are for hotels on the market. Say the survey report states whether a university has wi-fi, a library, emergency health care on campus, laboratories, dorms on campus, 24-hour support, convenience stores, bookstores, the level of the teachers, whether students find a job right after graduation, its teaching methods, and so on. Students could then compare schools against those published standards. That would be more than enough, and genuinely helpful; directories with such items have existed for a long time. But ranking is a negative service to both students and institutions. Imagine a student who can only attend the worst university in his country, even though it is not a bad one after all - there will always be a last position. How would he feel about himself and his education? Rankings are useful only for financial purposes: private institutions can charge more for tuition, and publicly funded ones can ask for bigger investments from their governments. For the teaching process, rankings are useless or, worse, counterproductive.
Webometrics is an index, provided by the Cybermetrics Lab (CINDOC) unit of Spain's National Research Council, that ranks the websites of universities, scientific centers, and educational and research laboratories around the world. Each parameter comes with details and a procedure for calculating it. The Webometrics ranking periodically shows the scientific and educational activity on the websites of universities and scientific and educational institutions, and its criteria draw institutes' and universities' attention to their presence on the Internet.
We (universities and "buyers of education") should be wary of rankings in general, and of any referral system, as they can easily play into the hands of lobbyists and become an end in themselves - especially when a particular ranking attains de facto status. This is the lesson we learned from credit rating agencies and the 2008 financial crisis: far from being neutral arbiters of credibility, the rating organizations sent out the wrong signals. This applies even more to universities, because they exist not just to educate but to signal the employability of their graduates. The point is not to be locked in to any particular ranking, but to consider a battery of rankings, and thus to hedge against the "systemic risks" that tend to pervade the business of ranking - not to mention the wisdom of diversity: leveraging the relative strengths and weaknesses of different rankings.
Due to asymmetric information on the part of students, parents, and other stakeholders, there is a tendency to aggregate information, and to create platforms to do so if they do not exist. This can lead to moral hazard and adverse selection problems. We need to take a fundamentals-based approach, just as we are advised to with businesses. The common denominator of credibility seems to be longevity and survivorship: the best universities are often those that have been around relatively longer than their peers, both locally and globally. Think about it: would you want someone on staff whose degree comes from a now-defunct institution that was known, for a time, for its state-of-the-art/science education?
Resilience-longevity (Peter Drucker predicted the demise of the university, but we have ended up with more of them rather than fewer) and survivorship (here, survivorship bias is a good thing) are even more important nowadays, when we are swamped with new-fangled education philosophies and pedagogies. These are often purveyed by the "new kids on the block", who have nothing to lose. Older, more established universities see little need for them, other than to be seen as enlightened, and most are unfazed by these emergent "frontiers".
Most fail to realize that, despite changes in technology and the sprouting of fads, our so-called "wetware" is basically the same. Not all "software" upgrades to the "wetware" have led to improvements, and improvements should not be confused with resilience. Resilience is what ultimately undergirds employability.
Beware of rankings that stress the new-fangled instead of resilience.
I think rankings are important. However, they represent only part of the picture, so we should be careful in how we interpret them. If we are simplistic, they provide a distorted picture, and this can be the basis of faulty decisions.
To sum it up, I would say that I don't want to spend my life being a number!
Yes, everyone SHOULD be wary (especially trained researchers). And, no, the idea of transforming data into a "rank" is problematic at the most fundamental level. Data is necessary, yes. But obscuring it in the fashion that ranking methods do is simply unacceptable. We should be transparent - not "transparent" with misleading numbers.
Please be cautious in interpreting university rankings. The most recently applied methods have been particularly problematic. See this article for more details:
On Academic Rankings, Unacceptable Methods and the Social Obligations of Business Schools – Decision Sciences Journal