For one, see the Times Higher Education or Shanghai rankings. These steer a number of discussions on university rankings every time the updated lists are published. Hence, university ranking could be seen as an evolving mechanism (?).
Please note the question reflects the fact that RG is evolving, and asks what potential it may have.
In one sense it would be disappointing if ResearchGate became another mechanism for measuring performance. Once this happens I can see that a lot of the authenticity of ResearchGate will disappear.
No. ResearchGate would have to research the topic properly. Currently, RG scores for institutions are only a measure of the number of people actively contributing to RG! The ranking of a university depends inter alia on the quality of the intake, the quality of the lecturers, the level of support from alumni and industry, the quality of admin, libraries, IT, workshops & labs, student facilities, geographic location, and value for money, as well as the quality of research. Do you wish to set up an annual survey?
@Ian. Thanks, all answers are important. My question does relate to the future, acknowledging the steps ResearchGate needs to take. But does one see potential? This may be something to envisage. Could you clarify the point about the annual survey?
Even if ResearchGate modified certain things with rankings in mind, the main problem is that the impact factor is not really a good measure of academic excellence. All a university would have to do is build up large chemistry and biology departments in order to get top rankings.
@a.Risat. Sure, but there can be safeguards, i.e. an indication of impact weighting, hence avoiding generalisations in the case of a particularly large and successful unit within a university.
Well, of course it can be done. However, some of the principles of ResearchGate would have to be changed or adjusted. As it is, there are some very disturbing issues with RG:
- The evaluation is done in a very skewed way. The impact factor as it is is stupid and would never work. The ranking of journals has to be done within each area of research and by researchers with some credibility.
- Authors can add papers themselves and even assign them to journals. Many "publications" on RG are fake.
- The points for being "popular" must be removed. Otherwise, we have people who ask and answer questions only to become popular and get higher scores.
However, I agree that this probably should be the goal of RG: to provide an accurate ranking of people, departments, and universities.
You highlight concerns. On ResearchGate, researchers interact, hence this could be a starting point. And yes, a code of ethics should be followed, and with appropriate filters something could arise from a credibility point of view...
As for "accurate", maybe we could refer to "appropriate" metrics instead; usually the most challenging part is to define "accurate". The most realistic option is an evolving one...
Also, re: the impact factor, it is certainly not "stupid" (I believe you mean something else), but it is a matter which requires a discussion section of its own.
@Argyrios: Universities would complain about the static rankings, "We are now better" and so it will be necessary to repeat the surveys each year.
Yes, if something appropriate was available, RG could offer yearly surveys.
ResearchGate can be an indicator for some universities in some countries (on a small scale) that depend mainly on research activities and have huge sources of funds. However, universities in other regions, such as Africa and Asia, are mainly teaching-based. So the question cannot be answered in the short term.
On the other hand, RG can be used as a useful indicator within the same university and also within the same country.
I see some leaning towards RG being used as a complementary index in the beginning, aiming towards a more interactive, evolving index in the future. Once a framework arises, then one could of course have categorised outcomes.
I agree with Tony. A proper assessment framework takes every element into account. Teaching and research are intertwined nevertheless...
@Kevin. Nevertheless, RG could attempt to remain "robust" to such a situation. Why should the authenticity be lost? There is probably a way to have a resilient framework where academics play an important role.
RG moving into institutional rankings would be a positive. Many of the current rankings are not particularly accurate; RG may shed some positive light by allowing researchers to have a say on the rankings of institutions.
Diverse approaches are beautiful and may provide a more accurate picture of any phenomenon. Thus a summary of RG scores could be used for characterising university performance. Each existing ranking may have its weak points. A complex evaluation incorporating the RG scores may be more suitable. However, who will implement it?
@Andras. Thank you for your input. I actually have some ideas on this, and it is a matter of interaction and a collaborative approach. The implementation is a rather different section for discussion and action. The current question was more to steer discussion towards the aforementioned area, and it seems it is slowly but steadily doing so.
Dear colleagues,
my apologies, but I have to pour some water into your wine. I do not want to be the "bad guy" here, but for me any ranking is highly dangerous and irrelevant.
Rankings always claim to "measure" something. The term is inappropriate, since what they do is not measuring. To measure something means to assign a quantitative value to a physical property. This is hard enough to do without the act of measurement feeding back on the property and thereby introducing an error. Usually, even this cannot be done correctly. If you go to the micro world, we know that it is outright impossible.
Rankings MUST ALWAYS be irrelevant since they assume that they can do at least three impossible things:
1. Quantify something that cannot be quantified. You cannot quantify the "impact factor" or other such things with a number. It is strictly impossible. Take, for example, Immanuel Kant, one of the most famous philosophers - at least for the Western world - and the idea of freedom. How could you quantify his "impact factor", say, for the five years after his publications? You would end up with zero.
2. How do you choose the "properties" you want to measure? This is completely arbitrary. How could you justify the "properties" and the weights you give them to come up with a number on a linear (or low-dimensional) scale? Do you really believe that all these beauty contests define a real "beauty measure"? It depends highly on the taste of the individual, do you agree? Maybe the nose is too long, the breast is too small... How should it be weighted to come up with a number? Someone else has completely different preferences. Who decides how to choose and weight these properties?
3. How do you make sure that the feedback to the system you want to measure is negligibly small? If you want to "measure" something in systems or processes where humans are involved, there are two extremes. First, you can completely hide the measurement from the humans involved; this is usually done in psychological experiments. Do you want to have this in science, where YOU are measured? The second extreme is what they love to call "transparency": the properties are known, and the weights are also known. It is obvious that human beings in this kind of "measuring" process will behave so as to get a high score. But this is not measuring; this is control, since they implicitly exert pressure on behaviour to adjust to their "measurements", especially if they play the role of a bureaucrat who decides your further prospects.
My conclusion: anyone who believes in those rankings gives way to control by politicians, who usually are NOT interested in freedom of thinking but in gaining maximum control. Anyone who supports this way of thinking is digging his own grave as an independent researcher.
This would end up in so-called "universities" that are nothing more than schools for training "human resources" in a totally commercialized "New World Order".
With best regards, Hans-Michael Hanisch
P.S. Please keep in mind that we are HUMANS and not teaching or research machines. Each one of us is unique and therefore should not be exposed to such an anti-human procedure.
@Hans. You have correctly raised the issue of quantifying, i.e. metrics. The interesting fact about RG is the way it seems to have been set up and to be evolving, with researchers able to interact. As mentioned, all answers are important, and they raise issues one should consider in critical thinking, especially about issues such as ranking.
Dear Hans-Michael,
This is only a discussion. A kind of game for being informed about each other's views. Certainly, quantifying, measuring, and ranking are dangerous activities of bureaucrats, and this assessing serves mainly the maintenance of the mainstream frames of "our" world.
Of course, you have been right.
Please, be careful.
I have looked for https://www.researchgate.net/post/Why_research_gate_RG_scores_decreases but in vain. It has disappeared.
Indeed, this is a discussion, as mentioned above. Getting an appropriate framework/approach based on RG implemented is a much different dimension...
Hi Argyrios, I think RG is already generating rankings. See here [ https://www.researchgate.net/institutions/Worldwide?order=rgScore&method=total ]. It is refreshing to see how their algorithms hold leading universities in such poor regard, and how they include non-academic institutions such as the UK National Health Service [NHS], ranked in the 60s. If we accept that rankings are flawed and that they are probably a fact of life, RG may make an interesting contribution.
Thanks Matt. Indeed RG generates some indexes, but here we refer to a more elaborate issue. One should be very careful about how a ranking mechanism is interpreted, and RG is in its early days if it is to be accepted as another "legitimate" mechanism (rather than just another set of statistics) for university/institution ranking. Having said that, all answers are important and we see a very interesting interaction in this question section.
Dear Andras,
sure, this is only a discussion. Yes, but the "idea" (I would rather call it an evil) is already implanted in the brains of researchers...
I therefore took the time yesterday to give a rather sceptical comment on some kind of technological overkill.
With best regards, Hans-Michael Hanisch
Dear Hans-Michael,
Unfortunately, there are interests and vanity in the minds of people who happen to be researchers (?). Some wanted a bigger share of the bread than they merited. This is the old melody.
Research assessment, or rankings, are not necessarily "evil"; it is usually the way the ranking is done that may raise issues. In the case of research ranking (although some aspects of teaching should also be included; teaching and research are closely linked, aren't they? esp. research-informed teaching), researchers should also have a say. RG includes some of the interlinked aspects, with researchers interacting, hence the discussion in this section.
Hi. There are too many dimensions which are not clear in RG (metrics). So, I look at RG as a forum to brainstorm ideas within the context of social groups with the difference that RG brings researchers together.
I was checking on the founders of RG. Ijad Madisch (also a member here) has stated that he wishes to win a Nobel Prize through the site by disrupting the way in which science is conducted.
I may guess (and hopefully I am wrong) at where RG is heading in the future. In business, you work on a product until it becomes well accepted in the market, and then you offer it for sale at a very good price!! Or, once the product is well known, you start adding "business-related issues", exactly as the Facebook people did, and so on...
So, will you adopt the ratings at this stage? Possibly not. However, when taken as part of an evaluation system (looking at interactivity, idea generation, service to other researchers, collaborative work resulting in research papers, ...) plus other dimensions which may be defined even within RG, then the answer would be different.
So, are we able to define metrics and how these are measured within RG? It is something to look forward to.
@Hussin. Indeed, the way to implement a framework based on RG is a different matter.
I've read the initial question and the conversation it prompted. My concern is that the question assumes that RG should become another tool to measure and rank institutions and/or academicians (not to say researchers). I think a ranking is a tool that serves a specific purpose within a specific way of thinking. One of the problems of using rankings is the assumption that all higher education institutions are homogeneous and that all education and knowledge-generation/use contexts are comparable. On the side of academicians, there is huge pressure to promote and measure productivity, and that is linked to competitiveness. The whole system should be reviewed: the role of education and knowledge in a society, the notion of quality, and the use of higher education and the science and technology systems as contributors to purely economic development (that is, in the end, the motive behind rankings, competitiveness, productivity, etc.).
@Jorge. Happy new year. The question does not assume that RG should become such a tool; rather, it prompts discussion on whether it could be accepted, or has the potential to be.
Jorge, your concerns are authentic. However, here we are discussing the pros and cons, and the possibility that RG participants add value to the ongoing discussion by shedding light (as you did) on all the possible variables that could, or could not, lead to a generic measurement system adaptable to all the special cases pertaining to the different cultures around the world. Are we aiming too high? Why not?
The community at RG is diverse, specialized, experienced, all are published to different degrees, many have acted as editors or reviewers, all are opinionated, and hopefully all are ready to pitch in. So, let us take Argyrios's addressed question into consideration.
(:-)))))
As we say in Spanish, "Animo"
Dear Colleagues,
I am disappointed by this discussion.
I gave you three reasons why a ranking is impossible. No one really gave me an answer.
If you go on this way, you deliver a gun to politicians, who usually have no idea of the freedom of thinking and research. You can be sure that this weapon will be used against you. Politicians are natural enemies of independent research.
For those who master the German language, I recommend this book (the author is Austrian): Konrad Paul Liessmann: Theorie der Unbildung.
For all others, go back to the roots and read Neil Postman: Technopoly.
I do not want to promote a definitive answer to this discussion, but I MUST at least provide some way of thinking about a natural resistance of science against manipulation.
Take care.
With best regards, Hans-Michael Hanisch
I believe this discussion is constructive, and in particular the question relates to RG and its possible utilisation as another potential approach to university ranking. The question does not really relate to a critical discussion of university ranking in general, hence why colleagues avoided replying on this other matter. Indeed, it is difficult to use exact metrics in ranking universities, and from a general point of view this is, in its entirety, another sensitive matter.
Dear Argyrios,
My apologies. If you talk about rankings, especially such a highly questionable one as RG, you must discuss the foundations.
Br
@Hans. What you refer to is a long and rather different discussion going back to the roots of ranking. Your suggestions are indeed useful, but will need a different discussion section.
Dear Argyrios,
I did not make any suggestions. I just asked.
I promise to be quiet. Anyway, it is funny that I raised a question (ranking, rating, all this stuff...) that went without any answer.
Probably, RG emerges to be a forum for discussion. I appreciate that, but it does not change my mind about useless rankings.
All the best, Hans-Michael.
@Hans. On the contrary, one should not be "quiet", as this is how frameworks progress and evolve. What you also mention about useless rankings is true. Personally, I believe rankings are usually relative. I believe that in this question section we should discuss along the lines of RG. I am sure we will soon start getting more interaction.
Best, argyris
Hans, you cannot be quiet. Your arguments are well taken and deserve thought. I would like to clarify with you the relation of rankings to politicians: do you mean politicians as university policy makers, or external politicians who influence university funding?
Your concerns relate to quantification, measurement, and feedback. Don't you think participants in this forum could provide ideas (just ideas to start with) about these three dimensions, then brainstorm them, be selective, and use an agreed-upon weighting system and a transparent feedback approach?
It seems you have lots of reservations. I, at least, would like to hear about your experiences (and then others could join in).
Best regards
I love devil's advocates (:-))))
They make deliberations creative.
Indeed, this section is an attempt towards interactive feedback-based discussion on the question context. Cheers,
Argyris
Dear Hussin,
my Department was closed (Engineering) in 2004.
I live in Germany. Did you get the message?
Please take care of yourself in Lebanon.
With best regards, Hans-Michael Hanisch
Dear Hans
I agree this is part of internal politics. But let me share something with you. I studied in Syracuse, NY, and I was an engineer before moving to a different career, business. Back in 1992, a new chancellor was elected at the university, and with him came new plans, among which were cost cuts. Guess what: the engineering school lost funds versus the business and other social sciences schools, simply because sustaining the latter is cheaper when we talk lab requirements. Although the engineering school was not shut down, it was highly affected in its funding for projects and research. Back then, I started thinking about preparing myself in a different academic field, so I moved from Solid State Sciences and Technology to Information Resources Management before I left Syracuse. I did that to guarantee my continuity. Today, I love my previous major, but I am practicing another, which obliges me to keep abreast of things.
I felt the consequences and decided to move to a different track. I took the issue seriously and decided to strengthen my academic background differently, to survive any future decisions like those I witnessed.
Thank you for your concern (Lebanon), we try to be as cautious as we could (:-)))
Dear Hussin,
I was there, in Syracuse, N.Y. I have good memories.
I do understand that what you wrote happens all the time all over the world.
This is actually the essence of what I wrote. Things are bad enough; giving those guys another instrument for justifying their evil ideas will make it even worse.
With best regards, Hans-Michael Hanisch
Recently, I found a paper (not a very recent one) reporting a study on rankings. I had a brief look; here it is in case anyone else is also interested in reading it:
www.ugr.es/~aepc/articulo/ranking.pdf
The paper is "Comparative study of international academic rankings of universities" by G. Buela-Casal et al. (2007).
This is a very interesting discussion. I agree with some of the critics that the current research evaluation and quantification system is flawed in many ways. I wouldn't go as far as Hans-Michael and deny the justification for rankings and other evaluation instruments in their totality. However, we should - and here I agree with him - question their foundations and be critical. At the moment, a big problem of "the (evaluation) system" is that many researchers don't question it enough and blindly play the Impact Factor game (maybe because they have no other choice or are pressured to do so).
I'm still pretty optimistic that this will change. Impact measurement gets more diverse with the Internet and social media. Altmetrics are a big advancement in this regard. I see more and more journals taking up services, such as Altmetric (http://www.altmetric.com/) and increasingly researchers are also using person-based metrics, as provided, for example, by Impact Story (http://impactstory.org/). Academic SNS, such as ResearchGate and academia, are also part of this development.
Now the pressure is on the researchers. They need to take up these opportunities, for example by adding Altmetrics to their CVs or using Altmetrics in grant applications and job talks to demonstrate their outreach. Sure, Altmetrics are not without problems. So, it's probably best to provide a wide range of different impact criteria. ResearchGate and the RG Score could be one of them.
BTW (and shameless plug), I attached a recent publication of ours that addresses and investigates some of the points we discuss here. Feel free to have a look!
For those not willing to read all that lengthy stuff, here you can find a condensed version in the form of a presentation: http://de.slideshare.net/long_nights/beyond-citation-counts-the-potential-of-academic-social-network-sites-for-scientific-impact-assessment
Conference Paper Impact Factor 2.0: Applying Social Network Analysis to Scien...
@Argyrios, this could be true only if RG reaches a critical percentage of participation all around the world, something like Facebook in social media. Now, since the percentage is relatively low, the fact that some people are active represents neither the whole of the department where they belong nor their university. It is an open question, you see: we don't know if such an open forum will continue to exist, due to commercial conflicts (copyrighted material uploads and other issues). I'd like RG to be a new ranking index, but...?
I found an interesting publication on this topic; it's attached. According to the findings, there are in fact moderate correlations between RG indicators and established university rankings. I find the discussion of outliers (Harvard, University of Washington) especially interesting. The RG ranking seems to weight some aspects differently than the established rankings, which might make this system interesting for the high-scoring institutions (e.g. University of Washington).
Article ResearchGate: Disseminating, Communicating, and Measuring Scholarship?
This question section has now more or less paused. Actually, that is not surprising: ranking is a sensitive subject, and a number of implications arise. I will keep it live, as we may revisit it at a later point in time.
ResearchGate is the best tool to measure the performance of schools, universities, and individual scholars. To confirm the top ranking of universities, measure their performance on ResearchGate, which is the gateway of the world's scholars. There is no other ranking that is worth more than the daily and weekly updates received from ResearchGate. This is my view.
Fina, it really depends on how closely one looks at ResearchGate, and on how one uses it; wise usage will probably make it a much more useful tool in the future.
Dear Argyrios,
Sorry I came late to your discussion about the possibility of using the ResearchGate score to rank institutions. As said before, ResearchGate publishes a university ranking report; in this way, the ResearchGate administration is thinking in this direction. But because ResearchGate is new and emerging, its score for evaluating ResearchGate members, including institutions, is not sufficiently faithful at present.
In addition, it may not be sufficiently objective, as it also includes upvotes and downvotes, which can be manipulated. Furthermore, many staff are not members of ResearchGate. Teaching activities are not taken into consideration (maybe they are captured indirectly through the Q&A), so the score may be a measure of research activities only. Moreover, the impact factor is not homogeneous: for some branches of science the impact factor is high.
If the ResearchGate administration takes this criticism into consideration, its score may become one of the referenced and acknowledged ranking scales.
Lastly, I asked a similar question on ResearchGate. Please follow the link: https://www.researchgate.net/post/Can_the_ResearchGate_score_be_acknowledged_as_a_measure_for_scientific_performance_like_the_impact_factor
I would like ResearchGate to develop into an acknowledged reference for scientific contributions.
best wishes
Thank you Abdelhalim.
Indeed, I am following your question as well. I also think that university ranking systems are a sensitive issue. Anyway, let's see how RG develops.
My dear friends, here we are again together under this sensitive thread. What do you think about the actual RG ranking? For example, most of my colleagues from the Technical College have no account on RG!!! How about your colleagues? Is it an obligation to be a member of ResearchGate?! What is the policy about it at your universities?
I have attached the RG worldwide rankings for our three institutions! What do you think about them? Are they objective?
https://www.researchgate.net/institutions?facility=University_of_Lincoln
https://www.researchgate.net/institutions?facility=Ain_Shams_University2
https://www.researchgate.net/institution/Technical_College_Poarevac
Dear Ljubomir,
You are sort of revitalising interest in this question...
I think RG as it stands is quite relative, and indeed the membership of staff from universities will have an impact, i.e. researchers with high impact may not yet be members under their school or university profile on RG. This is just one point; there are many other points that can be discussed on this issue.
I think the normalized total RG score of a university (the total RG score divided by the number of ResearchGate members at that university), together with the distribution of RG scores among members (which ResearchGate provides, so we can see graphically whether the scores of researchers from the university are normally distributed or not), may be useful for qualifying universities. The total impact points of the university may also serve. I have similar questions on my account.
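As a minimal sketch of the normalisation idea above: divide an institution's total RG score by its member count to get a per-member figure that is less biased towards sheer membership size. All institution names and figures below are hypothetical, not real RG data.

```python
# Sketch: per-member normalisation of an institution's total RG score.
# All figures are made up for illustration; RG does not publish data in this form.

def normalized_rg_score(total_rg_score: float, member_count: int) -> float:
    """Average RG score per member of an institution."""
    if member_count <= 0:
        raise ValueError("institution must have at least one member")
    return total_rg_score / member_count

# Hypothetical institutions: (total RG score, number of RG members)
institutions = {
    "University A": (12000.0, 400),  # large, many members
    "University B": (3000.0, 60),    # small but active
}

for name, (total, members) in institutions.items():
    per_member = normalized_rg_score(total, members)
    print(f"{name}: total={total:.0f}, per-member={per_member:.1f}")
```

Under these made-up numbers, the smaller University B ranks higher per member (50.0 vs 30.0) despite a far lower total, which is exactly the size bias the normalisation is meant to expose.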
Dear @Argyrios, regarding the Shanghai ranking, I bring a fine article about it.
ShanghaiRanking Consultancy (SRC) launches Global Ranking of Academic Subjects 2016 using Scopus data and SciVal metrics!
"...This new subject ranking continues the SRC’s use of transparent methodology and third-party data. Ranking indicators include those measures of research productivity, research with high quality, research with top quality, average global research impact, extent of international collaboration, extent of academic-corporation collaboration, researchers with global academic influences, and academic awards..."
http://blog.scopus.com/posts/shanghairanking-consultancy-src-launches-global-ranking-of-academic-subjects-2016-using-scopus?utm_campaign=2016%20Scopus%20Newsletter&utm_campaignPK=160232092&utm_term=OP18752&utm_content=230292650&utm_source=71&BID=683201756&utm_medium=email&SIS_ID=0
Thank you Ljubomir, I am sure followers of this question will find this interesting and hopefully useful.
I am not sure if a link to this ranking page was included before, here it is for completeness:
http://www.umultirank.org/#!/home?trackType=home
[Also, an article from a couple of years ago that discussed that project relative to some statements in 2012 from the UK's universities minister. Link http://www.euroscientist.com/u-multirank-ambitious-lacking-critical-mass-face-up-other-university-rankings/ ]
It is quite interesting to see the evolution in RG's mechanisms of feedback requests and the way featured research is enabled. Something to keep in mind in the remit of this question for the future.
ResearchGate can be a global platform for the ranking of universities. To gain regional and national acceptance, the RG ranking needs to rectify some policy-based issues. Universities must accept the importance of the ResearchGate score for the academic careers of researchers and faculty. All universities should come forward to participate in ResearchGate.
Very soon, an academic paper of mine will be published which contains RG rankings for Lebanon and Iran, covering, for example, the number of published papers and the total RG points.
Dear @Argyrios, what would happen if ranking according to teaching quality were applied? Could RG serve? I do not think so. The following article brings some interesting facts from Germany and some other countries.
German Universities Oppose Plan to Compete on Teaching Quality!
German universities have emphatically rejected a proposal that they fear could mean competing for funding on the basis of their teaching quality, but the plan is not off the table.
As England prepares to unveil its controversial teaching excellence framework (TEF) ratings and the Australian government plans to award a portion of teaching funding on the basis of “performance,” German university leaders have argued that comparing teaching quality is a near impossible task...
https://www.insidehighered.com/news/2017/05/26/german-universities-oppose-plan-compete-teaching-quality?utm_content=bufferc7472&utm_medium=social&utm_source=linkedin&utm_campaign=IHEbuffer
Thanks Ljubomir. Very interesting. Although indeed RG relates particularly to the research-related portion of quality ranking.
By the way, dear Ljubomir, a consortium of Lebanese universities worked under an Erasmus+ project on the elaboration of a TEF model for Lebanese universities, supported by the Ministry of Education and Higher Education. Maybe because in Lebanon we did not have unified higher education quality standards at all, with each university instead adopting the specific accreditation standards it applied for, such an initiative is getting adopted.
Time will tell!!
Thanks Hussin. I will also try to add some feedback from the UK side.
By the way, Argyrios, the benchmark was a UK initiative, and the consultants to the consortium were English. Actually, I was in the panel discussion where the TEF framework was presented; it had three fatal flaws as I recall, and when I discussed the issue, the UK professors and consultants were on the defensive!!
For example: no mention of the information technology factor;
no mention of critical thinking as a major competency needed (we lack that a lot in Lebanon);
and yes, sticking to many metrics (some of these were not culturally sensitive).
Thanks Hussin. If you are interested in the TEF we could talk further. Actually, our deputy HoD has more info, I think.
Dear @Hussin, can you estimate the date of publication of your article which contains RG rankings for Lebanon and Iran?
Hi, Ljubomir. The article should be out within a couple of weeks. In it I am using RG scores for Lebanon and Iran. I had to visit each university through researchers I know to build up the data; the RG administration does not allow full access to country statistics, although I remember we were able to do that 4 or 5 years ago.
At present, it would be disappointing if ResearchGate became a mechanism for measuring ability. It is a matter of interaction and a collaborative approach. The implementation is a rather different section.
To @Kiran Grover. Thank you for contributing to the answers to this question. We essentially discuss the complementary nature of RG as such a mechanism, not as an absolute and unique metric for such purposes.
To @Hussin. Thank you for the reply. I am sure that all colleagues following this question will be interested to read the paper once it is out. Many thanks again.
This may be interesting for the followers of this thread.
On 16 June, Times Higher Education will reveal the world’s top 100 universities based on an invitation-only survey of leading scholars across the globe. The survey asked more than 10,500 published academics, from across 137 countries, to outline which universities they perceived to be the best for teaching and research in their specialist discipline...
http://view.mail.tesglobal.com/?qs=122469f4c69c990b2c48078ec97fdae0c0381efae25aeb87c8911cd146ad7cd47d0df47715c52fbc4e6c641a883f6ec26ebd5d9e8121ba96289bf2df437a0d566933df6090338e8d