There are several indexes (e.g., impact factor, h-index, RG Score, i10-index, citation counts) used to evaluate scientists' research productivity, mostly drawing on the databases Google Scholar, Scopus, and ISI Web of Science. Since all of these measures are imperfect, how can we judge the work of a scientist?
The h-index shows something, especially if you remove self-citations, but the context in which a scientist's works have been cited ("this was the first author who developed this method" vs. "this author made a mistake in...") is also important. Moreover, consider two scientists whose h-index is 10: 1) 10 papers, each cited 10 times; 2) 10 papers, each cited 1000 times. Thus, the number of citations (self-citations removed) also shows something. Furthermore, in very narrow fields, getting 10 citations is harder, as perhaps only 5 people are interested in a specific topic, and sometimes they write together, so you have to remove self-citations. Also, methodology papers and review papers usually get cited more, but this does not always mean that their authors are very influential. I have a very high RG Score, but mostly due to being active in commenting, so I would not use this score to rank scientists: some have been much more influential in publishing, but as they do not comment on this site, their score is much lower than mine.
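To make the h-index mechanics concrete, here is a minimal sketch in Python (the function name and citation counts are hypothetical) showing why the two scientists above tie at h = 10 despite a hundredfold difference in total citations:

```python
# Minimal sketch: the h-index is the largest h such that an author has
# h papers with at least h citations each. Citation counts are hypothetical.

def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

scientist_a = [10] * 10     # 10 papers, each cited 10 times
scientist_b = [1000] * 10   # 10 papers, each cited 1000 times
print(h_index(scientist_a), sum(scientist_a))  # 10 100
print(h_index(scientist_b), sum(scientist_b))  # 10 10000
```

Both lists yield h = 10, which is exactly the blind spot described above; any real use would also strip self-citations from the counts first.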
Not all who are involved in scientific research are scientists; most of them are scientific workers, and only a very few are in fact scientists.
A scientist originates a concept on his own; those who modify, generalize, or apply that concept are scientific workers. For example, Alexander Graham Bell was a scientist, while those who expanded his idea of the telephone are scientific workers. Therefore, using indices such as the impact factor, you can rank scientific workers only. A scientist may not have a high impact factor, yet he would forever remain far above the mere scientific workers. So please modify your question a bit!
You cannot rank a scientist. Whatever the discovery of a scientist, whatever the importance of his discovery, he must be revered, and for that no index is necessary.
Further, the impact factor, for example, is a misleading index. You may work in a university that is not of an international standard; that does not mean that you individually are not of an international standard. For example, an Indian Institute of Technology may be an institute of international standard; that does not mean that every Tom, Dick, and Harry working in that institute is automatically at a better level than yours.
Now, the impact factor of a journal is decided by the impact of the journal. A journal may have a very high impact factor; that does not mean that every article published in that journal must be of a higher level than one published in a journal with a lower impact factor! Otherwise, everyone working at, say, the Indian Institute of Technology Kanpur would have to be greater than every Ismat Beg, for example. Can that be the case?
Then again, say 'The Journal of Algebra' is the greatest of all journals publishing in its area. But how many mathematical workers understand the language used in this journal? Hardly a few would therefore want to read an article published in it. In that case, how can the impact factor of this journal be a good index for deciding that it is the best journal in the line?
Most mathematics journals do not care about computing an impact factor. Accordingly, someone from mathematics may have a very low score if the impact factor is used as an index to decide his level. On the other hand, journals in biotechnology are highly interested in computing impact factors. Accordingly, someone working in biotechnology with much less intellect might be called a better scientific worker than someone working in mathematics with a much higher level of intellect.
Am I not right, Professor Beg?
Therefore, it is meaningless to decide whether one is a good scientific worker or not. A scientific worker is a scientific worker; he would remain a worker even if he has a high impact factor. On the other hand, a scientist would forever remain a scientist even if he has a low impact factor.
I consider the discovery of the sanitary latrine the most important of all discoveries made in modern times. Imagine the city of Delhi without sanitary latrines, and you will understand why I say so! Do you by any chance know who discovered the sanitary latrine? I do not think anyone has ever cared to find out. Just see the impact of this discovery, and then see the impact factor of the poor discoverer!
Unless you are referring to a specific pool of specific scientists, it should be 'to rank scientists', not 'to rank *the* scientists'.
I take it you mean the former, and are not in fact referring to specific scientists from a specific set or list?
If that is the case, then the only way is the usual: History will decide.
Every great scientist - Wegener in his groundbreaking work on continental drift, Einstein, Darwin, Pasteur, Charles Lyell who explained how canyons form, Gödel, and countless others - was relentlessly scoffed at and twitted by his contemporaries.
Frighteningly, it sometimes works the other way around too - remember Antonio Egas Moniz, who won the accolade of the Nobel Prize in medicine in 1949 ....
The same incidentally applies to politicians, statesmen, and so on. Contemporaries are likely too close to the grindstone to be able to judge and lack the requisite context; some distance is probably always called for before one can formulate a proper judgment ....
The best way is the one adapted to the purpose. Is it to grant funds, to make a hall of fame, to evaluate credibility, to choose the best political adviser, to teach the general public? Honestly, I don't think any scientist of some value cares about his ranking. Such a hierarchy is taken seriously only by people outside of the scientific community. Inside, everyone is able to judge the work of his peers.
Scientific work should be judged by peers. Ranks and indexes are for those who have only a few bits of memory and need to "compress" (note, I said compress, not understand!).
In fact, such rankings are of no use. The impact factor is illogical as an index because it belongs to the journal, not to the author of an article. Even the citation index is misleading: a review article may get more citations for obvious reasons. Then again, an author may refer to an article that he has never even read! Some authors may even plagiarize material without referring to the original.
There is no single best way. Different purposes require different methods of evaluation and ranking.
Unfortunately, research evaluation is needed at a collective level to allocate public funding. Unless you assume as many experts as there are researchers, it is impossible to rely on human evaluation without numerics. Simply because of the Tower of Babel effect, the best expert in the world cannot have universal knowledge and at some point will have to use numerics. A similar story holds for journals: any of us can evaluate the 10 journals in his own domain, but how do you compare journals in different areas? How do you tell your librarian which subscriptions to keep?
Ultimately, everything will depend on subjective judgement. As soon as subjectivity enters into the rankings, exactitude will never be possible. In fact, such rankings are not important. Of course, the citation index is an important matter: if your article is cited by people, it means your work does have an impact.
All these ranking criteria are flawed, and no single parameter can be applied to categorize scientists. However, there is a need for one, because in many countries rewards, grants, promotions, etc. are decided by administrators and politicians who cannot judge the quality of one's work; certain indexes can help them make informed decisions. It would probably be better to normalize the different available indexes and then take the average. This would at least minimize the effects of these flaws.
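To illustrate the "normalize, then average" suggestion, here is a minimal sketch, assuming min-max normalization (one possible choice) and entirely hypothetical index values for three researchers:

```python
# Minimal sketch: min-max normalize each index across researchers,
# then average the normalized values into one composite score.
# All names and numbers are hypothetical.

def min_max(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

# Hypothetical raw indices per researcher: (h-index, citations, i10-index).
raw = {
    "A": (12, 450, 15),
    "B": (20, 300, 25),
    "C": (8, 900, 10),
}

names = list(raw)
columns = list(zip(*raw.values()))            # one tuple per index type
normalized = [min_max(col) for col in columns]
composite = {
    name: sum(col[i] for col in normalized) / len(normalized)
    for i, name in enumerate(names)
}
print(sorted(composite.items(), key=lambda kv: -kv[1]))
```

Averaging after normalization keeps any single index from dominating the scale, which is the flaw-minimizing effect the suggestion is after.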
There are many ways of ranking scientists. I think the best way available now is through ResearchGate, where the criteria and the score formula are established.
There is no single best way of ranking for all cases. The best method depends on the objective of the ranking. Different objectives require different methods.
@Ismat, allow me to recommend the following reading about ranking scientists! "The researchers at Indiana University Bloomington think that they have worked out the best way of correcting this disciplinary bias. And they are publishing their scores online, for the first time letting academics compare rankings across all fields."
http://www.nature.com/news/who-is-the-best-scientist-of-them-all-1.14108
I think the h-index is good for ranking scientists. As for me, I trust in teamwork: if you find a team working on the same problem, this means the work matters to other scientists, which gives it high value.
You are welcome, dear @Ismat. I always do my best in terms of providing adequate resources!
There can be no single significant index or measure to identify and rank a good scientist; but anyone who seeks to alleviate human suffering from starvation and death, and to provide the opportunity for good and recoverable health, surely qualifies.
1) The ResearchGate Score is also one of the better ways to rank scientists.
2) The rate of research productivity through doctoral scholars, i.e., the production rate of doctorates.
3) Assessment of impact factor, citations, and h-index.
A scientist needs some of the following qualities:
1) They have a vision – and can articulate it.
2) They are passionate.
3) They are generous and think beyond their own work to support others.
4) They are resilient, and pick themselves up and keep going when they fall.
I agree, @Krishnan & Gopinath. But in a few cases only the names are theirs, while the work was done by someone else. How can we rank that?
Is it really their own work? Who can tell?
Yes, you are right, Tiia Vissak: some have been much more influential in publishing, but as they don't comment on this site, their score is much lower than that of those who comment. So I think there can be no single index or measure to identify and rank a good scientist.
@Ismat,
The RG Score and its derivatives (views, downloads, citations, etc.) may take the lead as a measure for identifying "scientific enablers". This is the reason why the RG Score, the other RG metrics, a combination of these metrics, or even a new factor that includes Q&A up-votes should be considered.
Among the listed "measures of indices", the more powerful and prominent score, emerging as tomorrow's reference, is the RG Score and its derivatives.
Elsevier Scopus
Date: Sun, Jul 27, 2014 at 11:08 AM
Subject: Scopus releases 2013 Journal Metrics
Dear Fethi Bin Muhammad Belgacem,
Elsevier is pleased to announce the release of the 2013 Journal Metrics based on Scopus data.
The metrics provide alternative, transparent and accurate views of the citation impact a journal makes, and are all available for free download at www.journalmetrics.com. The impact metrics are based on methodologies developed by external bibliometricians and use Scopus as the data source. Scopus is the largest citation database of peer-reviewed literature and features tools to track, analyze and visualize research output.
Source Normalized Impact per Paper (SNIP)
SNIP measures contextual citation impact by weighting citations based on the total number of citations in a subject field. The impact of a single citation is given higher value in subject areas where citations are less likely, and vice versa. As a field-normalized metric SNIP offers researchers, authors and librarians the ability to benchmark and compare journals from different subject areas. A component of the SNIP calculation is the raw Impact per Publication (IPP) which measures the ratio of citations per article published in the journal.
SCImago Journal Rank (SJR)
SJR is a prestige metric based on the idea that ‘all citations are not created equal’. With SJR, the subject field, quality and reputation of the journal have a direct effect on the value of a citation. It is a size-independent indicator and it ranks journals by their ‘average prestige per article’ and can be used for journal comparisons in the scientific evaluation process.
The SNIP is developed by Leiden University's Centre for Science & Technology Studies (CWTS). The SJR is developed by the SCImago research group in Spain.
More information, including a downloadable file with all journal metrics, can be found on www.journalmetrics.com
The Elsevier Scopus team
--
Fethi Bin Muhammad Belgacem,
Department of Mathematics,
Faculty of Basic Education,
PAAET, Al-Aardhia, Kuwait.
ResearchGate profile: https://www.researchgate.net/profile/Fethi_Belgacem/?ev=hdr_xprf
Dear @Fethi, the 2013 Journal Metrics based on Scopus data are attached! It seems we are saying the same thing. I have made a PDF of Elsevier's mail because it contains the links!
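For readers curious about the IPP component mentioned in the Scopus announcement above, here is a minimal sketch of the calculation as described there: citations received in a given year by papers published in the three preceding years, divided by the number of papers published in those years. All figures below are hypothetical.

```python
# Minimal sketch of IPP (Impact per Publication); hypothetical data.

def ipp(citations_by_pub_year, papers_per_year, year):
    """citations_by_pub_year: {publication_year: citations received in `year`}."""
    window = [year - 1, year - 2, year - 3]
    cites = sum(citations_by_pub_year.get(y, 0) for y in window)
    papers = sum(papers_per_year.get(y, 0) for y in window)
    return cites / papers if papers else 0.0

papers_per_year = {2010: 40, 2011: 50, 2012: 45}
citations_2013 = {2010: 120, 2011: 150, 2012: 90}  # received during 2013
print(round(ipp(citations_2013, papers_per_year, 2013), 2))  # 2.67
```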
Dear Ismat Beg Sir, I think the h-index is a better measure for ranking scientists.
There is not much happening on the PURE SCIENCE or BASIC SCIENCE front. Today's science is rated according to the requirements of society and its day-to-day relevance; therefore, rather than calling it scientist rating, we could call it "science derivation rating".
Regarding impact factors, one should be careful. Please check the following message in the attached file on impact-factor confusion.
Also, please check this publication on an interesting type of journal, ironically called Rejecta Mathematica.
People might publish too much and might cite too much without reading the details.
Each index has its strengths and weaknesses. Publications are one criterion.
One among many other criteria, dear @Marcel! Yes, I do agree! The applicability of research and its outcomes should always be measured in those scientific areas where it is possible.
There is no best way to rank scientists.
In my experience, I have seen that Scopus publications and the h-index are often used to check the performance of scientists. Otherwise, it is not practical to rank scientists in general, as we all work in a wide variety of areas; as a control engineer, I don't wish to be compared with a linguist, for instance.
Best wishes,
Sundar
Overall performance needs to be evaluated and judged, including research projects, quality papers, books, awards/honours, patents, practical utility, novel research, h-index, citations, total IF, Scopus record, reviewing/editing for good national/international journals, institution-building work, etc. ... and the RG Score too.
Kuldeep Sharma - well said! A committee always looks at the "collective/overall performance" of a faculty member. Points may be awarded under different categories. (In India, faculty self-appraisal forms have such points under many categories.)
With best wishes
Sundar
I believe that the h-index, based on scholarly citations, is one of the most reliable measurements for ranking researchers. It is more important than the impact factor, because the impact factor measures the rank of journals, not the rank of researchers.
I think it has to be based on the quality of research in the particular field. So it can be a mix of
1. Publications in top journals of that particular discipline.
2. Publications as main author.
3. Publications as sole author.
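One way to read this mix is as an authorship-weighted count of top-journal publications. A minimal sketch, with weights and paper records that are purely hypothetical:

```python
# Minimal sketch: weight each top-journal paper by authorship role.
# The weights below are arbitrary placeholders, not an accepted standard.

WEIGHTS = {"sole": 1.0, "main": 0.7, "co": 0.4}

def mixed_score(papers):
    """papers: list of (in_top_journal, role) with role a key of WEIGHTS."""
    return sum(WEIGHTS[role] for in_top, role in papers if in_top)

papers = [(True, "sole"), (True, "main"), (True, "co"), (False, "main")]
print(mixed_score(papers))  # 1.0 + 0.7 + 0.4 = 2.1
```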
I could not say it better than Prof. Moraru has.
But I am surprised by some of the other reviews (e.g., Prof. Umar's).
I believe that high-quality scientific contributions can be achieved only through excellent teamwork. I also find it ethically important that colleagues who contribute good-quality groundwork on a topic are sure to appear as co-authors. Publication in "top" journals has already been hotly discussed elsewhere. Actually, a ranking factor is demanded only because we need a criterion by which research funds can be allocated. Research aims to serve the welfare of mankind; that is a hard problem to solve.
First of all, forget the RG Score. It has nothing whatsoever to do with scientific ranking or merit. Secondly, I don't think scientists can be ranked by an index. There is something far more important than placing a set of numbers beside the name of each scientist. For me there is a quality which no index can measure; it's called character.
Good consideration, dear @Issam! We are not numbers, we are characters! Chasing numbers sometimes brings out unethical behaviour in scientists!
I agree with dear @Ljubomir that we are characters, as well as with dear @Roland that it is just a social need.
Just sharing: I have a friend who published many ISI-indexed journal papers, and he got a good position. One day the VC asked him to settle a matter related to his expertise; surprisingly, he could not demonstrate that expertise in practice. After that, the VC never again asked, "Have you published in ISI?", but amended it to, "You had better be professional and publish in peer review - that's it."
When I think of a mathematical problem, I do it because it's interesting for me, and perhaps for some other people. I do not see any reason to strive for improvement of a numerical indicator of my level. Most often, it would mean to adapt to what is now considered (not always clear by whom and why) to be important and fashionable in a particular area or even in science.
Of the several indexes mentioned (impact factor, h-index, RG Score, i10-index, citations), all factors other than RG are mere numbers with no LIVE interaction between contribution, contributor, and beneficiary.
Working out the strong elements of the various research indicators, their suitability for various assessment purposes, and a methodology for the assessment of research is facilitated only by RG.
Dear Ismat Beg,
Look at the link.
http://www.realclearscience.com/articles/2013/11/19/a_better_way_to_rank_scientists_108364.html
Regards, Shafagat
Scientists can be assessed by the accurate amount of knowledge they have acquired and how they have assisted in developing more accurate knowledge.
This is a very good question with many possible answers. In addition to the very helpful answers already given, there is a bit more to consider.
For example, young scientists are sometimes great scientists at the very beginning of their careers. Fréchet (doctoral thesis on metric spaces) and Canny (master's thesis on edge detection) come to mind. A measure of greatness in this case would be, for instance, the extent to which a result influences other scientists (e.g., metric spaces in Hausdorff's set theory) or the extent to which a result leads to new applications (e.g., edge detection in object recognition).
@Ismat, this is an excellent question. I would go for a weighted score that takes into account impact factor and citations. This should be done separately for each broad discipline, for it may not be quite appropriate to compare the scholastic performance of scholars in medical science with those in the social sciences.
@Ismat, I am not sure the RG Score qualifies as a primary method for ranking scientists, because it is widely manipulated. If you look carefully at the profiles of individuals with the highest RG Scores, you can clearly see whether they are real or fake! You will very often see a group of people answering and asking questions ranging from paleontology to exobiology, from nanotechnology to biotechnology, and from religion to philosophy. Just to boost their scores, many give multiple responses, even when it makes no sense! So it has to take a back seat. At most, it can be used to check whether someone's scientific credibility is real or not!
For example, you might see that the RG Score is totally off the scale; however, if you look at the same person's h-index or i10-index, it gives a different picture.
Although the h-index can also be manipulated by self-citations, RG Score manipulation is much worse! I would go with the Hirsch index (h-index).
Of course, there are many other indices available; see the enclosed articles (a sketch of one of them, the g-index, follows the links below). There is a great deal of information available on these evaluation metrics.
https://arxiv.org/pdf/physics/0508025.pdf
https://en.wikipedia.org/wiki/H-index
http://users.telenet.be/ronald.rousseau/CSB_Jin_et_al.pdf
http://www.r-bloggers.com/scholar-indices-h-index-and-g-index-in-pubmed-with-rismed/
http://home.agh.edu.pl/~horzyk/papers/P-index.pdf
http://www.lutz-bornmann.de/icons/viewpoints.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.491.2858&rep=rep1&type=pdf
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2673580/pdf/pone.0005429.pdf
http://www.chemconnector.com/2011/04/23/calculating-my-h-index-with-free-available-tools/
https://doclib.uhasselt.be/dspace/bitstream/1942/15925/2/corrD%202.pdf
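As one example from the indices covered in the links above: the g-index is the largest g such that the g most cited papers together have at least g^2 citations, so unlike the h-index it keeps rewarding a few very highly cited papers. A minimal sketch with hypothetical citation counts:

```python
# Minimal sketch of the g-index; citation counts are hypothetical.

def g_index(citations):
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites                 # cumulative citations of top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

cites = [20, 10, 5, 4, 3, 2, 1, 0, 0, 0]
print(g_index(cites))  # 6 (the h-index for the same list is only 4)
```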
@Shukir, what you mentioned doesn't make sense to me: those are all subjective measures. How would you objectively rank scientists? Hypothetically speaking, I may feel that I am very knowledgeable, while you might think that I am worthless. Different people will have different opinions of someone's knowledge and skill set. Academic scientists are measured objectively. Patients can certainly rate their experience with a physician, but those are two different things! Can you see my point?
Each field has its own appropriate assessment method. Classify people according to their area of expertise and then measure them on a weighted average of various scales.
However, the "trans-gelling" scores of RG should be weighted more than passive scales like the impact factor.
There is a near "perfect" answer to this question that was posted in a Science editorial, written by Phillip Abelson, then the journal's editor. It was written in the context of scientists needing to "engage" in the public debate (topic of your choice). It was written under the title, "7-11 rule". A "7-11" is a small public market similar to a kiosk in most countries. You stop by for standard things like a bottle of milk, eggs, bread etc. What Phillip wrote is a great message for all of us: you go into the "7-11" on a weekend, and at least half the people in the store recognize who you are! This is the best ranking you can hope to achieve!!
It's not very easy. The h-index may be one option; the citation count, the RG Score, and publishing in impact-factor journals may be others; but the real question is: what is the impact of the scientist's work?
This is where researchers should work to devise a reliable measure for determining the impact of scientists' research work in a local and/or international context.
I think the number of national and international citations (other than self-citations) is the best measure, no matter where the work is published, together with the uniqueness of the ideas and theories one takes forward. All other measures are relative, can be biased, or ignore one factor or another. To me, publishing in an impact-factor journal without receiving any citations does not make any difference: it means your ideas/research have made no difference despite being published in an impact-factor journal.
Agreed with Shazia Aziz. Publishing in an impact-factor journal is not an indicator of a scientist's reputation. According to research, around half of the papers in impact-factor journals never get cited at all; how, then, to judge the quality of such publications? It is also not guaranteed that a marvellous idea in a research paper gets cited, so how do we rank such an idea? Even papers with fallacious ideas can accumulate large citation counts by being challenged frequently. According to Eugene Garfield (the father of the impact-factor concept), citation count is not an indicator of research quality or a basis for ranking journals. (For more, see my research paper 'Publish in Impact Factor or Perish'.)
@Krishnan - 'However trans-gelling scores of RG should be weighed more than other passive scales like impact factor.'
I totally disagree with your argument; please enumerate your reasons.
Your first statement contradicts your second - 'classify based on their appropriate area of expertise' vs. 'RG should be weighed more'. First you say 'expertise', and then you vouch for RG. A higher RG Score does not mean someone is an expert in a particular field. Surely one can identify fake profiles:
a. The RG Score is meaningless for the many who beef up their score by answering every question on RG (those who beef up their scores never restrict themselves to their domain of specialty).
b. Many post others' work, among other inappropriate practices on RG. Hence, RG can never be considered a 'gold standard'.
c. There is no published evidence.
The list can go on...
You just order the journals in a certain field, assign "points" for each paper published in a given journal, and at the same time assign certain points for each paper that cites you. Then you can order scientists by their total number of points.
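A minimal sketch of this points scheme, with journal tiers and point values that are purely hypothetical:

```python
# Minimal sketch: points per paper from the journal's tier in the field's
# ordering, plus points per citation. All values are hypothetical.

JOURNAL_POINTS = {"tier1": 10, "tier2": 5, "tier3": 2}
CITATION_POINTS = 1

def total_points(papers):
    """papers: list of (journal_tier, citation_count) tuples."""
    return sum(JOURNAL_POINTS[tier] + CITATION_POINTS * cites
               for tier, cites in papers)

papers = [("tier1", 30), ("tier2", 4), ("tier3", 0)]
print(total_points(papers))  # (10+30) + (5+4) + (2+0) = 51
```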
The RG Score is just a childish race, like Facebook likes. I think the real measure of a scientist's worth cannot be taken during his or her lifetime; most often, people's theories are understood by the world only after they are gone. Hence one needs to be honest in one's work and research, irrespective of these races and measures. One's own sense of achievement and work satisfaction is more important.
I am sorry, but when research output is useful to people, we need to compare apples to apples within that time frame; that is more appropriate. Otherwise, all other metrics are useless. In particular, impact-factor scores mean nothing to us in industrial research. Our work is time-dependent: to find new areas that are useful to people, economical, and symbiotic in their growth.
@Krishnan, I am still not convinced by your view that the RG Score is the most appropriate metric for measuring "apples to apples" in an industrial research setting.
In science, every work has its own impact and contribution to the development of the field. We have basic, translational, and clinical research: some do pure basic research, while others do translational, clinical, or applied research. Doing basic research doesn't mean it is less significant; its output often leads to translational or clinical research.
You perceive impact-factor scores as useless but RG Scores as very significant. There are scientists in industrial microbiology and in environmental and industrial engineering; I am not sure people in these and related disciplines share your view. In scientific research, things take their own time to evolve, and not every piece of basic research has an immediate (time-bound) application to society. Some scientific questions come out of real curiosity, reflecting simple human nature; we don't put a clock on them! Something that works in one animal model may not work well in another under a similar research setting. We don't rush things!
One of the responders to this thread, Shazia, feels that "RG Score is just a childish race just like fb likes". I agree with this view! Unfortunately, that is the reality for many RGers; many are obsessed with their RG Scores. If someone looks at the breakdown stats, they show not only the h-index and the h-index without self-citations, but also a clear picture of the person's real scientific contribution and the number of questions they have answered or asked.
Although we don't have to convince each other, it is my opinion that RG Scores will never qualify as an appropriate metric to rank real scientists, at least in the current setup. Mostly, people try to misuse the RG Score!
Yes, Ismat Beg, I am also disturbed by the fact that where a work is published is more important than the work itself. I often ask people on different forums: what if you are published in a non-impact-factor journal but have more than 10 citations? Isn't that better than being published in an impact-factor journal without a single citation? Who will decide? When? I think all these criteria need to be revisited and revised.
Hello Pahlaj, I think there is no single best way to rank scientists, as it would mean ranking the minds and potentials of people. Different people have different expertise, and the research areas are vast. So whether you use the impact factor, h-index, RG Score, i10-index, or citations doesn't really matter. The best measure, I think, is the mind of the scientist, which cannot be ranked. Ranking scientists would mean ranking their potential, and this does not really depend on published articles but on what the person is capable of; thus, "know thyself". There is therefore no one ranking system better suited than another; to me, all are the same, just on different scales. If you develop a re-scaling or conversion system, all will come to the same value.
Only you two can close your eyes and dream; in reality, impact-factor scores are of no use. Only the journals cheat, and this has led to the mushrooming of predatory journals and a rise in paid publications.
I value work, and I am not interested in increasing any number related to the impact factor, which is not relevant to a scientist's contribution. Those ratings are only "you pat my back and I pat yours" on citations, with no concrete work. If a scientist's work is relevant, it should be patented and the rights claimed as IPR, which is a worthy standard of measure, not a mere paper score of impacts.
Dr. Beg,
I appreciate this question of yours.
In my opinion, we should be ranked by the papers which "actually" contributed to the development of a technology, the ones an inventor takes as a source of inspiration.
Scientists should be evaluated based on the quality of the research they do. In many cases, researchers produce original research contributing significantly to knowledge, but the journal in which their research comes out may not be known to the international audience.
In the end, there is no substitute for making your own evaluation by actually reading what they have written. Citation is an aid, but only for particular papers, not for the journals in which they are published. Even then, citation is shaky: one of my most cited papers was part review, part technical advice. Clearly useful, but not proof that I am a giant of scientific originality.
It follows that we should be very reluctant to assess rankings outside the areas where we have some expertise.
Dear Ismat
I think this depends on the context. For example, for an appointment, there should be advisors who know the field and can judge the worth of papers etc. They can make an assessment of the merits of candidates. This is fairly standard for senior/tenured appointments in my own country, but I fear that a mechanical "which journals have they published in" is used as a short cut.
It is really very difficult to compare the contributions of scientists in various areas. Even the quality of two papers in the same journal cannot be judged just on the basis of citations. Normally, review papers have very high citation counts but are technically very poor. Further, impact factors cannot give any good idea of the research papers in their journals: can we think that two research papers in the same journal have the same quality because they share the same impact factor? Of course not. Thus, we need a better way to judge the contribution of research work.
Any computer-based ranking can never do a fair ranking of research works and scientists. Impartial human involvement may do some fair ranking.
Publications in highly rated journals alone should not be the criterion for ranking researchers. I know a few people who have not had their research published in reputed journals but have done marvellous work in mathematics, providing solutions to real problems in management, software development, and the health sector.
I agree that it depends on the area, but every contribution to a journal is interesting! Perhaps we can make a difference in applications!
Dear Ismat,
Of course, that is undeniable. It can never be completely overcome. I think that we have gravely weakened the code of honour, or ethos that kept this within limits. The modern problem is that in efforts to avoid it, we turn to more bureaucratic and number-crunching methods that have their own biases, and, as the Americans would say, can be "gamed". And when this is discovered, even more bureaucratic and nominally quantitative checks are introduced. Those making decisions seek to protect their position, by claiming that they followed the rules, and cannot be blamed. There is a nasty similarity to the old excuse for despicable actions " I was only obeying orders". A prominent feature of modern life is the absence of trust. Unfortunately, a lack of trust in people and systems is often justified.
All human measures are, or can be, manipulated. Only God always judges well; confide in Him.
Evaluation of scientists by other scientists, who are also human, can easily be exploited and misinterpreted. What may be a good paper for me may be the worst paper for others, and if evaluations are averaged, the result is a disaster. The citation index may be a better way to go.
@Tiia: I greatly appreciate your honest opinion ("I have a very high RG score but mostly due to being active in commenting, so I would not use this score to rank scientists: some have been much more influential in publishing, but as they don't comment on this site, their score is much lower than mine"). I totally agree with you! However, I see that many people on ResearchGate are in a similar situation but have a different opinion than you and I.
So, which is the best way to rank scientists? It has to be decided by real scientists. That said, scientists have accepted the h-index, at least for now. As I pointed out in an earlier posting, there are very many ways to cross-check its reliability against different types of scores. Sure, it has limitations, and there might be a better index in the future, but this is what we have right now; let us accept it as a fact. The h-index gives a decent picture of a scientist and his calibre.
Many promotions, funding decisions, etc. are based on scientific productivity, which is reflected in the h-index (and NOT in the RG Score as such!). Even in school we are evaluated on how much we score in each subject. Why?
However, it is ironic that this discussion section is akin to people who are only remotely scientists trying to decide who should be awarded a Nobel Prize. I hope they don't decide it based on RG Score. Many of them do not have a publication history, many have a low h-index, and some might not fit the category of scientist at all! To me, this looks like the old Indian folk tale of the six blind men and the elephant.
Please, let us leave it to the experts - the real scientists themselves! Non-scientific opinions count for nothing here; they are all just personal opinions!
@Mariano, do we really need to bring 'God' into this? Can this question be answered without involving any gods?
Hypothetically, if we wanted to answer every question ever raised on RG, we could certainly invoke God as a defence. However, that is not a rational argument, and it will not lead to any meaningful conclusion. Let's assume I go along with your idea of God. Being a born Hindu, I would assume it might be a Hindu god, Shiva. Would you, or everyone else on RG for that matter, be happy with my assumption? Wouldn't this lead to further meaningless arguments about which god is being referred to, or which god is superior? As irrational as any such claim would be, not everyone from a different religious background would accept that all gods are equal, highly educated scientists included. There are always people ready to defend their god as the only true or supreme god. I have seen such arguments happen many times on RG, which is supposedly a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators.
The term SCIENTIST should be considered an accolade for achievement, not a label for the research profession. Not all PhDs are scientists, and there are innovators without formal education who have invented new things; they too are scientists, comparable with PhDs, when real inventions are tagged with real accomplishments. Mere publications and citations are wrongly mistaken for remarkable work; they are only the foundations of organized research.
@George, as far as I know, most scientists, funding agencies, and selection committees (for academic promotions) follow the h-index along with other criteria (research, administrative, and teaching responsibilities, years of service, endowed professorships, etc.). It is not just scientists who are involved; administrators and scientists together eventually decide who is the best fit for an available position or who ranks highly.