Scientometric evaluation is important for all of us. However, the methodologies for "measuring our scientific publications" are varied, controversial and subjective. How, then, should we address the issue of the impact factor, citation index or h-index?
Hi, please see this link: https://www.researchgate.net/post/How_I_can_find_a_list_of_very_famous_professors_in_my_topic_with_H_index
For an individual researcher, the citation index is very important because it reflects the quality of that individual's work. The impact factor and h-index are secondary, as they reflect the cumulative effort of the contributors to a particular journal.
It depends on what your philosophy of productivity is. I would argue that if you really want to take "moon shots" then you may have an occasional paper that garners hundreds of citations. So in that case the total number of citations is what's important, irrespective of whether the many other papers you publish have lots of citations or not.
In that case the h-index wouldn't be a good measure of impact: for your maximum citation count to become your h-index score, you would need that many papers, each with at least that many citations.
All are important for a researcher, but I believe the citation index is essential. One might have only a single publication with a great many citations, and that count represents the quality of the work, while one's h-index may be only 1. As for the IF, although it is important, it is highly dependent on the number of citations.
I think both the CI and h-index are important, but maybe the h-index gives a better "image". In addition, both indicators depend on when your work was published (the older the work, the better its chances of being cited).
I think it depends on the structure of incentives set at the institution, region or country where the researcher works.
For more information about the h-index and its relation to other indicators, I'd recommend the following paper: "The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level" (Costas and Bordons, 2007).
I firmly believe that the question itself shows that something has gone fundamentally wrong in the scientific world. The old motto "Nobody is reading, as everybody is kept busy writing" seems to have evolved, over the 30 years since I first saw it, into an overwhelming and distorted reality. It seems as if nobody would believe in his or her own assessment unless it is underpinned by a quantitative measure (often of any kind and of any quality).

The attempt to linearize research performance rests on a very simple parallel. It resembles the situation where a new breed of fast universal managers, equipped with general knowledge of economic rules and managerial processes, can take decisions and steer almost any company just by optimizing costs and benefits. It is effective, fast and often successful. Why not bring this effectiveness to research as well? It is so straightforward. However, the approach of squeezing all quality measures into one single number (whether the impact factor, citation count or h-index) and using this figure for sophisticated decisions is fundamentally wrong. I have nothing against scientometrics, which has evolved into its own scientific discipline; I simply plead against oversimplification. Neither the impact factor, nor the citation count, nor the h-index has the uniformity of money (as an expression of value), and none of them can replace careful reading, understanding and evaluation of a research result or of a researcher.

Certainly these numbers can serve as indicators or hints for further quality assessment. They can be valuable especially at the low end: they easily identify cases where low-performing research(ers) without any result or impact can be expected. On the other hand, these figures are often (mis)used as the only (or main) evaluation criterion. The resulting rankings then form the basis for managerial, political and/or financial decisions. In that way, supporting measures become the main guiding or steering principle. It fails, and it leads to opportunistic behavior by research(ers).
One gets what one pays for.
Publications are optimized down to the minimal needed content of information in order to maximize the number of papers. Citation indices are artificially inflated by various citation mafias. Journals adapt their policies not to serve science best, but to artificially increase their own value (citation indices). The integrity of all players is seriously endangered.
It is forgotten that the quality of the science and of the performed research is what should count most. It is forgotten that the scientific publication is not the primary result of a scientific undertaking: it is the new knowledge that moves science (and society) forward. We publish only in order to contribute efficiently to the common pot of knowledge. It is absolutely irrelevant how many articles or pages you wrote, as long as you have pushed things forward.
Do we (researchers) really need such a simplistic view? Is it really important to rank a chemist against a philosopher based on a numerical index (and finance them by a mechanistic approach)? Do we want to support the ignorance of certain decision makers? Many other questions seem to arise.
(My citation index is about 26 citations per paper, and recently I saw that my h-index has approached 31. Is that good enough to support this opinion of mine? I do not know. But I do not care either.)
The question is too broad. It depends on what you are going to use the metric for: to decide what to read? To decide what to cite? To decide where to publish? To evaluate a researcher? I think they are all useful if used in the proper context and if their importance is not overemphasized. If you want just to measure how good a researcher's output is, then the h-index might be good, as it tells us a bit about both quality and quantity.
I agree with the points made by the first person (deleted). Initially, impact factors were used by Eugene Garfield to evaluate journals for indexing in the SCI. Citation indexes were originally designed for information retrieval, to find the linkage between citing and cited articles. Later, they were increasingly used for bibliometrics and other studies involving research evaluation.
Impact factors should be used with care. Many people have warned against improper use of the IF; Dr Garfield and Prof. Balaram (editor of Current Science) have done so in many of their essays. It should be used with care across disciplines. Many papers from developing countries are not well cited even when they are published in high-impact-factor journals. Even with its many discrepancies, the h-index is a slightly better measure that many people use to evaluate an individual's performance. My opinion, though, is that the h-index is not a proper measure.
Guna
Vageeshbabu Hanur, Can you explain what you mean by "citation index" in your question?
The problem with indexes, factors and other numerical figures is that they seem to represent a quantitative measure. Certainly they may have a value and may be used within the restricted area for which they were created. However, as any number can be subjected to mathematical operations, and these operations are accessible to many (including pseudo-policy-makers), it is unavoidable that these numbers (IF, HI, ACI, CIPA, etc.) are very soon exposed to numerical acrobatics, which easily and often escape any reasonable meaning. The original supporting information is raised to the pedestal of universal truth and misused to serve purposes it can hardly suit. Rankings of researchers, teams, institutions and research areas are just the beginning, and they are even broadly supported by scientists themselves: to be ranked "better" than the colleague across the floor is so enticing.
Measuring success in scientific competition is nowadays very difficult, and we seemingly need fast and simple answers to questions such as: How do we measure scientific excellence when real breakthrough discoveries have become very rare? Who is the best? The ego massage is huge, and scientists are often egocentric.
But also: How do we know where outstanding science is done? Whom should we support most? How do we optimize financing in times of restrictions and cuts? How should science be financed overall? These are the types of questions politicians, and often enough even policy-makers, ask. Simple and understandable answers are preferred. And then: why not use the quantified quality measures that scientists themselves have developed, and that are broadly accepted, for political decisions as well? Sophisticated formulas have already been developed to convert the IF, CI and HI into dimensionless but universal measures. These allow cross-disciplinary comparisons of the effectiveness and efficiency of science. It is then simply in the nature of this perverted logic that the "absolute and objective" measures are used for financing. Formula-based financing is so straightforward and "fair".
It becomes very difficult to assess a researcher just based on the impact factor, citations or h-index, at least in this era of OPEN ACCESS publications.
We should think hard and come up with something very special if we have to grade researchers.
This is a simple answer without getting into the mechanics of the metrics. The most important metric is the one your boss tells you that you are measured on.
Hi
This is the classic, ongoing question. I strongly agree with Jan's arguments and with Paul's.
It depends on who is doing the evaluation, how much power he/she has, and whether he/she also knows the subject.
As a background, I would like to refer to the link below. Three years old, still a good summary IMHO.
http://www.nature.com/news/2010/100616/pdf/465864a.pdf
LOZANO: "The index is the n papers that have more than n citations per author. It makes more sense than the raw h index".
I agree with your point; it has some meaning. But time is another parameter: n papers per author with n citations over what period? The average citations per paper per year is also an important parameter. I think a composite index is needed for such an evaluation.
For example, if a person publishes two papers in a high-profile journal in 2005, and each paper has received more than 100 citations every year up to 2013, what is his h-index?
Likewise, if another person has thirteen publications during 2000-2012 and got 10 citations to each paper every year up to 2013, what is his h-index?
How should one evaluate these two people? Who is the more competent person? (A worked computation follows below.)
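For concreteness, here is a minimal sketch in Python of the standard h-index computation applied to these two hypothetical researchers. The exact yearly citation arithmetic is an assumption, but the asymmetry it exposes is not.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Person one: two papers from 2005, ~100 citations per year through 2013 (9 years).
person_one = [100 * 9, 100 * 9]                                   # ~900 citations each
# Person two: thirteen papers from 2000-2012, 10 citations per paper per year through 2013.
person_two = [10 * (2014 - year) for year in range(2000, 2013)]   # 140 down to 20

print(h_index(person_one))  # 2  -- despite ~1800 citations in total
print(h_index(person_two))  # 13 -- with far fewer citations (~1040) in total
```

Under these assumptions, person one's h-index stays stuck at 2 no matter how many citations those two papers attract, while person two reaches 13 with far fewer total citations, which is precisely the problem the question raises.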
Being a little flippant here, but, in the future, might one's RG Score become the most important measure?!
I also agree with Jan's arguments and with Paul's.
It is similar to the arguments provided by some governments which criticize rating firms.
Thanks, colleagues, for the varied and interesting responses, though some tended to be rather philosophical. However, in this materialistic world, an accurate and dependable method of scientific measurement is required both for self-assessment and for comparative assessment by a third party. After perusing all the responses and the connected literature, I am still hazy. What say you?
Dear Vageeshbabu Hanur. It's very clear from this and several other related discussion threads on RG that the answer to the question is extremely dependent on the nationality, field and job situation of the individual researcher. Thus, for one person, the IFs of the journals he/she publishes in may be of extreme importance, for another person it may be the H-index which is crucial.
It simply all boils down to what evaluation criteria are being used for research funding and job applications. In some countries, publishing in an indexed journal with a good IF may give you an automatic financial benefit in terms of funding through the university system; in other countries, there are no such schemes.
In Sweden, the research councils have stopped asking for the IF of journals, but they want to know the citation frequency of each paper you've published. How they use this information, I don't really know. The h-index is of course intimately related to citation frequencies and is, in a way, a summary figure for the impact of your papers, but it is, of course, highly field-specific.
If you have a secure position and secure funding, you don't have to worry about any of these bibliometric indices, but most of us are being evaluated in one way or another all the time, and then it's simply important to know what criteria are being used in the evaluations.
Cheers, Thrandur
PS. Although I still believe that the RG Score is a meaningless number, I just saw, for the first time, a junior scientist include the RG Score in their CV, together with the h-index!
Yes, it's a 'hazy' world indeed. Not sure what the best measure is, but citation metrics are here to stay, and are increasingly competitive. For instance, I note my i10 metric on Google Scholar. I take it that this metric effectively cuts the h-index metric in half. Probably i5 next, and then i1. It could soon be that after your first ever citation 'you're in the club'!!
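A small clarification that may help here: Google Scholar defines the i10-index as the number of publications with at least 10 citations, so it is a fixed citation threshold rather than a halved h-index. A minimal sketch, using a hypothetical citation record:

```python
def i10_index(citations):
    """Google Scholar's i10-index: publications with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

# Hypothetical record; lowering the threshold gives the joked-about i5 or i1.
record = [120, 45, 12, 9, 3, 0]
print(i10_index(record))                  # 3
print(sum(1 for c in record if c >= 1))   # 5, i.e. an "i1" index
```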
While you can calculate scores, in my experience your publication list and the quality of the journals the papers appear in (within your field of work) are the most important. For any applicant, I would always look at the publications and read through some of the papers to judge the quality of the research for myself!
Agreed, Mark, especially when it comes to your CV for job applications. On the other hand, the three (calibre of your outputs, journal quality and metrics) often go hand in hand, unless you are a neophyte researcher. Like you, Mark, I would count the last as a firm last, but, if available, I would still factor it in.
In this very competitive world, only a thorough screening, by highly experienced peers, of the work an individual has demonstrated can be the solution, rather than the impact factor or h-index.
I agree with Dean Whitehead. I would rank the calibre of an individual's outputs, journal quality, and metrics in that order. And yes, generally they do go hand in hand. But I would give priority to 1 and 2 on this list.
Many people have criticised the method of evaluating an individual's performance by impact factors or the h-index. Recently, Professor Dipankar Das Sarma of the Indian Institute of Science (IISc) commented on the h-index that "it is not only unjustifiable, but dangerous to measure research by any quantitative terms." .....
"These measures like H-Index, number of citations are nice to look at, but cannot measure the true impact of research, as number of citations tends to be more in certain fields. A research paper needs to be read and analyzed on the basis the idea underlying the research, and the innovation," he said... http://www.indianexpress.com/news/-science-in-india-experiencing-extraordinary-growth-/1189381/0
We may consider citation metrics, but they should be used carefully, with relevance to the context. Although impact factors and the h-index are based on citation metrics, they have lots of problems when applied to measuring an individual's performance. http://occamstypewriter.org/scurry/2012/08/13/sick-of-impact-factors/
We hope a better alternative to the traditional system (h-index, impact factors, etc.) will be discovered in the near future.
What that alternative method of evaluating researchers' performance should concentrate on will be a big point to argue. I am very interested.
It is difficult to come up with a new, perfect system. H-indices are better than relying on journal impact factors alone, in that they are based on the researcher's own output and not on a journal average. But h-indices have severe limitations as well:
1) You do not know which part of the h-index reflects career length and which reflects paper impact (proxied by citations). One could argue that you should divide the h-index by the number of years elapsed since the first publication (a sketch of this normalization follows the list).
2) The h-index cannot fall. Someone producing nothing and receiving no citations for years can still maintain a very high h-index.
3) The h-index depends on the database used. You always have to ask: "Is that your WoS h-index, your Scopus h-index or your Google Scholar h-index?" Actually, the most valid h-index is a composite based on all of these, calculated by hand.
4) The Google Scholar h-index can easily be gamed and manipulated; see the paper by Lopez-Cozar et al.: http://arxiv.org/abs/1212.0638
5) The h-index does not correct for the number of co-authors, adding to the problem of comparability.
6) Authors with a small number of extremely special, highly cited papers will have a very low h-index. In his early career, Einstein would never have been hired if it were down to his h-index!
7) There are severe inconsistencies in the calculation, rendering h-index development sometimes very counter-intuitive (see http://onlinelibrary.wiley.com/doi/10.1002/asi.21678/full)
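As a sketch of the normalization suggested in point 1, often called Hirsch's m-quotient (the h-index divided by career length in years), with hypothetical inputs:

```python
from datetime import date

def m_quotient(h, first_pub_year, current_year=None):
    """Hirsch's m-quotient: h-index divided by career length in years.
    Two researchers with equal h but different career lengths get
    very different m values, addressing limitation 1 above."""
    if current_year is None:
        current_year = date.today().year
    career_years = max(current_year - first_pub_year + 1, 1)
    return h / career_years

print(m_quotient(20, first_pub_year=1990, current_year=2013))  # ~0.83 (24-year career)
print(m_quotient(20, first_pub_year=2005, current_year=2013))  # ~2.22 (9-year career)
```

Note that this addresses only point 1; the database dependence (point 3) and gaming (point 4) still require checking the underlying citation source.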
@Ramana
You are right; the alternative method of evaluation is a big point to argue. Altmetrics is also emerging as a new method of evaluation. http://altmetrics.org/manifesto/
http://blog.library.si.edu/2012/08/alternative-methods-of-research-evaluation/#.UnowbXCzTzs
http://www.researchtrends.com/issue28-may-2012/usage-an-alternative-way-to-evaluate-research/
Researchers should examine tools like these and find a suitable one. There is a lot more to discuss on this issue.
If you want to read up on impact factor issues have a look at my post on why we should move away from our over-reliance on impact factors: http://im2punt0.wordpress.com/2013/11/03/nine-reasons-why-impact-factors-fail-and-using-them-may-harm-science/
Jeroen, I read the article. It makes great sense. I consider such a trend a motivation to be more creative while staying ethical, professional and conscientious, so as to produce output beneficial to the community at large.
There's an ongoing discussion about metrics in Polish science too :) For two years now, researchers have been encouraged by legal regulations to achieve more points for their work. That's when the problem with scoring appeared. We started to learn which metric to use, and it has been stated that researchers in biology, chemistry, engineering and related fields use the IF, because they publish more in the best journals in the world and therefore get more citations. The other part of science, to which my own work belongs, is the humanities, and since we get far fewer citations we tend to use the h-index. Both metrics are valid when applying for any grants and project funds.
Agata, I enjoyed reading your response as it's interesting to see how these parameters are used by research councils in different countries in order to "improve" research output.
However, what I don't understand in your answer (or in the Polish system) is why the IF is seen as important for the natural sciences, while the h-index is seen as important for the humanities. This is because the two metrics (as explained in many of the answers above) are so fundamentally different. I don't understand why it would be more appropriate to use the h-index in disciplines where citation frequency is low.
In my mind, whether the h-index or the IF is used to evaluate research output, it can only be used for comparisons within a discipline, not between disciplines.
The division has been made, I guess, temporarily, for the reason I gave: humanities researchers in Poland do not have as many possibilities as natural science researchers when it comes to publishing in journals with an IF. So I think this is a division made for now, which should lead to equalizing the two groups.
There is no JCR edition for the humanities. Even if there were, IF figures would be even more useless than in STM and the social sciences, because journal articles make up only a small part of humanities publications. SJR and SNIP are available for a few thousand humanities journals, if you really need metrics.
I have come up with a newer method of assessing a researcher:
http://pubs.sciepub.com/bb/1/2/2/
Thanks, Bosman, for the detailed view on the IF issues. Very interesting. I have forwarded it to many of our researchers. Ramana, you have come up with a newer method of assessment, incorporated some of my points (discussed earlier on the h-index and IF) and reported them in a good way. But again, all these metrics depend on citations! How do you evaluate mission-based research, for example the launch of Mangalyaan by ISRO, and compare it with other countries' missions? Normally they do not publish any papers. That is another important topic to be discussed.
Thanks, Gunasekaran. Citation counting, in my view as well, is indeed a limited measure in research evaluation, for three reasons. First: citation counts are decontextualized. We do not take into account why, how and where something was cited; experiments with measuring citation sentiment are still in their infancy. Second: publication use goes way beyond citation. Article-level metrics and altmetrics go some way toward capturing this, and have the advantage of being available immediately from the moment of publication or archiving/sharing. Third: there are many valuable research products that are not credited through citations. Software and code (e.g. shared through GitHub) spring to mind, but also devices and patents, consultancy and, last but not least, research-enhanced education. These points are high on the list of motivations of a group of eminent scholars in the Netherlands, called Science in Transition, that calls for a fairer, more open and societally accountable science culture. They had their first conference just this week, gaining a lot of momentum and press. Perhaps the majority of scholars here support these ideas, but most are not sure where to start on the road towards a more Open Science.
Bosman: Very interesting points. I visited the Science in Transition website to learn the outcome of the meeting, but much of the content is not in English. I would be grateful if you could provide the recommendations in English. I am a bit disappointed by your statement: "Perhaps the majority of scholars here support these ideas, but most are not sure where to start on the road towards a more Open Science."
@ Gunasekaran The Science in Transition position paper is available in English here: http://www.scienceintransition.nl/wp-content/uploads/2013/10/Science-in-Transition-Position-Paper-final.pdf For researchers, many of these choices constitute a kind of prisoner's dilemma: if someone moves and others don't, they risk their career. All stakeholders must acknowledge that things have to change, to give researchers the confidence to change too.
1) Do the best science you could possibly do,
2) write, every time, the best manuscript you could possibly write,
3) select the journal based on its suitability for your topic and readership.
This way, the impact of your work will have the best chance of reaching its maximum.
To me nothing matters except how much benefit it is going to provide to human beings. Do we have a gauge for that?
Nowadays everything counts, depending on the judging authorities: citations, impact factor/points, h-index, quality papers, research projects, etc., and many more. A single parameter is still not available.
@Samuel, thanks for the good advice. I will remember your 3 points.
@Dr Syed, our research must benefit humans directly and indirectly. It is really difficult to measure.
@Kuldeep, I agree entirely that everything counts. Thanks.
I have written a short article on assessing researchers based on membership of journal editorial boards. In this article, I discuss the use of the impact factor and h-index in assessing researchers. Please read and comment: http://www.currentscience.ac.in/Volumes/106/09/1173.pdf
For a complete researcher, all of these, viz. the impact factor, citation index and h-index, are important.
Yes, Sandip, I agree. But there are counters to that. For instance, what if you have important work but don't have any of those? To add to that, many companies are competing to produce new impact indices; where does it stop, and do you have to possess them all, or some, or perhaps none? I'm sure there are scholars out there with nondescript factors, indices, etc.
@Ruchi Tiwari:
The quality of a paper is measured by the citations it has received. The h-index, journal impact factors, etc. are based on citation metrics. Citation is the root of all these indices.
I believe that no single METRIC can evaluate a researcher's value. It must be a circle of multiple measured aspects, which together will probably capture years of hard work and scientific output.
First of all, one must understand that the impact factor is meant only to measure journals. Despite concerns about the h-index, it is used to evaluate researchers. It is not good; it has a lot of limitations, and in fact it does not work for young researchers. Citations to publications are widely used, but that alone is not enough. For example, not all of a researcher's works get cited equally: some papers get fewer citations and some more, but sometimes the papers with fewer citations have greater impact on society or industry. So one cannot measure a researcher on a single indicator. As Bosman said, altmetrics could be an alternative, but it should be in the open domain, with a search feature.
@ All RG Members
I thank Dr. Ali Gazni, really an expert,
https://www.researchgate.net/profile/Ali_Gazni
who sent me the following mail:
Dear Ouerfelli,
The following link is to a video, in English, investigating the effect of a journal's impact factor on a paper's citations. It could help you, as a researcher and/or a research manager, to choose the right journals for your publications and to learn the extent to which the quality of a journal influences a paper's citations. Other learning objectives of these videos are to know in which group of journals Nobel prize winners publish their papers, and to learn about the publishing strategies of top globally ranked universities.
The training video in English: https://www.youtube.com/watch?v=NYAobljYHW4&feature=youtu.be
Best,
Ali Gazni
Assistant Professor of Library and Information Science, Regional Information Center for Science and Technology.
I think the main goal of the impact factor is to measure the quality of a journal; then come citations, which show the importance of individual works.
Is there any formula for converting the h5-index to the IF? If there is, please provide it.
I came across this article:
Comparing the Google Scholar h-index with the ISI Journal Impact Factor
https://harzing.com/publications/white-papers/google-scholar-h-index-versus-isi-journal-impact-factor
You may find your answer there.
Regards
I think all these parameters are fake.
Think about Einstein and how he made his discoveries and innovations. These are life-changing.
Consider Mendel's laws of genetics or Darwin's theory of natural selection: these all have profound effects on our daily lives.
Rather than citations, the journal's impact factor or the h-index, a thorough assessment should be based on what impact one specific paper has made in improving social benefits for the common man in the region, the country and the world.
If citations, the journal's impact factor and the h-index result only in grabbing grants of millions of dollars, for an individual's satisfaction and personal development until retirement, then it is a useless system.
For example, in biotechnology and the life sciences, most of the research from the 1950s through the 1990s (e.g., from the discovery of the DNA model to DNA sequencing, rDNA technology and PCR) was pathbreaking. Those discoveries really made drastic changes in the field of biotechnology.
Parameters should be based on, e.g., how one single paper contributed to lowering the disease burden, or at least to decisions on the management of patients with a specific disease.
Otherwise, all of the above-mentioned are relative terms, not absolute ones.
Citations are for the individual article and highlight the impact of the research work. The h-index covers the author's total research output and citations. The impact factor is for the journal, not the article. A researcher can learn which is the best work he has done by viewing the citations of his articles, and can use the h-index to gauge his overall contribution. An author alone knows his best and favorite contribution, and can evaluate it by these parameters.
After all, what is the procedure for applying to ResearchGate, for an ongoing journal, to get metrics and an IF?
Although this Question was posted several years ago, it still resonates.
The Impact Factor and Citation Index apply to journals.
The h-index can be calculated for both journals and individual academics.