No, the most important (but difficult to measure) metric is how many people do or make something useful as a result of the paper being published! RG's download statistics seem to me to be a better proxy for that important metric. For further criticism, see http://en.wikipedia.org/wiki/H-index#Criticism
However, the h-index appears to be the favoured metric for bean-counters!
See http://en.wikipedia.org/wiki/H-index#Alternatives_and_modifications
It's certainly intuitively convenient, but the most damning evidence against it is the observation that if Einstein had died right after publishing relativity, his lifetime H-index would be in the single digits.
And the h-index doesn't consider journal quality. Google and Scopus don't track the same journals, so they produce different h-indices.
Pulling back, the problem is how do you define academic success? High-impact papers? Pulling in grants? Effective teaching?
I think the question is highly relevant, as the H-index is becoming the dominant metric used in the evaluation of academic applications for positions, promotions and grants.
I think my esteemed colleagues Ian and Mitchell provide interesting, but rather flippant comments. Using bibliometrics is simply not "bean counting", and the Einstein argument against the H-index is a frequently cited anecdote rather than damning evidence. Everybody who understands how the H-index is calculated knows that if you only publish one paper, it doesn't matter how often it is cited; your H-index will remain 1.
As Mitchell points out, "journal quality" doesn't enter into the H-index equation, which I think is a good thing. The recently published DORA declaration (see http://am.ascb.org/dora/) on "Putting science into the assessment of research" is highly critical of the use of journal impact factors in the evaluation of scientists or their publications. This criticism, put forward by the editors of some of the top scientific journals in the world, should be taken quite seriously.
The H-index focuses on only two parameters, the number of papers and how frequently they are cited, and nobody can deny that the "calculation" of the index is elegant in its simplicity.
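To make that concrete, here is a minimal sketch in Python of how the index could be computed (the citation counts are invented for illustration); it also shows why a single paper, however heavily cited, still gives an h-index of 1:

# h-index: the largest h such that at least h papers each have at least h citations.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([50, 18, 6, 5, 1]))  # 4: four papers each have at least 4 citations
print(h_index([10000]))            # 1: a single paper, no matter how often it is cited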
I find the H-index valuable in my work as an evaluator, but it's important to recognize that there are several limitations to its use. In the end, it's only a metric ;-)
@Björn: You say "flippant" as if it's a bad thing. That aside, the core problem is whether people who see the H-index understand its limitations and its context. It's like the Dow Jones Industrial Average or Gross Domestic Product -- only a limited view of a broader problem. As Andrew Lang once wisecracked, there are those who use statistics for support rather than for illumination.
Ian, when you talk about "how many people do or make something useful as a result of the paper" and "how well a researcher is benefiting humankind", you're obviously talking about applied research. That leaves the big question of how to gauge the impact of fundamental research, which may be neither "useful" nor "benefiting" except in the indirect way of answering curiosity-driven questions and thus adding to our collective human knowledge. One pragmatic way of measuring this is to see how many other scientists find this new information important enough to cite it in their papers. That's one reason why we look at citation frequencies as a measure of the impact of basic research.
The short answer is probably not. Multi-parametric indices are to be preferred, and there is no substitute for reading an academic's articles and coming to your own conclusions. If you're interested, Mark, some of these issues are discussed in an article I co-authored back in 2008 called the "Siege of Science" (available from my profile).
Dear Michael Taylor, thanks for the paper, which is extremely interesting and informative. There are several discussion threads on RG, some initiated by me, on the costs of publishing, the pros and cons of OA publishing, the DORA declaration, etc., and your paper is highly illuminating in these areas. I'll continue to read it. Cheers
This is really a subjective question. The importance of the H-index varies globally. The most important concern with the use of any metric is that those using it have a good understanding of its mechanics and limitations.
@Taylor: Thanks for pointing us to your paper "Siege of Science". Very interesting. It was published in 2008. Has there been any change since then, for better or worse?
Sadly, many artists', musicians' and authors' worth was not recognised during their lifetimes. Even in research, sometimes a person's contribution is only recognised late in life.
The drawback of such a calculation is that even though you may have many hugely cited papers, only those papers cited at least h times count towards your h-index, and for your h-index to improve, the citations of your other papers have to increase. Another issue with the h-index is that it takes a long time for a researcher's h-index to rise by even a single point. It does not consider how many years researchers have been working, or whether they are still active or inactive. It also ignores the excess citations of your most highly cited papers.
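A small illustration of that last point, with invented numbers: two citation profiles that differ enormously at the top end still yield the same h-index, because citations beyond the h-th paper are not counted.

def h_index(citations):
    # Count how many papers, ranked by citations, are cited at least as often as their rank.
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

profile_a = [12, 9, 7, 5, 2, 1]     # invented citation counts
profile_b = [400, 300, 7, 5, 2, 1]  # same, but with two very highly cited papers
print(h_index(profile_a), h_index(profile_b))  # both give 4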
Please go through this paper:
Bornmann L, Daniel HD. The state of h index research. EMBO Rep 2009; 10: 2–6.
In response to Ramana's comment about "drawbacks", I might add that the Swedish research councils have for some time requested applicants to include the number of citations for each paper in their publication list. Thus, it's easy for the evaluators to spot highly cited papers, even if the author only has a modest H-index.
It's true, as Ramana points out, that "it takes a long time for a researcher's h-index to rise by even a single point", but I don't see that as a drawback; it's simply the nature of the H-index to increase slowly through one's academic career. A rule of thumb in my area of research is that it's pretty good if your H-index increases by 1 for each year of active research. I started as a PhD student in 1979, and my H-index is now 39, so I'm a bit ahead of that rule, still :-)
Evaluation factors are required to simplify the job of the evaluator and to provide a common basis for comparison among peers. Otherwise, none of the evaluation factors under discussion are satisfactory. For example, evaluators found Einstein fit for a Nobel Prize for the photoelectric effect; later on, it was the theory of relativity for which Einstein came to be known. A simple paper on a hot research topic is instantly cited umpteen times, compared to an offbeat paper published ahead of its time. A board of evaluators must therefore read, understand and then thrash out in discussion the real worth of a publication. There is no substitute for hard work.
I have tried to get the h-index of a scientist using Scopus and Web of Science, and the interesting thing was that the h-index from Scopus differs from the one from WoS. I have tried it with Google Scholar, which gives yet another value. There are also ways and means to improve a scientist's h-index artificially. We need some tools to measure the impact of a researcher, but at the same time these tools alone will not judge the merit or quality of a scientist's research. It is my humble opinion.
It is perhaps the most important single measure for authors at the moment, for the reasons that Björn explains. It's also worth noting that Google Scholar Metrics uses a five-year variant of the h-index to rank journals -- http://scholar.google.co.uk/citations?view_op=top_venues -- though this certainly couldn't (currently) be considered as important as the Impact Factor. It might arguably be better; it avoids the negotiability of the IF denominator in terms of what JCR considers editorial or review, etc., and it doesn't discourage journals from publishing rigorous but unglamorous work, which IF arguably does.
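A rough sketch of that five-year variant, assuming it is simply the ordinary h-index restricted to a journal's articles published in the last five complete years (the article list and year window below are invented):

def h5_index(articles, first_year, last_year):
    # Keep citation counts for articles published within the window,
    # then compute the ordinary h-index over that subset.
    recent = sorted((c for year, c in articles if first_year <= year <= last_year), reverse=True)
    return sum(1 for i, c in enumerate(recent, start=1) if c >= i)

articles = [(2008, 120), (2009, 40), (2010, 15), (2011, 9), (2012, 3), (2013, 1)]
print(h5_index(articles, 2008, 2012))  # 4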
I think the h-index is often misused as a measure of researcher quality, whereas it's more of a cumulative measure of researcher impact; as Björn says, it tends to go up by one or two points every year, so researchers who have been publishing for many years inevitably have a much higher h than even the most stellar new researchers.
Personally, I would like to see more use made of authors' average citations per paper per year as a measure of their current quality. Unlike h, this could fall for a senior academic who starts resting on their laurels, and it might help to discourage people from salami-slicing their research into as many decent papers as they feel they can get away with, rather than writing fewer great papers. I would see this as complementary to h rather than a replacement, though. It would be somewhat similar in spirit to something proposed alongside the original h-index, m = h / (years since first paper), but I think it is more easily interpretable. Also, if m did become of major importance, it might motivate PhD students not to publish anything in their first few years, but rather to release all their work in a flood at the end of the PhD, which would be detrimental to the research community.
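For what it's worth, here is a small sketch of both quantities in Python, assuming the proposed average is simply mean citations per paper divided by years of activity (that exact form, and all the numbers, are my own illustration rather than anything defined above):

def m_quotient(h, years_since_first_paper):
    # Hirsch's m: the h-index divided by career length in years.
    return h / years_since_first_paper

def avg_citations_per_paper_per_year(citations, years_active):
    # Assumed form of the proposed measure: mean citations per paper, per year of activity.
    return (sum(citations) / len(citations)) / years_active

citations = [50, 18, 6, 5, 1]                            # invented citation counts
print(m_quotient(4, 10))                                 # 0.4
print(avg_citations_per_paper_per_year(citations, 10))   # 1.6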
All scientists wish to measure things quantitatively. The H-index is one way to measure the value of publications, despite its obvious limitations. Such measurements could be changed, but I doubt scientists would wish to limit or block quantitative measures of productivity or publication impact.
@Mark. Though there are certain limitations, such as the coverage period, the h-index is still considered the most important metric of publication performance nowadays. Other indices, such as the g-index, are also coming up, but as of today the h-index is the popular one.
The h-index is an author-level metric that attempts to measure both the productivity and citation impact of the publications of a scientist or scholar.
Perhaps the time has come when we need to add more objective parameters for performance evaluation, rather than relying on citation metrics alone. Overall research contribution in terms of research publications, books, articles, etc. should be added as one of the parameters. The principal author should always get the major credit for a research publication, rather than distributing credit proportionately among all the co-authors, who most of the time contribute only their name to the publication. Editorial contributions, review contributions, research guidance, patents, etc. can also be added, rather than confining the evaluation to citation metrics alone.
The h-index is one of the important research metrics, but at the same time it is not perfect in itself. Over time, hundreds of variants of the h-index have been proposed to overcome its limitations. It was first proposed as an author evaluation index, but it is now being applied to other research entities such as journals and institutions. It is preferable to use a set of indicators rather than a single indicator to assess research performance at any level.