In a different question thread the general consensus is coming around to the h-index, rather than the impact factor of the journals in which an academic publishes, as a better measure of an individual academic's performance. But what should the h-index be for a full, associate or assistant professor? And is it different for each discipline? A lot has been published on the h-indices of computer scientists, but what about biomedicine and psychology academics, let alone social scientists?
I noticed that after posting this question there was a lot of interest but no answers. I subsequently realised it is not an easy question to answer. I turned to Google Scholar which, if an academic registers with it, calculates the academic's total citations, h-index and i10-index. I know this is very much a UK viewpoint, but I plotted the total citations of all academic staff who registered on Google Scholar and identified themselves as staff at a small number of UK universities (see figure). Clearly, if you are part of the UK Ivy League (the UK Russell Group) universities you are likely to be highly cited (and vice versa). Interestingly, a major distinguishing characteristic was how many academics had their published work cited more than 1000 times. Conversely, I know staff appointed as full professors at lesser UK universities who have published very little and would never register on Google Scholar. Indeed, a former university I worked for recently made a Deputy Dean for research a full professor based on his one listed journal publication, which has not been cited! Nevertheless, accepting the extremes at both ends and examining only non-Russell-Group UK universities, of 88 academics identifying themselves as full professors in Google Scholar, the mean h-index was 24 and the median 20. The top 25% of professors had an h-index of 30 or greater. There is discipline variation:
Computer Science: h-index mean 23, median 21
Psychology: h-index mean 26, median 19
Nursing: h-index mean 20, median 18
Social Sciences: h-index mean 19, median 16
Physics/Maths: h-index mean 23, median 22
Biomedicine: h-index mean 28, median 25
Thus, in partial answer to my own question: academics aspiring to be a professor should aim to have their work cited 1000 times and/or to reach the average h-index for their discipline.
Interesting question, thanks. Commenting on your own answer, I think it is much too demanding to say that someone striving towards a professorship needs to show the same bibliometrics as the average professor in that discipline. The average professor is probably about 55 years old and has been active as a professor for 10-15 years. So to ask newcomers to the professorial rank to have already achieved what the "average" professor has achieved is pretty harsh.
If you reflect on the h-index data you provided, remember that in order to obtain an h-index of 25 you probably have to have published at least 50 papers. For most established researchers, the publications cited fewer times than the h-index value are much more numerous than those cited more. Partly, this is simply a function of time: the higher the h-index, the longer it takes a given paper to be cited enough times to raise the index by one.
I think the relevant question is not what the average h within the discipline is, but what the h of newly appointed professors has been.
In Sweden, about 15 years ago, the academic career system was changed so that associate professors (lektors) could apply for promotion to full professor. To deal with the deluge of applications, national committees were appointed. Within my field of biology/physiology, the unofficial "quantity" goal was to have at least 30 published, peer-reviewed papers to come into question for a promotion. My guess is that this would translate into an h-index of 12-15 or higher.
This rhymes with the rule of thumb in biology that your H index should be at least as high as the number of years you've spent in research.
Dear Bjorn - Many thanks for your response. I have pondered it for many months, because I believe the title Professor in the UK (full professor) has become devalued, as we do NOT have national committees to regulate its award.
I think it should also be a UK Government protected title with a minimum standard set by national committees. I applaud the Swedish approach.
The problem is that every UK university has its own criteria for awarding the title, and it is not regulated. This has been coupled with a major shift in who runs UK universities, and it is NOT always scholars. It is now a common view amongst the post-92 new UK universities that you do not have to have published scholarly work, or even gained a doctorate, to be a Vice Chancellor or hold any managerial Dean position - you have to be a "manager". However, to manage academics you want to present an impression (however false) of academic authority. Thus, these senior managers are awarded the title Professor but have no scholarly standing. These senior managers then regulate the award of the title of full professor to others with equal disregard for scholarly validity, and thus standards drop. The result is a massive UK spectrum of disparity from some new universities to the older "Russell Group" universities.
I have an h-index of 28 but strongly suspect I would not be appointed as a full professor at Oxford. I respect that, as I believe high standards are needed as a target to which academic scholarship aspires in order to grow and achieve. I do not believe senior managers should be awarded the title if they are not scholars.
It is on this note that I now comment on your rule of thumb. Yes, the general rule is that you should have an h-index at least as high as the number of years you have spent in research, but does that make you a full professor? You may have been publishing work for 12-15 years and doing OK as an academic, but a full professorship should be awarded because your continued work has had impact beyond the time you spent in research. Surely a professor's h-index should be higher than that expected for merely doing OK?
Wow, this is an unbelievably high standard: "I have an h-index of 28 but strongly suspect I would not be appointed as a full professor at Oxford"! OK, I see why - your field. But still, 28 is quite high and, in my view, you deserve the promotion. Good luck.
I thank Wei-Jun Cai for his kind comments, but I think my point is best illustrated by some analysis of Google Scholar: if you type in an academic institution's email suffix, it will list all registered scholars who identify themselves as genuine faculty of that university. The list is ranked in order of highest total citations. I examined the h-indices of the top 50 cited Google-registered scholars of Oxford University (ox.ac.uk) - a UK and world first-division university - and compared them with the top 50 cited Google Scholars of Middlesex University (mdx.ac.uk) - a UK university somewhat in the bottom divisions, but one that claims to be a "research" university.
Of the Oxford University top 50 cited Google Scholars, the highest h-index was 146, the lowest 28, and the median 64. Among Middlesex University's top 50 cited Google Scholars, the highest h-index was 28, the lowest 3, and the median 9.
I have done this for several other universities, and it clearly delineates the scholarly division to which a university should be classified, and the scholarly expectation of a professor at these respective institutions.
Can you tell me how you searched for other people's h-indices (as in your note above)? I was preparing a promotion recommendation letter for a colleague and searched "What is a good H index for a Professor", which brought me to this site.
You click on each individual, and at the top of the page above their papers there is a very good table listing total citations, h-index and i10-index. Here is mine:
http://scholar.google.co.uk/citations?user=mm-f1qYAAAAJ&hl=en
Hi Ray and thanks both for an insightful analysis of the H-index and a smart method for obtaining H-index data for scientists from a certain university/department!
Hi Ray, Bjorn et al,
I ran into your discussion looking for any data on comparative h-indices across biology. Of course, even better would be the m-index (h divided by the number of years since first publication), which would normalize the age-related problem. Google Scholar could produce this, along with a list of various disciplines, to present a comparative scholarly achievement repository for different disciplines and sub-disciplines. After 35 years in science I have managed to get myself up to an h-index of 58 and nearly 10,000 citations. However, my m-factor is only 1.657. Younger investigators could handily beat the m-factor but will need to wait many years to exceed the h-factor.
Hi Bernd. Whether we calculate the h-index or the m-index as you suggest, we're still just playing with numbers. A postdoc who's lucky enough to be at a large, productive lab with a highly inclusive authorship policy may quickly gain in bibliometric indices, while another postdoc working just as hard and producing just as good science may have significantly lower bibliometric numbers.
Therefore I think that an m-index value can be a fairly misleading number for junior scientists, and for evaluating job/grant applications by young researchers, it's very important to look at their career beyond the numbers.
For more established scientists like you and me, measures such as the number of publications and citations, and thus the h-index, give broad indications of our activity level over the long term (and indirectly, probably the funding level), and I find them useful when evaluating applications for full professorships.
It is getting tougher to assess junior investigators solely on h- or m-index criteria with the emergence of "omics" in the biomedical world. Hands down, these papers are important and highly cited, contributing greatly to the h-index and total citation count of a junior fellow. However, it is hard to assign an exact contribution to an author in a team of 20 or so people. Promotion policy is yet another can of worms. It is a multi-factorial problem, with the individual h-index being one of many factors.
Dear Vukica - there are various ways, but I suggest you start by registering with Google Scholar. The system will list all publications with your name. You must go through the list, confirm which are yours, and exclude those that are not (I have found a few huge h-index individuals with common surnames who apparently publish in marine botany, human surgical pathology and market economics!). Anyway, it will then automatically calculate your h- and i10-indices.
Hi Vukica,
I use Web of Science (webofknowledge.com), in which you choose "author search"; as Ray pointed out, you then need to delete any papers published by your namesakes, and then create a "citation report".
There are also special bibliometric programs available, such as Harzing's "Publish or Perish", which give you the h-index and all kinds of bibliometric information such as journal impact factors, etc.
Moving slightly sideways, I think Google Scholar should be used as a transparent tool to rate how scholarly a university is. In the UK we have university rating/league tables by the national papers based on hidden (weighted) formulae. A university is, after all, the sum of its academics and not simply a collection of buildings.
I wrote up a monograph on this issue six months ago, which I attach.
Article link: How Scholarly is your University? Ask Google Scholar
Yes, we use the m-index as well, as it corrects for age. However, we use a modified m-index with a multiplier of 100 to get easier numbers. The same principle applies: an m-index below 100 is not impressive, 100-200 is respectable, and >300 is outstanding.
Does the m-index take into account that older investigators have had longer for their publications to accumulate citations?
Dear Chris - Yes, an m-index does take into account the length of an investigator's research activity, as you indicate, since it equals the h-index divided by the years since first publication. So an m-index of 1 is the same for an individual who has an h-index of 1 after 1 year and for another who has an h-index of 20 after 20 years of publishing. The trouble is that I would still choose the h-index of 20 for a faculty position... so as a single blunt academic instrument it is also very flawed and, like the h-index, has to be taken in context.
After consideration, I basically come back to using the h-index, in the correct context, as the measure of academic achievement.
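For anyone who wants to play with these numbers, here is a minimal sketch of the m-index arithmetic described above; the function name and the example figures are illustrative only:

```python
# m-index (Hirsch's m quotient): h-index divided by years since first
# publication. A hypothetical helper for illustration.

def m_index(h: int, years_since_first_publication: int) -> float:
    return h / years_since_first_publication

# The two cases from the post above: both give m = 1.0, even though an
# h-index of 20 reflects far more accumulated impact than an h-index of 1.
print(m_index(h=1, years_since_first_publication=1))    # 1.0
print(m_index(h=20, years_since_first_publication=20))  # 1.0

# The "modified m-index" mentioned earlier multiplies by 100: below 100 is
# not impressive, 100-200 is respectable, above 300 is outstanding.
print(100 * m_index(h=20, years_since_first_publication=20))  # 100.0
```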
Very good discussion here. One approach that takes into account both the age of the researcher and the quality of production is to divide the h-index by the total number of papers published; let's call it the hip-index (h-index divided by productivity). This would emphasize the importance of producing highly cited papers rather than a huge number of lower-impact papers, which would nevertheless add to the h-index because, if you wait long enough, their citations might also contribute. Using the hip-index you could really find the scientists who care about the quality and not the quantity of their work.
Dear Jukka,
The trouble with this increasing complexity and correction of the h-index is that there are always qualifications and contextual considerations. For example, what is included or excluded in the category of publication: review articles versus original research? What about a hypothesis paper - not original research but highly influential? Google Scholar captures books, book chapters and published conference proceedings abstracts! Social scientists value books above papers. Should we not participate in conferences for fear of damaging our academic reputation score? To be adopted widely, a lot of editing of the available data would be needed to produce the hip-index, in my opinion.
I suggest contextual evaluation of an h-index is probably the nearest we will get to an acceptable evaluation scoring system for all fields of academia.
My model would include only the publications that are cited at all: h-index divided by the number of cited publications. This would limit the formula very much to peer-reviewed publications, no matter whether they are original, review or hypothesis papers. Of course there would be limitations to this model as well, but it would be very hard for me to imagine that someone would prefer not to have their paper cited because it would affect the formula.
The reason I like the idea of the hip-index is that, in Finland at least, you can get amazingly high in the academic ranks just by publishing a lot, without any real need to make sure anybody truly cares what you have published. The hip-index would clearly favour high-profile young scientists.
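To make the quality-versus-quantity point concrete, here is a tiny sketch of the hip-index arithmetic under the definitions above; the function name and all numbers are made up for illustration:

```python
# hip-index: h-index divided by a paper count (total papers in the first
# variant proposed above, papers cited at least once in the second).

def hip_index(h: int, n_papers: int) -> float:
    return h / n_papers

# A quality-focused researcher: h = 20 from only 40 papers.
print(hip_index(20, 40))   # 0.5
# A high-volume researcher with the same h = 20 from 200 papers.
print(hip_index(20, 200))  # 0.1
```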
No index is completely fair, and the best one is the one that makes us look better. That said, the h-index is simple and quite fair. Dividing h by the years since the first publication is also a simple and good way to correct the index for age (the m-index). Another nice discussion about the h-index is here: http://blogs.plos.org/biologue/2012/10/19/why-i-love-the-h-index/
I think Alex Bateman likes the H-index because it is kind to him :-)
Having said that, I am inclined to favour it, mainly because it is so difficult to game. Once it rises beyond a certain level, merely publishing is insufficient to raise it: you need to add a paper with the potential for more citations than your current h-index, and you may well not know whether that is the case for a number of years.
I would propose another metric, the X-index, which is the number of papers an author has where the number of citations received exceeds the number of papers it cites. It deals well with differing citation rates between fields and, well, a good paper should surely inspire more future work than it depended on. It also penalises gratuitous self-citation. X-indices are pathetically low for most researchers, even ones with lots of papers :-)
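A sketch of how such an X-index might be computed, assuming per-paper counts of citations received and references cited are available; the data structure and function name are hypothetical:

```python
# X-index: the number of an author's papers whose received citations exceed
# the number of references the paper itself cites.

def x_index(papers: list[tuple[int, int]]) -> int:
    """papers is a list of (citations_received, references_cited) pairs."""
    return sum(1 for received, cited in papers if received > cited)

# Three illustrative papers: only the first inspired more work than it
# depended on, so the X-index is 1.
print(x_index([(120, 40), (15, 35), (2, 50)]))  # 1
```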
In promotion cases, the candidate should at least achieve the lowest h-index among the department members who already hold that promotion rank.
At the University of Manchester, I was not promoted recently even though my Google Scholar h-index is 50. I am 42 years old, have only published about 120 papers, and have been doing research for only n = 16 years, including the PhD. Also, the median number of co-authors on my top 50 papers is 3 (that is, 3 total authors). My h/n gives me an m-value of >3, which according to Hirsch corresponds to "truly unique individuals" and has elsewhere been labelled "stellar". I think that rating would be quite a long way over the top for me, but nevertheless this all does show that thinking a high h guarantees anything at a UK Russell Group university is misguided. My h-index is higher than every member of the School Promotions Committee that looked at my case, and I put the h-index in context for them, and still I was not given a thumbs up. You may think this must mean I am useless at teaching, but I can assure you it does not! By the way, I wasn't going for Prof ... just Reader.
Joshua, what reasoning did they give you? I am not trying to make it sound like I think it is your fault or anything. Just trying to figure out how anyone could be competitive if you got turned down.
I think a lot of us are following this thread so we might be more competitive in the job market, so if we could learn anything from your experience that would be helpful. Like you said most of us think that a high H index makes us safe. If that is not the case, what else should we be doing?
Dear J. X Seaton
The reasoning was a one-liner stating that I need more "teaching innovation" and/or more "service and leadership" evidence, which really means management. However, what is more galling for me personally is that even my research was not rated sufficiently highly (!). Well, of course this comes down to one thing: money. I haven't obtained enough funding in their view. The ironic thing is that getting more funding is my main reason for applying for Reader: it would make it easier to access UK companies and to apply for and obtain research council funding as a Principal Investigator with the title Reader or Professor, since those are the prestigious ranks in UK academia. (The pay rise associated with a promotion to Reader is less than £1000 per annum after tax.)
However, take heart! I am fairly sure that in other parts of the world, and even in the UK outside some members of the Russell Group, the h-index currency is better understood and appreciated; in fact I know it is. On Jens Palsberg's website he lists 870 computer scientists from around the world with h >= 40, and from a sampling I've done, a conservative estimate is that over 95% of those working in a university are full professors. I am sure it would be much the same in other academic subjects.
For the UK, funding is always going to follow the fads and whims of a few Research Council (or EU) committees, and while I understand it is important for the economics of the university to obtain a share of this funding, so is having highly cited authors. I am hopeful Manchester will soon start taking proper account of the h-index (which, by the way, they *require*) in all applications for promotion.
NB: Correction from my earlier post: One member of the School promotions committee that evaluated my case does have a higher h-index than me - he is an FREng, an OBE etc (coincidence, you think?). The other six members of the committee do not, however, by some considerable margin.
I think once a full professor has reached an h of 40 they can be considered to be leaders in their field and on track to being world-class. For someone aspiring to be a professor an h = 25 is reasonable.
This handbook from the London School of Economics has a chapter and data for the UK: http://blogs.lse.ac.uk/impactofsocialsciences/the-handbook/chapter-3-key-measures-of-academic-influence
Has anyone done a similar plot for Universities in the United States? What is a good h factor for those individuals?
Has anyone tried to compare the h-indices of full-time clinical physicians and bio-scientists in academic institutions?
What about chemistry? What is the average h-index of a chemistry professor?
I think the value of the h-index varies between countries. In Bangladeshi universities, an h-index of 20-25 is indeed very high, and rare too. This is because the concept of the h-index is not so familiar here. If calculated, I guess a newly selected professor in the field of biology may have an h-index of 10 at maximum.
Regarding Amod Gupta's question about clinicians versus bio-scientists: in academic clinical institutions (e.g. teaching hospitals) most clinicians are engaged in some research. It is my observation that many do as well, if not better, in terms of h-index compared with "bio-scientists", who typically have more time for research. There are many factors that come into play. First, clinicians tend to publish more clinical/applied papers than the laboratory-based research output of basic scientists. Many papers in clinical areas are generated by residents and clinical fellows (more than graduate student output), and new clinical findings and "evidence" are constantly being updated. I have also observed that many clinical groups work in teams and publish as a group, compared with the more individual authorship in lab-based science. As with all new knowledge, citation rates depend on the "popularity" of a particular area. Many areas of clinical science have a much larger audience than most areas of basic science, and with that, citation rates are skewed.
Hello Robert,
I agree it is a multi-factorial problem. However, I find "more time for research" hardly true for most "bio-scientists" on a main campus, i.e. those not employed by a Faculty of Medicine. Teaching is quite a burden that eats up a fair share of research time. The team approach is common in medicine, no doubts here.
Yes Sergei, I agree that university professors "proper" have teaching (and, increasingly, administrative) responsibilities. I was thinking that when I wrote previously, but forgot to add it in.
Yeah... admin roles seem to grow exponentially. They have also started to carry larger weight in tenure and promotion decision-making, as Dr. Knowles noted. This drives the academic system towards proliferating bureaucracy.
The h-index is not everything regarding productivity. One may have a high h-index but very few first-authored or senior-authored publications. The h-index in Google Scholar tends to be higher than Web of Science calculations, and Scopus does it differently as well. So when using the h-index for recognition or promotion, a much broader perspective is required.
Here is a female perspective. When we have small children, we cannot easily travel to conferences. As a result we do not build as strong a network, and may not get as many citations, even with quality work. For this reason I suggest the h-index may be biased against early-career female researchers. Let's find a metric that depends more on quality than reputation. On a separate note, I agree with Dr. Ghosh above: don't let one metric define you.
All metrics are inadequate in one way or another. Also, however much we may decry being judged on a single metric, even multiple metrics get collapsed down to a single monotonically ordered metric (aka the ranking) in the heads of whatever appointments committee as they take binary decisions on our fates (hire/fire, promote/demote). Perhaps it's less hellish to depend on a flawed open metric than the more traditional Kafkaesque "judgment" of an appointments panel.
I disagree somewhat with David Huen's conclusion that committees are better off with a "flawed open metric" than none at all. The problem is that some committees and most "triage" filtering before such committees are beginning to use ONLY these flawed metrics. H index is a boon for administrators and managers as a sorting tool. Even for true peer-review groups these metrics have become a route of least resistance to make quick decisions as opposed to having a longer discussion on the real value or impact of a candidate's research in the real world.
My "Hirsch a" on PoP is 5.3 and my 68.4 AWCRpA on PoP yields a 5.225 L-index. Is it a coincidence that Hirsch a and Belikov L are so similar in this particular case?
Thanks, I ran some other people I know through it and it is indeed a coincidence. Actually, my overall h-index is 18, but the "a" portion is only 5.3 on the Publish or Perish website, if you scroll down to the other details of the score, including Hirsch's m-index.
Recently some publishers have started giving "views" of articles as well as citations. As expected, recent papers show far more views than citations. Interestingly, papers attracting more views do not necessarily have more citations. Which one should carry more credit: the one fetching more eyeballs or the one with more citations?
Very important question, indeed! The number of citations (and the h-index) depends largely on the discipline! Even within biology the index may be very different in plant biology and in animal biology, because of differences in the numbers of journals and papers in each area. Moreover, in some cases the h-index does not reflect a person's real contribution: in some recent papers one can see up to 12 co-authors, BUT the contribution of each is not equal, while all get the same h-index credit.
Also, review and methodological papers may in many cases have higher citation levels compared with original research.
Altogether these points mean the h-index is quite complicated and does not necessarily reflect a person's real "value" in science. But there is no better integral measure so far...
It is an interesting topic. One of the disadvantages of the h-index is its low sensitivity to increases in publications and citations. For example, if you have 100 papers and an h-index of 10, that means you have 10 papers each cited at least 10 times, but the other 90 papers, each cited fewer than 10 times, are not counted at all. So using the mock h-index (hm) for researcher evaluation is better than the h-index: hm is more sensitive to changes in publications and citations. It is calculated from the equation hm = (C^2 / P)^(1/3), where C = total citations and P = total publications.
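A quick sketch of that formula in code, with illustrative numbers, showing how hm responds to citations on papers that the plain h-index ignores:

```python
# Mock h-index from the formula above: hm = (C^2 / P)^(1/3), where C is
# total citations and P is total publications.

def mock_h_index(total_citations: int, total_publications: int) -> float:
    return (total_citations ** 2 / total_publications) ** (1 / 3)

# The example above: 100 papers, h = 10. Suppose all papers together carry
# 700 citations; hm "sees" every one of them.
print(mock_h_index(700, 100))   # ~17.0
# Adding 300 citations spread over the 90 uncounted papers raises hm,
# while the h-index could stay frozen at 10.
print(mock_h_index(1000, 100))  # ~21.5
```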
I have a reasonable H index but would not like this to be how I am evaluated, nor do I think it really matters all that much (although in Europe I see a move towards metrics but mainly based on the hope that this will eliminate subjective evaluations). The question that we often ask in faculty hires is has this person moved the field vertically? Are they speakers at important meetings? And in the US, do they have funding? Metrics are but a small part of the equation.
I don't have either a PhD or a faculty appointment. My h-index is 49 and my i10-index is 95. What this means is that it is not easy to answer this question. Also, it seems many researchers are confused about the h-index vs. the i10-index. The h-index is an author-level metric that attempts to measure both the productivity and the citation impact of a scientist's or scholar's publications, while the i10-index refers to the number of publications with at least 10 citations.
http://pandi-perumal.blogspot.com
https://scholar.google.co.in/citations?user=-D6JXfgAAAAJ&hl=en&oi=ao
I simply want to know: what are the chief factors affecting the h-index?
Are they the number of publications, the time factor, or the impact factor...?
We need a precise answer, as it seems somewhat confusing.
Dear Dr Abdel-Hamid
I have just copied this example from a blog by Alan Marnett
The index is a measure of the number of highly impactful papers a scientist has published. The larger the number of important papers, the higher the h-index, regardless of where the work was published.
To calculate it, only two pieces of information are required: the total number of papers published (Np) and the number of citations (Nc) for each paper.
The h-index is defined as the largest number h such that h of a researcher's publications have at least h citations each (see Figure 1).
So we can ask ourselves, "Have I published one paper that's been cited at least once?" If so, we've got an h-index of one and we can move on to the next question: "Have I published two papers that have each been cited at least twice?" If so, our score is 2, and we can continue this line of questioning until we can't answer "yes" anymore. Luckily, there's no need to block off your weekend trying to figure out your stats; the computer's got you covered (see below).
[Figure 1: Variation of the h-index between two researchers with the same number of publications.]
If you go to this link you can read his full article, which is very good:
http://www.benchfly.com/blog/h-index-what-it-is-and-how-to-find-yours/
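For anyone who wants to check the counting procedure Marnett describes, here is a minimal sketch; the citation counts are illustrative, and the i10-index (defined earlier in this thread as the number of papers with at least 10 citations) falls out of the same data:

```python
# h-index: sort papers by citations and keep asking "do h papers have at
# least h citations each?" until the answer is no.

def h_index(citations_per_paper: list[int]) -> int:
    ranked = sorted(citations_per_paper, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # rank papers each have at least rank citations
        else:
            break
    return h

def i10_index(citations_per_paper: list[int]) -> int:
    return sum(1 for c in citations_per_paper if c >= 10)

papers = [48, 33, 20, 15, 9, 6, 6, 3, 1, 0]  # illustrative citation counts
print(h_index(papers))    # 6: six papers have at least 6 citations each
print(i10_index(papers))  # 4
```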
GRIND-FREE won't work. It does not reward the intermediate authors who supply key services. So the EM specialist doesn't get a look-in? The pathologist who screened your slides? The proteomics specialist who handled that side of the study? The bioinformatician who processed your data? Good luck trying to get any collaborative work done.
Also, the notion that a paper has to be cited in the last 12 months to have any value cannot be correct. I don't think Mendel's paper has been regularly cited for some time. Is it worthless? The Watson-Crick paper? Even Kary Mullis's PCR paper isn't cited much now although his technique is used thousands of times each hour.
H-index and its derivatives are about as good as it gets. The trouble with trying to find ungameable metrics is that much of the citation-metrics business is a game anyway.
Dear Aleksey,
Thank you for your sarcastic reply. It befits you. I, for one, freely acknowledge that the significance of my work does not equal that of Mendel, Watson or Mullis. But my inferiority to that august group does not, in itself, make GRIND-FREE good.
On the contrary, you are entirely dependent on the pathologist screening your slides to make the right judgment calls for you if you are analysing the corresponding samples. And that is true of the other roles I listed, because any major collaboration depends on a wide range of skills the first/last authors cannot supply themselves. The correctness of a paper relies heavily on many contributors whom you cannot readily check yourself, but whose errors of judgement could thoroughly destroy the validity of the joint work. If the correctness relies on their judgment, they are authors as much as you are. A sufficiently important research problem requires more skills than any one individual can supply. I think you may have an interesting experience when you eventually endeavour to set up collaborations under your schema.
As for those papers I mentioned, their current citation rate is way under the true significance of their work.
Ultimately, adoption or rejection will tell us whether others think GRIND-FREE is any good. Perhaps that will be answered by the GRIND-FREE score of your GRIND-FREE paper in due time.
Let me note here an important aspect of the issue: the actual value of one's h-index is highly dependent both on the data source being used to compute it (Google Scholar, Web of Science, Scopus, MathSciNet, etc.) AND on the specific way in which the search of a given database is performed. So when people say "my h-index is 20", they should put a huge qualifier next to that statement.
Taking myself, someone working in pure math, as a guinea pig here, let's try to see what my h-index is.
According to Google Scholar, it is 23 (I do have a Google Scholar profile page where the h-index, as computed by Google Scholar, is helpfully displayed). That's probably an overestimate. If I go to Web of Science and perform an "author search" for my name, get the results, and then click on the "create citation report" link, I get a page with the citation data, which gives me an h-index of 15.
Wow, that's quite a bit smaller than in Google Scholar. What happened? Well, for one thing, some of my higher-cited publications were published in conference proceedings and do not have bibliographical entries in WoS. So the "author search" does not find those publications at all, and the citation report then misses citations to them. There are more serious issues with WoS: apparently it is EXTREMELY sensitive to how exactly a reference is formatted in someone's paper when that reference gets cited. Very often, if the reference is not formatted just the way WoS wants, it will be missed by the "author search".
BUT, if I perform a "cited reference search" in WoS for my name, suddenly I get a whole lot more citations. However, the results of that search are produced by WoS in such a way that you can't generate a citation report from them automatically. One can attempt to analyze them manually, which is very difficult because of the above problem: if WoS does not recognize a citation as referring to a particular bibliographical WoS entry, it creates a "ghost" temporary entry that is only displayed in the results of that cited-reference search. So valid citations to a given paper get displayed split into several parcels, and one has to add them together manually. That is extremely labor-intensive, and my best approximation is that doing it in WoS would probably give me an h-index of around 18.
That's probably still not the correct number since many citations occur in books and those are usually not recorded in WoS.
Scopus currently gives me an h-index of 14 (most likely because there are quite a few math journals that it does not index).
MathSciNet has a very partial feature for tracking citations. Some of the math journals (but not books and not conference proceedings) that it indexes are designated as Reference List Journals. For these, every MathSciNet entry for a paper published in such a journal also includes the bibliography list, and MathSciNet tries to provide cross-reference links from those bibliography items to the MathSciNet entries for the corresponding papers. However, even for the Reference List Journals the number of years of bibliography data included varies widely, and the number of Reference List Journals is something like 1/3 of the total number of journals they index. So the data is rather partial, but it does allow one to compute a lower bound for one's h-index. For me, MathSciNet currently gives the value 16. So what's my h-index? I don't know. Perhaps it's 17.57438 today and 18.00125 tomorrow. What about yours?
My main point is that people should not treat h-index as a well-defined function whose domain is the set of researchers and whose co-domain is the set of natural numbers. At best, h-index is a multi-valued function, or a relation.
Thank you, Ilya, for this reasonable and informative response. We have gone metrics-mad about classifying research impact, despite the fact that a citation of one's work by another does not necessarily imply very much. The citing author could have simply popped a reference in to justify his/her developing argument without having digested the cited author's work; the citation could relate to a criticism or flaw in the cited work (negative impact?); the author could be padding their own work by over-citing the work of others in order to display a deep knowledge of the subject area. I write all of this not as a cynic but as a current and past reviewer of medical and psychological journal submissions for over 25 years. There have been many instances where an author citing someone else's work has not sufficiently understood the work, or has even misunderstood the results, yet the work gets cited. The entire endeavour of citation metrics should be approached cautiously, as you indicated, and be only one component of how an academic is evaluated.
Ilya Kapovich - Actually I don't trust the WoS figures anymore because I track my citations on a regular basis through that website as well as in Scopus and Google Scholar. In the last few months, WoS has actually been stripping some of my highly cited papers of citations - several dozen citations just vanished for no reason. I have no idea why this happened. My thinking is that WoS must be either changing their search parameters, or they are just not keeping up with the burgeoning literature and citations. So my thinking is, don't take the WoS figures too seriously and instead rely on Google Scholar or Scopus, which appear to be much more robust and reliable. Plus, it has been reported that authors are usually under-cited by organizations such as WoS, Scopus, and others, because their papers are not always recognized by search parameters, for the reasons you described.
Thank you very much, Paige Lacy, for your comment. A few more words about WoS. A peculiar feature of WoS is that when you change the "Web of Science Core Collection" tab selection on their main search page to "All databases" and perform a cited-reference search, you get a lot more hits. Under this search WoS pulls results from many foreign publications not indexed in the main WoS catalogue. But when in "All databases", the "Author search" function disappears and is no longer available. One can still perform a "Basic search", specify the author's name, change the "Topic" tab to the "Author" tab, hit "search", and get a list of publications by the given author with citations next to them taken from all databases. Clicking on the "Create citation report" link then produces a citation report with a larger number of citations, and possibly a higher h-index than before, when the "Web of Science Core Collection" option was chosen.
About WoS vs Scopus: I suspect that one of the main reasons Scopus searches are more accurate is that Scopus allows you to use the author's first name in a search, while WoS does not. In fact, I very much wonder how people with relatively common last names sift through WoS search results trying to separate themselves from others with the same last name and initials. In this regard the WoS search interface is very antiquated.
Getting away from the technicalities, a more serious problem here is that all this citation/analytics data is beginning to be used or is already being used by the university administrations for promotion, tenure, hiring and evaluation purposes.
A part of the issue with that is that the university administrators often hire/use external firms to compute various citation/analytics data in these situations, and we have no idea which data sources these firms use and how they perform the searches of those data sources. I very much suspect that they usually use the WoS which is a very unstable and in many instances unreliable data source.
Our department recently underwent an external review. In most respects it was a positive and valuable experience. But, as a part of this process, the university administration hired an external company to compile various citation/analytics data for our faculty, and we had to prepare a response to the report that the company produced. A big problem was that we had no idea which data source they used and how exactly they produced the numbers they gave us. That's just a small example of what's going on...
Is there any published breakdown by research area, age, and data source (Scopus, WoS, etc.)? Obviously, a 35-year-old taxonomist has a different h-value than a 55-year-old neuroscientist. Someone suggested that the h-value should increase by 1 per year in research for a faculty member, but I haven't seen any data supporting this.
Dear Peter Uetz.
That is a very interesting question, to which I would also like an answer. You can't easily research this from Google Scholar, Scopus or WoS. However, Google Scholar gives the citations a researcher receives per year. ResearchGate also has a year-by-year plot of total citations. Although it does not pick up as many citations as Google Scholar, it gives you an idea of an individual's increase in citations per year. This is a surrogate for the year-on-year change in h-index, and does go some way towards indicating growth.
I'm reminded of Goodhart's Law:
When a measure becomes a target, it ceases to be a good measure.
https://en.wikipedia.org/wiki/Goodhart%27s_law
By the way, speaking of H-index, here is an interesting article in the Notices of AMS by my colleague here at UIUC Alex Yong:
http://www.ams.org/notices/201409/rnoti-p1040.pdf
He works in algebraic combinatorics, and performed an analysis of the MathSciNet citation data for a number of mathematicians using a rather basic probabilistic model on partitions of integers. He came up with a rule-of-thumb estimate that, at least for mathematicians, gave a pretty good prediction of the value of the h-index from the total number N of citations. There are various caveats involved, and you should read the article to see what they are, but the rule-of-thumb prediction formula for the h-index of mathematicians is h ≈ 0.54 N^(1/2), where N is the total number of citations.
I don't know how well this estimate works in other sciences and with other data sets. But the main point of the article is that if the h-index can in fact be estimated with a sufficient degree of accuracy and confidence from the total number of citations, that would undermine the main rationale for introducing the h-index in the first place: namely, that the h-index measures the impact of a scientist's work more accurately than the total number of citations.
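A one-line sketch of Yong's rule of thumb with an illustrative citation count; how well the 0.54 coefficient transfers outside mathematics is, as noted above, an open question:

```python
# Yong's rule of thumb for mathematicians: h is roughly 0.54 * sqrt(N),
# where N is the total number of citations.
import math

def estimated_h(total_citations: int) -> float:
    return 0.54 * math.sqrt(total_citations)

# Example: a mathematician with 2,000 total citations.
print(round(estimated_h(2000)))  # 24
```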
Those scientists who have a "healthy" H-index will find favour in this measure. Those who are in niche areas (with narrow appeal) and have correspondingly lower H-index will not be so satisfied. Most thinking people can appreciate the reasons for variance in this measure, and that it does not correlate with the real value of knowledge translation. The alarming problem is that administrators and other pencil pushers believe that such factors are true and absolute calibrators. I also agree with Goodhart's Law!
My biggest concerns as a specialist in pharmacology about the h-index are as follows: many professors have a low h-index, signifying that their rank is not commensurate with their academic achievements, and professors whose achievements are commensurate with their ranks are not many. Should that mean such achievers are the ideal professors of their respective disciplines? Therefore, I am of the opinion that a minimum h-index standard should be set for an Associate Professor (h-index: 15) and a Professor (h-index: 20) for global relevance and competitiveness. After all, researchers from the developed world are by far not comparable in terms of academic achievements with researchers from the underdeveloped world. To bridge the gap, an equal standard must be set for global relevance and competitiveness. This would definitely remove the inferiority complex from the minds of professors from the underdeveloped world, because many professors of pharmacology with a high h-index are from the western part of the world.
A quick thought, a high h-index does not seem possible unless the published papers have been well cited. Overall, I should think that the total number of citations (and the h-factor does reflect this, though not 1:1) is what's important. As a colleague of mine once exclaimed '... there is no point publishing a paper that is never cited'. There is much truth in that...
I love the principle of the h-index for the following reasons: 1. I have never seen a Nobel laureate with an h-index of less than 20 and fewer than 50 publications, signifying that the achievements of researchers are measured both qualitatively and quantitatively over a relatively long period of time (to be on the safe side, at least 20 years). 2. I have also observed that a researcher has to publish in an area that is flourishing, else he will stagnate.
I am not an MD or a Ph.D. I am not a professor or for that matter in any academic ranking. My h-index is 50. I wish someone could offer me a professor position. ha ha ha..
https://scholar.google.com/citations?user=-D6JXfgAAAAJ&hl=en
http://pandi-perumal.blogspot.com
For you to be appointed a professor of a particular discipline at a university, you must have a PhD or MD in most cases. However, the fact that your h-index is 47 suggests that you are a vibrant researcher. What is your discipline? What have you been working on? What are the benefits derived from making reference to your work? We need answers to all these questions. But be reminded that a professorial rank is the highest position on the academic ladder. Therefore, a professor is one who professes in his field, teaches established knowledge and discovers new knowledge. Hence an ideal professor is a reputable scholar. You will find many of them across the globe, especially in the western part of the world. But if a university acknowledges your contribution to a very large extent, you can be awarded (not appointed) "honorary professor". Must you become a professor? Why can't you strive to become a Nobel laureate instead? Indeed, I would love to become a Nobel laureate much more than a professor. After all, so many professors are very far from becoming Nobel laureates. May the wisdom of Alfred Nobel, the man who introduced the Nobel Prize, continue to prosper! So you should have decided what you wanted to become right from the onset.
@Saganuwan, thank you for your response! I wasn't serious when I said I wanted a professorship. What I wanted to point out to everyone was that the numbers are not a big deal. For a good researcher, the numbers shouldn't be a limiting factor. If someone like me could achieve this level, shouldn't they expect to achieve more? Everyone should aim high! That was my point!
Ahah! Those false gods of citation index, impact factor, H factor, and, indeed, ‘numerical indicators’ in general. You can be the best in the world in an area in which only a few groups operate, you can submit papers to journals that have one of their eyes on the ‘journal impact factor’, and be rejected without reference to reviewers because the (sometimes non-academic) editor judges it will get few citations, and you will be the recipient of mealy-mouthed statements such as ‘sorry, but no general and little specialised interest’. Scientific quality may not be significant for such editors. Alternatively you can be a mediocre contributor in a bandwagon area, and the editor will be seduced by thoughts induced by how populous the field is, and thence the potential for citation and a consequent increase in the ‘journal impact factor’, irrespective of scientific novelty or quality, particularly if blanket rather than specifically relevant citations are fashionable in that field, so an inadvertent ‘reciprocal citation’ circus may arise, giving work in particular areas artificially large factors. And the current bandwagon area may not endure the test of time. Incidentally, in the UK, promotions to professorial status are not necessarily dependent on research output.
Hi Ray - are the data on median h-index you mentioned published in a professional journal? I would love a copy if you have it on hand. Thanks!
Excellent analysis, Ray, and this article has withstood the test of time on ResearchGate, which is interesting in the digital age when so many things just vanish off the web. Anybody notice that their citations and papers suddenly disappear from Thomson Reuters' Web of Science database? I lost about 50 citations from one of my papers overnight when some technical glitch occurred at their end. Tried emailing them for help and they were just as mystified as I was. It concerns me greatly that we are completely dependent on digitally recorded archives that could be lost because of unknown errors in files and servers. These errors impact citations for many authors. Good thing we have Scopus and Google Scholar as backups (as well as ResearchGate).
Paige -- yes, something like this happened to me on Google Scholar: suddenly a few new citations popped up on my Google Scholar profile, increasing my h-index, and then disappeared again mysteriously, just to return again after a few weeks. Looks like they are still working on their algorithm.
PS: here is an interesting blog post on the origin of Google Scholar: https://backchannel.com/the-gentleman-who-made-scholar-d71289d9a82d#.2e61113sq
Peter, Thanks for your link and I enjoyed reading it although it was somewhat sobering to see that Google Scholar did not have an easy beginning. Yes, Google Scholar is still a work in progress and hopefully it will have a more solid footing in the coming years.
Which h-index score do you trust?
ResearchGate gives me an 11 (I am a practitioner, not an academic) and Google Scholar gives me a 17. Of course I would prefer the higher number! ;-)
Which source of h-index do people normally use?
Actually I hardly ever get asked to provide my h-index by my institution or by granting agencies, which is a bit disappointing as I think it is the most acceptable measure of performance available to us today. Usually my institution asks for the journal impact factor of papers that I have published, which is a weak indicator of performance. If I were to go with an h-index, I would provide Google Scholar because it is the most comprehensive database that covers almost all citations and usually doesn't lose them (except in Peter's case, above). Other good databases for looking up h-index are Scopus (which Mendeley uses) and ResearchGate, in that order. The h-index values agree well between Scopus and ResearchGate in my case, but they undercite relative to Google Scholar in my experience. There have been reports that Web of Science and Scopus do not comprehensively cover all citations and tend to err on the side of underciting authors. Unfortunately I can't remember where I saw those reports but if anyone is interested, I could go digging.
I am going up for promotion and tenure and I have to decide if I should use Google Scholar, ResearcherID, Scopus, or ORCID for my h-index. I do observe that ResearchGate does not seem to have all the citations of my publications. Any suggestions?
Natraj, I understand your concern; however, I am of the opinion that you should use Google Scholar, which is used by scientists all over the world. Experience has shown that Google Scholar is the best, though it also has problems. Read Saganuwan's publication on Google Scholar for a better understanding of what it is.
Hi Saganuwan, Thanks! Will certainly have a look at your publications on this.
Vukica (and others here): You can also look up your own h-index on ResearchGate. Interestingly, with different sources you get slightly different answers. About five years ago I did a cross-comparison of Web of Science and Google Scholar, looking up my citation count for each paper and making the simple-minded assumption that the larger of the two citation counts for a paper was correct, on the grounds that a source might miss papers but would rarely make them up. In the end, my counted h-index was several points larger than what either source claimed separately, meaning that in retirement I am probably a 38 or so, and still climbing.
Readers may find of interest Syd Redner's paper, J. Stat. Mech. (2010) L03005, which analyzes a group of physicists. First, the citation count gives something like the same answer as the h-index, namely h ≈ 0.5 c^(1/2) to reasonable accuracy. Redner also looked at the scatter. People with c/(4h^2) < 1 have a larger h than expected from their citation count; for these people the citation counts of their top few papers are nearly equal. People with c/(4h^2) > 1 have a smaller h than expected from their citation count; these people typically have a few heavily cited papers.
Note that Web of Science, Google Scholar, and ResearchGate each calculate an h-index, and their answers are not always the same. When I did the check on my own h-index, I found my brother's one research paper (he is a practicing MD, not a scientist) and I found my late father's one paper, which appeared in Science while he was an undergraduate (he never mentioned it during his lifetime). As a general statement, the rule of thumb h ≈ 0.5 c^(1/2) relating h to citation count is an approximation. Looking only at my own scores, it is off by 5 or so.
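Here is a small sketch of Redner's c/(4h^2) diagnostic described above, with made-up profiles; note that a ratio of exactly 1 corresponds to the h ≈ 0.5 c^(1/2) rule of thumb:

```python
# Redner's diagnostic: compare total citations c against 4*h^2. Below 1, the
# h-index is higher than the citation count predicts (evenly cited top
# papers); above 1, a few heavily cited papers dominate.

def redner_ratio(total_citations: int, h: int) -> float:
    return total_citations / (4 * h ** 2)

# Two illustrative profiles with the same h = 20:
print(redner_ratio(1200, 20))  # 0.75: top papers cited nearly equally
print(redner_ratio(3000, 20))  # 1.875: a few blockbusters carry the count
```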
George Phillies
Worcester Polytechnic Institute
As a suggestion, generate a list of some hundreds of people in your field, from top-line researchers to those with almost no publications, and find their citation counts and h numbers -- this may be a fine task to assign to students: if you cite people when you write up your work, look up c and h for them and tabulate, along with the year of their final degree, and work out half of the answer to your question. The answer is even publishable.
Eungi Kim
Keimyung University
Academics aspiring to be a professor should aim to meet or exceed the average score on whatever evaluation metric the researcher's institution uses. Thank God my university doesn't use the h-index or citation counts for research evaluation. At the current rate, I would fail badly and never become a full professor.
George Phillies
Worcester Polytechnic Institute
Note that Web of Science now calculates h-indices for you. The calculation is imprecise in that it may find extra papers, and for persons named Smith or Kim there may be challenges. As a result, someone desperate for a paper and short of ideas could go through, e.g., the ca. 185 PhD-granting American chemistry departments, look up h-indices for everyone, and then sort by specialty.
And someone competent and creative--such a person exists and is publishing--could calculate all these indices (there are a lot of them) and say interesting things about them.
Robert V Harrison
The Hospital for Sick Children, and University of Toronto
A warning to Dr Eungi Kim: whilst your institution currently does not use the h-index etc., think 10-20 years down the line. When they do start using some citation metric or other (and they will), you will have a hard time catching up! Believe me, there are many "seasoned" professors who started their careers with no heed to performance metrics and cannot reboot.
George Phillies
Worcester Polytechnic Institute
A good metric will work reasonably well and not be readily gameable. When you are quite junior, turning out lots of very good papers is critical.
Eungi Kim
Keimyung University
Dr. Harrison raises a good point. Down the line, institutions in South Korea might adopt author-level performance metrics such as the h-index. To give you some context, most Korean academics publish their work in domestic journals, and domestic publications in general don't get cited a lot. So I doubt that citation-sensitive metrics will be used by Korean institutions any time soon. If Korean institutions do adopt any type of citation-based metric for research evaluation, Korean academics will find ways to raise their scores, and the research performance evaluation system will be worse than before in the case of South Korea. Nevertheless, I should strive for a higher h-index score, because no one can predict the future.