To me the h-index doesn't appear to be a great measure of scholarly success. Let's say an author has only three publications, of which one is cited 21 times, the second 20 times and the third only 2 times. The resulting h-index would be 20, right? (Because 20 is higher than 2 and 2 is fewer than 3.) It doesn't seem right to me that an author with only 3 publications would get such a high h-index. And if the third publication's citations increased to 3, the h-index would still remain at 20, right? (Because the h-index never decreases.) If that is true, then the best strategy for a high h-index would be to try and score one good publication at the beginning of one's career and not publish for a while, so this publication can accumulate citations, which would determine the h-index. I'm just trying to get my head around this and it all seems strange to me. Happy to hear your thoughts.
No Hans, that is wrong!
If an author has three publications of which one is cited 21 times, the second 20 times and the third only 2 times, the resulting h-index will be 2!
Thanks, Antonio Pirisi, that would actually make more sense to me. But 2 is smaller than 3. Shouldn't the number of citations be higher than or equal to the position in the listed publications?
The previous answer comes from Wikipedia!!!
https://en.wikipedia.org/wiki/H-index
Limitations of the H-Index
Although having a single number that measures scientific performance is attractive, the h-index is only a rough indicator of scientific performance and should only be considered as such. Hirsch himself writes:
“Obviously a single number can never give more than a rough approximation to an individual’s multifaceted profile, and many other factors should be considered in combination in evaluating an individual. This and the fact that there can always be exceptions to rules should be kept in mind especially in life-changing decisions such as the granting or denying of tenure.”...
https://bitesizebio.com/13614/does-your-h-index-measure-up/
It seems to me not quite fair to assess researchers by the Hirsch method. Perhaps a researcher conducted one significant study and received a lot of citations. That is a real impact and a real success. And the rest of the works are not so successful. Then the total impact by the Hirsch method will be limited by the less successful works. Would it not be more reasonable to assess a researcher's contribution by the number of his citations, excluding self-citations? However, the number of co-authors should also be taken into account, by dividing the citations of one article by the number of co-authors.
I think that at this moment the H index is a good indicator to measure the impact of a given author and his research.
Nevertheless, to evaluate research quality in a good way, open institutional archives to obtain data are needed and multiple resources (Google Scholar, Scopus, Web of Science, etc.) to find the number of the citations of a given paper should be checked.
In any case, I suggest an interesting document on this subject:
If scientific work was done by a team and some of the team became authors of the publication, the latter are co-authors. When this publication is cited, each of them receives one citation, regardless of their individual contribution. Is this fair? It seems to me that the scientific weight of an individual co-author's contribution should not be equal to the scientific weight of the whole publication. In general, it is difficult to assess which of the co-authors worked harder. Maybe someone is a sponsor, someone is the boss, and the main performer simply decided to please his benefactors; in that case he sold (or shared) his achievement. This means that the total contribution (for example, a citation) should be divided by the number of co-authors. That would be a fair assessment of the individual collaborator.
Otherwise, an injustice arises in relation to the single author, who took all the costs upon himself. After all, if two publications (one with 1 author and one with 4 co-authors) appear in the same journal in Scopus or the Web of Science, it must be assumed that the quality of the works is comparable, but the number of beneficiaries is 4 times higher for the second article. We must distinguish between the scientific weight of the article and the scientific weight of the scientist's contribution to this work.
No, that is wrong; the h-index will be two (2), the number of publications cited at least that many times.
Thanks, Latifa ZHOURI, but the h-index is the value that's greater than or equal to the position in the list of publications. 2 is smaller than the position in the list, which is 3. So the h-index must be 20.
The h-index is a number intended to represent both the productivity and the impact of a particular scientist or scholar, or a group of scientists or scholars (such as a departmental or research group).
The h-index is calculated by counting the number of publications for which an author has been cited by other authors at least that same number of times. For instance, an h-index of 17 means that the scientist has published at least 17 papers that have each been cited at least 17 times. If the scientist's 18th most cited publication was cited only 10 times, the h-index would remain at 17. If the scientist's 18th most cited publication was cited 18 or more times, the h-index would rise to 18.
Part of the purpose of the h-index is to eliminate outlier publications that might give a skewed picture of a scientist's impact. For instance, if a scientist published one paper many years ago that was cited 9,374 times, but has since only published papers that have been cited 2 or 3 times each, a straight citation count for that scientist could make it seem that his or her long-term career work was very significant. The h-index, however, would be much lower, signifying that the scientist's overall body of work was not necessarily as significant.
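For anyone who wants to check the arithmetic themselves, here is a minimal sketch in Python (the function name and the example lists are purely illustrative, not taken from any database):

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    # Rank the papers by citation count, highest first.
    ranked = sorted(citations, reverse=True)
    h = 0
    # h is the largest rank at which the paper in that position
    # still has at least that many citations.
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The example from the original question: papers cited 21, 20 and 2 times.
print(h_index([21, 20, 2]))        # 2
# The example from the definition above: 17 papers cited at least 17 times,
# plus an 18th paper cited only 10 times.
print(h_index([30] * 17 + [10]))   # 17
```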
The following resources will calculate an h-index:
Scopus
Web of Science
Google Scholar
(Copied)
Hi Hans, your interpretation of the h-index is wrong!
You don't have to take the pain of calculating the h-index manually; most of the databases will do it for you (e.g. Google Scholar, Web of Science, Scopus, etc.). However, you have to keep your publication list updated. The h-index is based on a list of publications ranked in descending order by the number of citations these publications received. The value of h is equal to the number of papers (N) in the list that have N or more citations.
The h-index might not be the best measure, yet it is the best available at the moment.
For example, one can inflate one's RG score, but one cannot manipulate one's h-index. It is based on true scholarship, i.e. on your publications.
Please go ahead and create a profile here and see it for yourself:
http://scholar.google.com
For example, here is mine:
https://scholar.google.co.in/citations?user=MO90ZXkAAAAJ&hl=en
thanks
pandi
https://pandi-perumal.blogspot.com/
In my opinion, the h-index reflects the luck of a scientific author throughout his scientific activity. When I look at a scientist's Hirsch index, I see a characteristic of the success of the scientific articles in which he was either an author or a co-author. In the case of collaboration with a large number of colleagues, success can be multiplied many times. Each co-author can take on a separate project and then include other colleagues, and so success multiplies for everyone! But the loner relies only on himself...
I think it is necessary to distinguish between the contribution of a scientist and his impact. The contribution is measured in publications, the useful contribution is measured in citations (probably, as in the Hirsch index). Impact can be measured as the sum of all citations.
In fairness, in my opinion, when evaluating an individual author one should take into account the number of his co-authors. That would be fairer in comparison with the lone author...
Thank you for the informative response, Seithikurippu R. Pandi-Perumal. Could you please tell me how my interpretation of the h-index is wrong? Where is the mistake in my interpretation?
I am aware of various sites that calculate an h-index. I would nevertheless like to know how to calculate it myself, otherwise it isn't possible to understand what the h-index calculation expresses.
I do have a google scholar profile. Mine does not show an h-index like yours, however. I wonder why that is. https://scholar.google.co.uk/citations?user=V34MBIIAAAAJ&hl=en&oi=ao
Hello Hans,
for calculation, see if you can use this method mentioned in wikipedia:
https://en.wikipedia.org/wiki/H-index#Calculation
I am glad to see that you're working hard, which is reflected in the number of your publications. But I must say, there has been a steep increase in your publications; they are yet to be cited.
From what I saw of your profile, the calculation seems to be correct. You have 3 papers, each cited once. If two of those three papers receive one more citation each, your h-index becomes 2. If all three of them receive 3 citations each, your h-index will be 3. It goes along those lines.
As Wikipedia correctly points out, "The h-index grows as citations accumulate and thus it depends on the 'academic age' of a researcher"; you're still considered a young researcher.
Please remember, as you move along the ladder, the h-index grows very slowly and not exponentially. As I mentioned earlier, it is not easy to artificially inflate the score; people have to actually cite your papers.
Of course, one can improve the h-index through self-citation; however, this can easily be detected and calculated as well. I know it can be done and I have seen it, and the tools are available online somewhere.
Coming to self-citation: it is not an issue for a veteran scientist, but it might be an issue for a scientist who is still establishing himself. For example, an expert in a particular field who has taken a leadership position will be writing papers in the same area again and again; if they are hard-core researchers, they are expected to cite their own work. Newcomers, on the other hand, may encounter several issues if they start citing their own work randomly: a. if they cite their work excessively, they will end up citing unrelated work; b. a good reviewer will identify the intention and point out the issue, and ultimately the paper might even get rejected; c. if the journal has a letters-to-the-editor section, some authors might complain that their work was deliberately ignored or cited wrongly in the publication. That is a possibility!
Assume that you have 1000 citations on 1 paper, and 3 other papers received 1 citation each; your h-index will still be 1.
I hope this clarifies somehow!
Thanks, Seithikurippu R. Pandi-Perumal, but in your example, wouldn't the h-index be 1000? Because the Wikipedia link you shared says: "we look for the last position in which f is greater than or equal to the position." 1000 (number of citations) is greater than 1 (position in the list of publications), but 1 (number of citations), which is located at position 2, is smaller than that position (2). So the h-index in this example cannot be 1, because 1 is smaller than 2. Isn't that right?
@ Valentyn.
Jane Goodall spent over 55 years studying the sociobiology of chimpanzees in the wild. Hypothetically, let's assume that she spent her time alone, wrote her papers without any input from anyone, and published them as a single author. She could rightfully do so! There is no question about it! However, her publication record does not reflect what you would expect:
https://www.ncbi.nlm.nih.gov/pubmed/?term=goodall+j+chimpanze
Research is collective scholarship. Many kinds of expertise are brought to the table to conduct research in a sophisticated manner. Having a PhD does not mean that one is an expert in everything required to be successful in a narrow field of research. We seek as much help as we can. We want to produce research that imparts knowledge, has practical application, and is translatable (from bench to bedside). Plus, the results should be statistically significant.
Let's assume I got an offer to work with a Nobel Laureate. It is a productive lab and there is a good chance that I would be in a position to publish lots of papers. Pretty exciting, right? If I did not have the required expertise or skill set, do you think the Nobel Laureate would have offered me a position to work with him or her? Or do you think that, by sheer luck, I might be randomly picked to work with them?
In research there are two ways we acknowledge contributions: 1. by giving authorship (if it is warranted); 2. by acknowledging people at the end of the paper.
Many funding agencies promote intra- and inter-departmental collaborations, as well as national and international collaborations. For big projects we need as many experts on board as possible. Assume that you are working on one of these projects: calculating the electromagnetic radiation of the solar system, neutrino electromagnetic interactions, the Higgs boson, seismic activity under the ocean, the movement of tectonic plates, or the EM field of a tsunami. Do you think you could undertake such complex work without the help of others?
When such work gets published, one would surely see hundreds of authors on a single paper. As we all know, all authors have ethical, moral, and professional responsibilities. Being listed as a co-author means they have made some intellectual contribution. Just because I am a close friend of an author who published such studies, it doesn't mean that he would offer to put me on the publication. Nor would I ask to be included as a co-author on a paper when I am totally clueless about what they did.
What if someone were to put me on a paper which was eventually discovered to contain fake data? Every author is responsible for the outcome, good or bad. So, a sensible author wouldn't put their academic and personal life on the line.
Collaboration itself is an art and a science, and a characteristic of leadership. As a leader in the field, you must be able to lead, collaborate, negotiate, and delegate responsibilities to get the work done!
Trust me, if I wrote a lousy paper studying 10 patients and happened to have 10 co-authors, I would have a difficult time getting it published. Not to mention that no one would cite such a paper. People know it when they read it!
In the space shuttle Columbia disaster (2003), the accident was caused by a breach in the leading edge of the left wing, caused by insulating foam shed during launch. Would you blame the head of the program, the project manager, or the technician responsible for fixing the insulating foam? The success or failure of a mission or a project works the same way: everyone is responsible! In this particular case, the work of the technician is as important as that of the person who designed the entire shuttle or the person who controlled the entire mission. Yet somehow we want to give credit only to the person in charge when a mission succeeds, but point the finger at someone else when it fails.
The h-index is a good index but not perfect. Also, your calculation of the h-index is not correct. In addition, there are the m-index and the g-index; if you want more information, tell me and I will write about all of them.
Thanks, Ali Rastqar. You write that my calculation is not correct. Could you tell me the mistake I'm making? The h-index is the number of citations that is greater than or equal to the position in the list, right? 20 (number of citations) is greater than 2 (position in the list of publications), but 2 (number of citations), which is located at position 3, is smaller than that position (3). So the h-index in this example must be 20. Isn't that right?
Of course the h-index is helpful, but your calculation is wrong. If a person has a single paper with 20 citations, then his h-index is 1.
The h-index basically means that n of your papers have at least n citations each.
Consider:
Paper 1 = 20 citations
Paper 2 = 10 citations
Paper 3 = 8 citations
Paper 4 = 4 citations
Paper 5 = 3 citations
Paper 6 = 1 citation
remaining papers = 0 citations
Then the h-index = 4, which means 4 papers have at least 4 citations each.
You can also look at the i10-index, which refers to the number of papers having at least 10 citations.
So in this case the i10-index = 2.
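For illustration, a short Python sketch (the function names are hypothetical) that reproduces the numbers in the example above:

```python
def h_index(citations):
    # h = number of papers whose rank (by descending citations) does not exceed their citation count.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def i10_index(citations):
    # Number of papers with at least 10 citations.
    return sum(1 for c in citations if c >= 10)

papers = [20, 10, 8, 4, 3, 1, 0]  # the citation counts listed above
print(h_index(papers))    # 4
print(i10_index(papers))  # 2
```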
I hope it helps.
Thanks.
@Seithikurippu R. Pandi-Perumal:
But if you simultaneously participate in a variety of projects, then in a year you may have dozens of publications. Then there is a high probability that the Hirsch index will grow like a tsunami. Such a researcher may have a dozen or more collaborators, and they will all receive full credit for each citation. In a year you can get a dozen or more citations. I think it is fair to divide each citation among all the co-authors, precisely because there are so many citations every year!
A loner works hard but has no such scientific network...
Thank you all for your insightful comments. I finally figured out my misunderstanding about calculating the h-index. I thought that in my example the h-index is 20 because that is the number of citations. But now I realize the h-index is not the number of citations but the position in the list, i.e. the rank of the last publication that was cited at least as often as its rank. I got it. Cheers!
@ Valentyn,
On average I publish anywhere between 10 and 20 papers per year. If I were to publish 20 papers in 2019, my h-index would not improve by another 20; it would not jump from 30 to 50. Additionally, you will hardly have a chance to cite all your published work just to artificially inflate your h-index. Moreover, when 5 people work as a team, even if they come up with 10-20 papers, it will not reach 100 papers. Each paper requires time, research, resources, and publication in a decent journal. There is a high possibility the paper might get rejected for various reasons (for example, the wrong choice of journal, or the reviewer didn't like the way the manuscript was written). Then you have to reformat and submit to another journal. A reputable journal takes somewhere between 3 and 5 months to accept a paper. Then it takes an additional 1 or 2 months to get listed on PubMed. When it comes to citations of your work, it is the same deal: if I cited your paper in the month of July, by the time my paper appears on PubMed it will be the next year. So there are several issues like this one that need to be taken into consideration.
Of course it is not a good measure, for several reasons:
1. It does not take into account the order of the authors or who the corresponding author is.
2. It does not take into account self-citations (this is a very important problem).
3. It does not take into account the impact of a single very good piece of research (imagine someone with more than 10,000 citations on one paper but an h-index of 2; that research could theoretically win a Nobel Prize).
4. It does not take into account which journals cite your research. Obviously it is not the same whether your research is cited by a very high-impact journal or by a low-impact one.
...and believe me, there are more than 10 reasons.
You are right, dear MA Martinez-Garcia; the first author deserves more credit and recognition.
And what if authors contributed an equal share and are in alphabetical order?
@ Ljubomir Jacić; @ Valentyn Isaiev
The 'money man' makes the decision (i.e. the one who received the funding for the research proposal). Although in most cases the mentor puts himself or herself last (and usually as corresponding and senior author), in scenarios where it was breakthrough research they will normally put themselves as first author (although the work might have been done by a post-doc or a student in their laboratory). So your argument might not be accurate for everyone all the time; there are exceptions to the rule. Hence, first authorship is negotiable.
@ Hans Asenbaum
If more than one individual contributed equally, all we have to do is put a '*' and mention that 'these authors contributed equally'. This is the common practice. Again, within this group of individuals, the precedence is set by the primary principal investigator among the collaborating laboratories. This is common practice in papers published with national and international collaborators.
@Seithikurippu R. Pandi-Perumal:
You have confirmed my opinion that, since software cannot reliably assess the contribution of each co-author, it is better to divide the contribution equally: the resulting citations should be divided by the number of co-authors. That would be fairer in comparison with a single author who bears all the costs himself (assuming publications in journals of the same quartile).
@ Valentyn Isaiev
Unfortunately, I did not confirm your opinion. Let's assume that the publication on the structure of DNA had not just two authors (i.e. Watson and Crick) but 25. Although it was a landmark discovery that led to a Nobel Prize, if it had been made by 25 people then, following your logic, from the personal perspective it would look like an insignificant discovery, while the same author could write a lousy review alone and take more credit.
As I pointed out, once a scientist's h-index rises to some extent, it moves slowly. It's not that easy! What you feel is unfair will be taken care of at a later stage of their development!
Additionally, the h-index is based on citations. How would you divide the number of citations? It is a continuous process. To whom would you assign the first slot and the next slot? If a paper had 25 names and I was the last author, but I wanted the first citation counted towards my h-index, would that be feasible?
@Seithikurippu R. Pandi-Perumal
If I'm not mistaken, not all work done by large teams leads to a breakthrough in science, and such projects can drag on for a long time, yet the articles keep multiplying. The more projects and collaborators you have, the more publications.
What you describe is not a quick process, and for a small team it is even slower. Accordingly, the probability of citations appearing is very different.
And dividing the citations of one article by the number of co-authors is not difficult. There could be an alternative h-index score for each member of the team. After all, a citation refers to the article, i.e. to the whole team, not to each person on the team (yet each person currently gets one full citation).
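For illustration, here is a minimal Python sketch of what such a 'fractional' h-index could look like if each paper's citations were divided equally among its co-authors before applying the usual rule (the function names and numbers are purely hypothetical, not an established metric):

```python
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def fractional_h_index(papers):
    """papers: list of (citations, number_of_co_authors) tuples.
    Each paper's citations are split equally among its co-authors
    before the usual h-index rule is applied."""
    shares = [citations / n_authors for citations, n_authors in papers]
    return h_index(shares)

# Hypothetical record: four papers, each cited 10 times,
# written with 1, 2, 5 and 10 co-authors respectively.
record = [(10, 1), (10, 2), (10, 5), (10, 10)]
print(h_index([c for c, _ in record]))  # 4 (standard h-index, full credit for every citation)
print(fractional_h_index(record))       # 2 (per-author shares: 10.0, 5.0, 2.0, 1.0)
```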
@ Valentyn Isaiev
I appreciate your enthusiasm and your proposition. However, it won't work! Research is a team effort. Everyone has to be recognized equally and fully; everyone puts in full effort. Let's say there is a team doing neurosurgery. A young, technically expert surgeon is performing it along with a team of 10 members. There is also a chief surgeon on hand giving instructions. How can you undermine the work of everyone by saying you only put in 1/10 of the effort because you were working with 10 other people? If the chief surgeon gives wrong information, the patient will die. If the surgeon performing the operation does something wrong, the patient will die. If the anesthesiologist makes a mistake in turning the knob, the patient will die. The same goes for the nurses who give the injections. The amount of contribution might vary, but everyone gives full effort. Above all, everyone reads and approves the manuscript prior to publication and takes responsibility. There is nothing wrong with that.
Let's say I did the surgery and you were just assisting or guiding me. My effort might be greater than yours. If your argument is that 10 people being involved means the credit has to be divided by 10, what if I tell you that that is not enough, that I did more work because I actually performed the surgery and the 9 others just assisted me? Can I make an argument such as that? Where does it end? Even when you say you want to divide it equally, you are still not correct, because people do not actually put in the same physical effort, and one could still argue about why someone deserves more recognition than the others.
@Seithikurippu R. Pandi-Perumal
And yet this is not fair compared with a single researcher who does 100 percent of the work, I think. With all due respect...
Hi Hans!! In my view Antonio is correct. If you have 3 publications and the citation counts are higher for 1 or 2 of those articles, then those are your h-index articles. It does not matter how many articles you have in total; even one is enough, as long as it has some impact.
@Valentyn, I have joined the discussion late. Could you please state your views again, if it is not a problem for you?
I would like to point out that the h-index is generally unfair to scientists from underdeveloped countries, who most of the time are unable to publish in big journals due to the cost of publication. A lot of groups cite themselves to increase their h-index while avoiding citing other authors, which is unfair. Moreover, most of the time young authors avoid citing the first publication on a topic; they will usually cite a review article that includes it, and as a consequence the h-index of the first report may not be as high as it should be. Maybe statisticians can come up with a better program that would benefit reviewers and authors who are usually not cited properly. A good approach was reached with text plagiarism detection; I do not see why, with good software, the aforementioned proposal cannot be achieved, generating fairness in citing research.
Juan,
I think there are a large number of good journals that are free of charge for paper publication. Actually, the problem is the overall quality of the papers.
As for self-citation to increase the h-index, I have to say that usually only the h-index excluding self-citations is considered.
@Anjali, I will repeat some of my previous thoughts:
In my opinion, the h-index reflects the luck of a scientific author throughout his scientific activity. When I look at a scientist's Hirsch index, I see a characteristic of the success of the scientific articles in which he was either an author or a co-author. In the case of collaboration with a large number of colleagues, success can be multiplied many times. Each co-author can take on a separate project and then include other colleagues, and so success multiplies for everyone! But the loner relies only on himself...
In fairness, in my opinion, when evaluating an individual author one should take into account the number of his co-authors. That would be fairer in comparison with the lone author... If we divide each mention of a publication by the number of co-authors, then the impact of each of them will decrease proportionally. And if each of the co-authors is simultaneously working on his own project, then the cross-inclusion in co-authorship will also decrease in value.
I think it is necessary to distinguish between the contribution of a scientist and his impact. The contribution is measured in publications, the useful contribution is measured in citations (probably, as in the Hirsch index). Impact can be measured as the sum of all citations.
@Valentyn, there is a point in what you say. The h-index is based on citations of an article, which may be written alone or with many contributors. Whatever the number of contributors, the research is having an impact.
Dear Antonio. In our biomedical field, for Nature, Science, Cell, PLOS, Trends, JCI, NEJM, Blood, etc., the top journals with high scores, you have to pay to publish. The journals in which you do not have to pay, but which have high rankings, are very few.
Dear Antonio Pirisi ,
What do you mean by quality of the papers?
Is there any guarantee that an article you see in the Scopus and Web of Science databases is useful, relevant and scientifically novel? Is the quality of the English considered a basic requirement, and won't some progressive thought be lost because of this? Does the presence of an eminent scientist in the list of co-authors affect the decision to publish? Can a deeply scientific article by a novice scientist be rejected because of his obscurity in the scientific world? Doesn't the presence of someone in the author list without a noticeable contribution to the work reduce the prestige of the article? Do the high fees for publishing in a prestigious journal limit the development of science (there are far fewer free journals)? Is it bad or good to have some self-citations in the references? Maybe something else...
@ Juan De Sanctis,
you're totally wrong! The journals that you listed do not charge, yet they are top-ranking journals. PLOS ONE is an open-access journal with a low impact factor. You have to look into their submission guidelines and then share the information. Don't randomly say things that are untrue and misleading! The point of the discussion is whether the h-index is good or not; you never addressed that, but kept talking about journals. People post questions to learn, not to be misguided! If you do not know, it is best to follow the discussion rather than mislead with random thoughts!
Dear Seithikurippu, I was answering Antonio's point and you did not read my previous comment. I stated before that the h-index is not fair to researchers in underdeveloped countries, in terms of access to funding and access to journals. Moreover, in most cases citations are not fair, since young researchers prefer to cite a review rather than the first report on a subject. For example, I wonder what Dr Peter Mitchell's h-index would have been, since the article that led him to a Nobel Prize was published in a journal unknown at the time. In my opinion, how the h-index is calculated should be reviewed.
@Juan, very truly said. I am on the same point: it doesn't matter which journal or which index; what matters is your research.
The h-index is failing on the job, and here’s how:
1. Comparing h-indices is comparing apples and oranges.
2. The h-index ignores science that isn’t shaped like an article.
3. A scholar’s impact can’t be summed up with a single number.
4. The h-index is dumb when it comes to authorship...
So what should you do when you run into an h-index? Have fun looking if you are curious, but don’t take the h-index too seriously...
http://blog.impactstory.org/four-great-reasons-to-stop-caring-so-much-about-the-h-index/
The h-index as a measure of both the quantity and quality of scholarly achievement is considered quite reliable and robust, so it has proved incredibly popular and is now applied not only to individual researchers, but also to research groups and projects, to scholarly journals and publishers, to academic and scientific departments, to entire universities and even to entire countries.
In this connection a recent report from Web of Science regarding various metrics including h-index could be of interest: https://bit.ly/2Bd8H3r
The h-index like other quantitative scales frees scientists from the need to thoroughly read research works of others and reach their own honest judgement regarding quality. It is the refuge for those who do not have the courage to make their own decisions with the knowledge that they may have made a mistake.
I feel you have not properly understood the calculation of the h-index. An h score of 20 means 20 publications have at least 20 citations each.
The h-index measures the productivity, diversity and sustainability of individual scholars. To my understanding, this is the finest metric available to date.
Dear Rajesh Singh, thanks for the hint. I already clarified my misunderstanding in one of the comments below. Cheers!
The h-index was proposed by Hirsch (2005). It is defined as follows: «A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np-h) papers have no more than h citations each.»
The advantage of the h-index is that it combines an assessment of both quantity (number of papers) and quality (impact, or citations to these papers) (Glänzel, 2006). An academic cannot have a high h-index without publishing a substantial number of papers. However, these papers need to be cited by other academics in order to count for the h-index (Anne-Wil Harzing, 2011).
A disadvantage of the h-index is that it ignores the number of citations to each individual article over and above what is needed to achieve a certain h-index. Once a paper belongs to the top h papers, its subsequent citations no longer count. Hence, in order to give more weight to highly cited articles, Leo Egghe (2006) proposed the g-index.
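As a rough illustration of that difference, here is a minimal Python sketch of Egghe's g-index alongside the h-index (the citation record below is purely hypothetical): the g-index is the largest g such that the top g papers together have at least g squared citations, so extra citations on a single highly cited paper keep counting.

```python
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the g most cited papers together have at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Hypothetical record with one highly cited paper: the g-index rewards it,
# while the h-index stops counting its extra citations.
record = [250, 4, 3, 2, 1, 0]
print(h_index(record))  # 3
print(g_index(record))  # 6
```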
In mathematics and theoretical physics the true validity and impact of a piece of research is established by a Darwin-like selection mechanism, not by humans. Humans, as is well known, are influenced by many factors that are just a manifestation of the second law of thermodynamics in the scientific community: mind the envy, relative personal ignorance (like Aristotelian physics), etc. So any human-constructed index is something a priori false.
If you have thoughts on the quality of the publication, please share your opinion and experience here:
https://www.researchgate.net/post/What_do_you_mean_by_the_quality_of_a_scientific_paper2
Dear @Valentyn Isaiev
as for your question about the quality of the papers, I think that to prepare a good manuscript for publication, a great deal of work has to be done beforehand.
Accurate bibliographic research, a strong experimental design, the use of modern analytical techniques and accurate statistical analysis should all be provided, and finally the results must improve knowledge on the subject of the paper.
One of the limitations of the h-index is that it overestimates the scientific contribution of members of big collaborations writing papers with a large number of co-authors. So, if 100 people have written a paper with 100 citations, each of them will have those 100 citations reflected in their h-index. As a result, a person who writes a paper with 100 citations alone is put on an equal footing with those who write such a paper in a collaboration of 100. This is how higher impact is earned in the medical and biological sciences, experimental high-energy physics or observational astrophysics in comparison with more theoretical research based on individual or small-group efforts. That is why, for example, the most intellectually demanding activity, pure mathematics, is always under-rated in terms of impact measured by the h-index.
Yes, it shows the dependence on two factors: 1) Paper quality. 2) Paper visibility
As far as we know, the h-index and i10-index are used to evaluate researchers; are there any other, better methods?
Why not simply use the accumulated number of total citations, Ali Al-Dousari ?
While historians of physics may argue whether Stephen Hawking or Albert Einstein was the greatest scientist, if you accept the h-index as a valid indicator of research impact and Google Scholar as an appropriate citation database then the answer is simple: Einstein has an h-index of 112, compared to 129 for Hawking. Of course understanding what it means to have an h-index of 112 or 129, or how much better 129 is than 112, is less clear. At the most simplistic level an h-index of 129 means that Hawking has 129 documents indexed by Google Scholar, each of which has been cited at least 129 times, but these numbers also reflect the different publishing and citation cultures in different times and places, the length of scientists’ careers, and the time that has passed for the accumulation of citations...
Measuring the dissemination and impact of research is no longer limited to the slow emergence of formal citations in journals and books over a number of years, but can also be traced in near real-time as a far wider audience than ever before engages with publications online. Before a journal article is even formally published the preprint can be placed in an online repository and start generating measurable interest. The repository may provide figures on views or downloads, the preprint may be discussed on various social network sites, bookmarked in online reference management software, or quickly form the basis of an experiment detailed in an open notebook...
In many ways it would seem the prevalence of altmetrics across the scholarly web is a false dawn, belying the circumspect and conservative nature of the academic community...
Undoubtedly more research is needed before we start to understand what altmetrics in all their varying forms mean, and before the academic community will fully embrace altmetrics. As Max Planck is often paraphrased as saying: 'Science advances one funeral at a time', and while this is often seen as being to the detriment of science, such conservatism has much to recommend it when it comes to altmetrics.
https://www.researchinformation.info/feature/embracing-alternative
It depends on what type of work you have. Sometimes your work itself will be the reason people know you, and sometimes it is the h-index articles that give recognition to the scientist.
In many respects it is quite a robust measure, except for the fact that the h-index isn't useful for comparing researchers in different fields, or even different sub-fields. Some areas have quite a high publication rate, leading to more potential citations and therefore potentially better h-indices; some have low rates and low expected h values. Further, even workers in the same subfield may have different publication styles that lead to different outcomes, e.g. experimentalists vs computationalists vs theorists.
IMO one of the major weaknesses in any argument in favour of the h-index is that the justifying analyses tend to be based on the most famous of scientists (e.g. the Hawking -vs- Einstein comparison above), when in fact no serious decisions would be made about such scientists on the basis of an h-index, as indeed it would not be considered for even (less eminent) established professor-level researchers.
Instead its most likely application is for the comparison of early-career workers, whose h-indices are low and likely do not - cannot - yet be expected to reveal any useful data on the quality and/or recognition of their work.
The h-index might be a useful tool for evaluating an entire research career, but that putative success is irrelevant for where it might actually be used - in (hiring) comparisons of early career workers. It seems to me that the only "relevant" h-indices are for researchers whose h-index cannot yet be considered a useful measure.
I think the h-index is a good measure, and there seems to have been a misunderstanding regarding its value (Hans Asenbaum) which has been corrected by Antonio Pirisi.
As highlighted earlier there may be limitations, but as indicated, for the h-index to reach 3 there have to be at least 3 articles which are cited 3 times or more, and for it to reach 4 there have to be at least 4 articles which have been cited 4 times or more.
So you have to have a sufficient number of articles that are cited frequently to attain a high h-index. The flip side is that five average articles with 5 or more citations each will give an h-index of 5, whereas one top-class article with hundreds of citations will still give an h-index of 1.
So this may give a counter-view to the origin of the discussion.
However, every measure has pros and cons.
In summary, the h-index can give an idea, especially regarding prolific authors, and can be used as a complementary measure alongside other measures of research output.
The h-index is a far cry from being a good measure of scholarly success. Take for instance a sound and persistent researcher who publishes 100 articles in, say, Scopus-indexed journals, and another of lower pedigree who publishes 200 in Google Scholar-indexed journals. The first set are of very high scientific and technical quality, while the second are not. More people in the world will have access to, and also understand, the contents of the lower-quality publications, so they receive more reads and more citations; say 100 of them receive at least 100 citations, so the h-index is 100. However, only 40 of the high-quality articles receive 40 or more citations, so the h-index in this case is 40. The researcher with an h-index of 40 is a consultant for high-flying reputable establishments, while the other is not recognized by any establishment because of the poor quality of his articles. But his h-index is higher. So the h-index cannot and should never be a measure of scholarly success. Besides, it varies from database to database.
To me, it is a measure of scholarly success in a way. Sometimes you don't need more than one article to be known worldwide. It all depends on how your work impacts the lives of others. Sometimes it is not how many papers you have published, but how impactful and high-quality your work is. The issue I have with the h-index here is that it only counts citations within the RG family, which I think doesn't really add up.
No, the h-index (Hirsch index) in your example is 2. And then it would increase to 3.
That may seem unfair, since the researcher has two publications with 20 or more citations. But I guess it was designed to prevent putting too much weight on one-hit wonders.
To be clear: the h-index is defined such that the given author has published h papers that have each been cited at least h times.
It should be a mean/median, etc., of the number of citations and the number of articles published.
This is a very good, resourceful new web site about the h-index and its variants. Many, many articles about the h-index are available there.
The site is organized according to the following summary:
3. Standardization of the h-index for comparing scientists that work in different scientific fields
4. Some studies analyzing the indices
5. How to compute the h-index using different databases
6. On the use of h-related indices to assess groups of individuals, institutions and journals
7. Empirical studies that use h- and related indices
8. Web sites or journal special issues devoted to the h-index
9. Bibliography compilation about the h-index and related areas...
https://sci2s.ugr.es/hindex
The h-index in this example is 2.
To calculate the h-index you should order the publications from most cited to least cited. The h-index is the last position at which the number of citations is still at least equal to the position.
For example:
First article: 234 citations
Second article: 2 citations
Third article: 1 citation
So, h-index = 2.
The problems with the h-index are very frequent:
1. It does not take into account the position of the authors.
2. It does not take into account a single very important paper (for example, a paper cited 80,000 times counts the same as one cited 5,000 times, because probably no one has an h-index above 5,000).
3. It depends on time: the greater the academic age, the higher the h-index.
4. It cannot compare different eras. For example, Albert Einstein has an h-index of 43 and Richard Feynman 27. They were two of the most important physicists of all time. My h-index is 35... and I cannot compare myself with Einstein and Feynman, not by a very long way. Feynman has an h-index of 27, yet he changed physics 50 years ago, because those 27 papers were absolutely important.
And what about an author who has published only three papers, of which two revolutionize the research area in which he or she is involved? Why on earth should anyone worry about ANY quantitative criterion? Only quality should be considered. Quantitative criteria are the easy way out for academic decision makers: with quantitative criteria they are covered; they are safe.
Read the story of the "McNamara fallacy" on the Internet and you'll understand the destructive power of quantitative criteria.
Then how do we measure quality? Quality cannot be measured, only observed or detected by specialists in the same field, and they are rare or few. Some believe that quality is sometimes related to quantity, but not always.
As per my understanding, in the above-mentioned scenario the h-index is 2.
Quality can be judged only by examining the published papers themselves.
Generally, in my opinion, citation counts are not a good criterion for measuring the scientific impact of researchers; examining the content of their published works is more effective in this regard.
This post answers some of your questions: https://www.journal-publishing.com/blog/good-h-index-required-academic-position/
Indeed, it measures the impact of an author's research activity and the popularity of a given article among researchers.
There are several acceptable ways to boost your h-index.
https://www.enago.com/academy/how-to-successfully-boost-your-h-index/?utm_source=enago_academy_weekly_newsletter_oct292019
Ljubomir, very nicely written; it is very clear for young researchers. Thank you.
The higher the h-index of your publications, the more scientists are working in the same area of research, I presume. But a new area of research could be excellent and yet have a low h-index or none at all.