Impacts of individual publications are often measured via citation rates. Average citation rates per publication differ across research domains (e.g. Mathematics versus Ecology). Is there a minimum number of citations required to call a paper important or influential, and if so, how many?
I read somewhere that influential papers are cited at least 50 times in a 10-year period.
Dear Vyacheslav,
So you just take the 5-6 most cited publications for a given author X to get an idea about the impact of author X, independently of how many papers author X published in total, right?
Interesting question. It depends on the subject of the paper. If you are citing a newly born idea, few citations would be good enough. But to quantify exactly a number seems difficult. I have seen papers where more than 50 publications have been cited but it does not mean it would make a publication credible. I agree with @Vyacheslav Lyashenko, that "number of citations does not reflect the essence of novelty publication". The essence is the critical discussion on the cited papers.
What is the reason for this deterministic breakthrough, dear Marcel?
—g
Dear Giuseppe,
what do you mean exactly in simple words?
One could argue that when a problem is solved, research on that problem stops, so citations dealing with the problem also stop. The more citations concerning a publication dealing with problem X, the higher the chance that problem X has not been solved?
Dear Marcel,
Number of citations required to call a cited publication influential depends on the research and the domain of the research itself.
What about fundamental research problems? In evolutionary biology it is accepted that natural selection occurs or that species result from evolutionary processes including natural selection. How often is the first publication of Darwin cited or ignored in current scientific publications dealing with evolution and natural history studies? For some, natural selection is so obvious that citing Darwin in a current publication dealing with Evolution, Evolutionary Biology, Evolutionary Ecology, etc. might be considered old-fashioned... How often is Mendel cited in current studies of Genetics or Quantitative Genetics?
Dear Marcel,
I guess 800 or 850 citations per author per article. I read it somewhere before, but I don't know the reference now.
People can be cited not only in a written publication, but also during an oral presentation or a discussion. For what fraction of the objects we have at home can we cite the inventor?
If you are speaking about MY paper, then "one" citation is sufficient (for me) to be considered 'influential". If you are speaking about my colleague or friend, then "two" would be necessary. For a complete stranger, I would require "three" citations. And if you are working in a competitor's lab, you can never get enough citations! (Of course I speak only jokingly). A genuinely influential paper will change the course of research and application in that field (whether pure science, medicine, sociology, etc) for at least a decade if not longer. So it must take at least 10 years or more to determine if a published article is really considered "influential".
Dear Marcel,
The same thing as you said "being frequently cited can be either a good or a bad sign. It depends on the content of papers and why the papers are cited...." can be concluded from up-voters in RG, too!
No, it is not determinable, Marcel.
Same as for that old question here on RG about the lifespan of a publication or a discovery, its expiration date. Who knows? Do we know how many implications a new statement could propagate into an open, unlimited system?
We could set some indicator value, then use the consequent standardized scale.
But in reality it is not predictable, and it should not be allowed, or publications may propagate in derived strings and be considered "goods" to bet on or to be used as "currency". At that point, the whole system of research would become corrupt, because a large number of sub-systems would seek to win myriads of partial bets, "fractions".
BTW, a certain study I read some time ago calculates that the most influential authors in the history of medicine over the last 50 years have >500 papers published. Much less for sciences not related to industries [great philosophers of science with
But then again, we talk about quantity, not quality of citations. In another related question, people apparently prefer 'quality' more than 'quantity' of publications. Perhaps the same could be argued for citations and the reasons for citations? Quality of citations might be more important than quantity of citations.
Fictive example without judging people:
Better 5 citations from a famous experienced researcher than 200 citations from inexperienced researchers, or not? Better 5 citations that support your idea than 200 citations that criticize your idea? Etc...
@Giuseppe Laquidara: No, it is not determinable…
Agreed!
As an indicator of the influence of a publication, citation rate will vary, depending on the discipline and the obscurity of a journal topic. For example, a paper on convex hulls or relators would be of interest to a very small community.
An overview of rankings of universities and scholarly output is given in
A. Marcus, On the mathematics of ranking universities and scientific products:
http://conference.ubbcluj.ro/competitiveness/files/marcus_math-of-ranking.pdf
Dear @Marcel, this is a good question and a controversial issue. I support the answer of dear @Vyacheslav that the number of citations does not reflect the essence of novelty publication. In many cases, there are very strong and novel methods but the people do not go back and cite them because they are heavy, deep, and dense, and, therefore, difficult for many readers.
I can't recall a reference but I have seen somewhere a statement that a Nobel-prize-level paper in physics should gather ca. 1000 citations.
Dear Vyacheslav,
A very interesting remark. We often cite articles referring to (an) author(s) without specifying why the articles were cited. Citing sentences from other articles would provide much more precision about why the articles were cited and would also indicate that the cited paper was read in detail (e.g. author X wrote sentence X on page X).
There is no threshold for fame. Number of citations does not directly imply influence.
Dear Vyacheslav,
easy question, but no easy answer. Probably, it will not be possible to prove a direct correlation between scientific "Impact" and the number of citations. Very often, the highest citation numbers are found for technical reports. This is reasonable, and might indeed be related to scientific merits, if such a paper describes a fundamental technique for the first time. However, very often, modifications of pre-existing techniques are those that are highly cited, just because they give a reasonable description of procedures and recipes. So, they are cited more due to convenience than to scientific originality and merits.
Many very fundamental papers are rarely cited. However, they have initiated subsequent studies considerably. After a while, especially basic findings seem to be rarely cited, just because the results from such papers are common knowledge, so self-evident and taken for granted that the underlying papers go without saying.
My recommendation: let's not rely too much on measuring scientific contributions by secondary criteria, citations and (even worse) the amount of money that has been obtained for research, but judge them by reading the publications and by thinking.
Interesting debate - jw
The importance of a research article is routinely measured by counting how many times it has been cited. Hence, the number of citations appears to be directly related to the paper's quality of content. However, the number of citations received by a paper depends more on when and where the paper is published. Papers published early in a new field receive many more citations regardless of content (the so-called "first mover advantage") than those published later on. The earliest publications in a comparatively new field normally get a head start that is subsequently amplified by the preferential attachment process, and they will continue to receive citations indefinitely at a higher rate than later papers. Papers appearing in high-IF journals also receive more citations, though the IF varies considerably across disciplines (Mathematics has an average of 0.9 citations per article; the Life Sciences have an average of 6.2). Using the impact factor alone to judge a journal or the articles published in it is like using weight alone to judge a person's health. There are other factors, and considering all of them, it is quite tricky to arrive at a conclusive number of citations to label a cited publication influential.
This is an extract from The Scientist, Dec. 2012
"In the top ten scandals for science this year, an astroturfing organized by prof. (omissis) fabricating >10,000 citations for his new revolutionary theory."
1st comment on the footer: "Fabricating? Give citation demonstrating fabrication…"
2nd comment: "Who says the scandal deserves to be in the top 10?"
3rd comment: "Pls, could the author of the article demonstrate effectively the "omissis"…?"
—g
An interesting question to ask. However, going through the answers, especially the one by @Artur mentioning 1000 citations for a physics Nobel-prize-winning paper, I would ask whether these citations happen before or after the Nobel prize. What I mean is that nowadays it is quite common to see citations to an individual rather than to a work. Famous people get cited more often than the papers. However, a good work is supposed to draw attention and citations. I would think Google Scholar's i10-index, which counts publications with at least 10 citations, is also considered good enough.
I agree with @Vijay that it is a bit common to see citations to an individual rather than to a work.
I think it has to do with the way it is cited, more than the number of citations. Some papers propose a novel way of seeing a phenomenon or a revolutionary way of solving a problem. You can see that it triggered a trend in the field. And how do you know that? By looking at the relationship between the cited paper and where it has been cited! If you see that the cited paper was exploited somehow to create something new in the field, that means the paper has an influence on the field!
The number isn't always the key element to judge the impact of a paper, because review papers are usually among the most cited papers in every field, but only because they are a good entry to a field, they are good for discussing the novelty of the proposed work, and perhaps because they need a lot of effort and research, which is something a lot of researchers avoid, making review papers kind of rare! So they get cited a lot, but this doesn't mean they have a big influence or impact on the field.
That's just my opinion!
NB: Sorry for my English.
I am not in favor of counting beans.
A fundamental understanding of the importance of citation is required.
kindly refer to the link:
https://medium.com/@write4research/why-are-citations-important-in-research-writing-97fb6d854b47
The attached table summarizes the attributes and relevance of citation; it is extracted from the above-referenced link.
Dear Marcel M. Lambrechts
I think more than 10 citations in the first year after publication can show that this article is influential.
A ghost phenomenon in bibliography research:
How often did it happen that readers were influenced by a publication X without citing publication X?
Example:
Everybody knows about the announcement of the vaccine against covid-19, but where is the publication?
How is it cited, what for and whether it is triggering further research are all valuable indicators of influence. It is also difficult to weigh up individual author's influence amidst multiple authors. Impact and influence judgements are subjective and have a long term perspective as well as a short term perspective.
Perhaps there are also 'mental' citations? E.g. exploiting information from publications without citing them?
There is not such a specific number! However, the more citations the article receives, the more impact it gains.
Dear All, there are currently two RG threads on virtually the same topic:
1. How many citations does it take to indicate an academic article is influential?
https://www.researchgate.net/post/How_many_citations_does_it_take_to_indicate_an_academic_article_is_influential#view=600b095984c43d591f489645
(Asked July 20, 2014 by Reginald L. Bell)
2. How many citations are required to call a cited publication influential?
https://www.researchgate.net/post/How-many-citations-are-required-to-call-a-cited-publication-influential
(this one, asked July 1, 2014)
Can anyone give me a plausible explanation (or at least an assumption) why the first one has already received 369 answers, while the second has only 46 answers??
When the answers are clear and acceptable, why having a long discussion on a question?
It greatly varies with the academic fields. According to a study cited by the Joint Committee on Quantitative Assessment of Research (2008), the average citation of papers for some academic fields are as follows:
Mathematics/Computer Science = slightly below 1
Social Sciences = slightly above 1
Biological Sciences= 2.2
Environmental Sciences= 2.2
Earth Sciences= 2.3
Chemistry= slightly below 3
Physics= 3
Clinical Medicine = 3.2
Neuroscience = 4.5
Life Sciences = 6.2
Although it is not very clear from the study cited, it is apparently for a 2-year period after publication.
Dear Marcel M. Lambrechts seems your interesting thread has been revitalized... Please see this recent link entitled "Google Scholar reveals its most influential papers for 2020"
https://www.natureindex.com/news-blog/google-scholar-reveals-most-influential-papers-research-citations-twenty-twenty
According to this article, the citation counts of the seven most highly cited papers in 2020 ranged from 8,209 to 47,774; both of the articles at the ends of that range were published in 2015.
Thus it seems to me that citation counts of 10-15 or so indicate that the paper is of interest only to specialists.
What are your arguments that a paper that is frequently cited is indeed 'influencial' given that the underlying mechanisms of the impact on a human population will change with the scale of analysis?
Example:
The psychological impact of the content of a publication on readers might be so high that readers avoid it, reflected in a low number of citations. Why should readers cite a very critical paper that criticizes the readers' own work, when the readers cannot change their situation/methods because of X reasons?
Dear Marcel M. Lambrechts please allow me the remark that this is one of your typical theoretical questions which I don't really understand and which I'm unable to answer. As chemists, we try to prepare new chemical compounds and prove their identity by full characterization e.g. through analytical and spectroscopic methods or crystal structure analysis. If this is successful, we report the results in a research article, normally published in a peer-reviewed international journal. When we are lucky, the articles are read by specialists in the field and eventually cited. I cannot imagine what the "psychological effect" of the content of our publications on the readers could be. In the best possible case they will say "Hey, that's interesting".
Dear Frank,
I agree, but as an example, imagine now that someone is very critical about the methods used to prepare new chemical compounds, and that those who prepare new chemical compounds with the older methods are not able to change their methods in response to the critical remarks. Do you think that they will cite the person who produced the critical remarks?
Dear Marcel M. Lambrechts to me this is again too theoretical. Publishing critical remarks about other researchers work is quite uncommon nowadays in our discipline, although it happens sometimes when really important findings are concerned. It was, however, very common in the 19th and early 20th century where chemists regularly published "Remarks", "Comments", "Refutations", "Notes" and "Replications" to other researchers' papers. As a typical example from the year 1886 please see the attached article taken from the "Berichte der Deutschen Chemischen Gesellschaft". The tone of these articles would be unthinkable today.
Thus, the underlying mechanisms causing the number of citations per publication will probably differ between research disciplines or cultures and evolve in time?
@ Frank T. Edelmann, thanks for your contributions. I want to find out from you the period for your estimated 10 to 15 citations. Is it for some number of years or forever?
Dear Humphrey Danso the numbers 10-15 were taken as an example from the answer given by Mohit Tiwari on November 11, 2020. I think it means the total number of citations over the years. I just wanted to point out that it is absolutely useless to answer the original question with numbers like 10, 10-15, or 100. No one knows if this makes a research paper "influential".
Dear Frank T. Edelmann,
Thanks for your response. I also think the idea of i-10 measure use in Google Scholar and other platforms is also important in discussing citation influence.
As mentioned in the question itself, the average citation rate per publication differs across research domains. Papers published in newly emerging areas will be cited more. As for the minimum number of citations required to call a paper important or influential, the i10-index used in Google Scholar can be considered.
Self-citations could then also be considered 'influential' because journals and referees allowed them to be expressed in public?
Dear Marcel M. Lambrechts I have no idea if self-citations are "influential", but they are an absolutely normal and necessary part of the list of references in a research paper. They are the only way to put a researcher's new results into context with related previous findings. However, they should be used with a sense of proportion and not excessively. Thus if 50% of all cited references are self-citations, I would ask as a reviewer if this is really necessary. Please see this interesting article entitled "Tracking self-citations in academic publishing" which is available as public full text on RG:
Article Tracking self-citations in academic publishing
In this important article it is stated "When used appropriately, self-cites are equally important as cites from the surrounding community, and without tracking them it is impossible to see how scholars build on their own work."
Does the i-10 measure à la Google Scholar also take self-citations into account?
Dear Marcel M. Lambrechts please see this useful article entitled
"Handling self-citations using Google Scholar"
Article Handling self-citations using Google Scholar
The paper is available as public full text on RG. Please also see this related RG thread entitled "What is the difference between H-index, i10-index, and G-index?"
https://www.researchgate.net/post/What_is_the_difference_between_H-index_i10-index_and_G-index
Dear Marcel M. Lambrechts and Humphrey Danso personally I have never been interested in any other index or measure than my h-index on Scopus, which is currently 45. It makes me a bit proud that I just surpassed the magic number of 10,000 citations on Scopus.
https://www.scopus.com/results/authorNamesList.uri?st1=Edelmann&st2=Frank+T.&origin=searchauthorlookup
Referring to the publication mentioned above, why should an h-index including self-citations be considered misleading? Here again, the h-index and other citation measures excluding self-citations also do not take into account why articles have been selected for citation, which could likewise be considered 'misleading'. E.g. when scientists are friendly, frequently cite colleagues, and are socially well accepted by the scientific community (e.g. someone who frequently participates in meetings), they get more citations, indicating that the citation index reflects a form of social integration rather than science quality per se?
Who did what in a multi-author publication and the consequences for individual-based citation indices
Useful publication?
Article Fair ranking of researchers and research teams
Perhaps the factors influencing the scores in RG would be a nice model system of how social integration proximately influences citation indices in general?
By the way, I would consider an article with 100 co-authors scientifically more robust (e.g. there is agreement among the 100 brains involved) than an article with one author (e.g. only one brain involved), with the side-effect that the risk of self-citation will be substantially higher when an article is co-authored by 100 scientists (e.g. all scientists in a given field decided to write an article together) than when an article is written by one author, no?
Dear Marcel M. Lambrechts "Who did what in a multi-author publication and the consequences for individual-based citation indices": First of all, please note that in our discipline (chemistry) teamwork is the norm, so that virtually all of our original research articles are multi-author papers. For every one of these articles I could tell you exactly who did what. Only researchers are listed as co-authors on our papers who made significant and indispensable contributions.
What are exactly 'indispensable contributions'? For instance, when there are no data there is no (empirical) research, so data collection as a first crucial step in the science process is obviously an indispensable contribution, right?
In other words, why 'hand' contributions should be considered superior to 'head' contributions given that both are necessary to proceed in scientific progress?
Some even claim that 'transpiration' is often more important than 'inspiration'....
Varies as a function of discipline and utility. Some authors have 1000s due to wide use of scale or commonly cited reference (e.g., Cohen).
Do you know the R-index?
Article Dynamics of co-authorship and productivity across different ...
They write: "Lower values of R (more collaboration) were seen in physics, medicine, infectious disease and brain sciences and higher values of R were seen for social science, computer science and engineering.... Lower values of R (more collaboration) were associated with higher citation impact (h-index), and the effect was stronger in certain fields (physics, medicine, engineering, health sciences) than in others (brain sciences, computer science, infectious disease, chemistry)."
PS: Interesting that researchers in the social sciences have higher R-index.... I would have predicted the opposite...
Dear Marcel M. Lambrechts "What are exactly 'indispensable contributions'? ": When we report new chemical compounds in a research paper, there are several important factors to be considered. First of all, the compounds must be synthesized. Once this is done, it is generally accepted by the scientific community in chemistry that the compounds must be unequivocally characterized to prove their existence. These methods include e.g. elemental analysis, IR and NMR spectroscopy, mass spectrometry and X-ray crystal structure analysis. Each of these methods requires the contribution by a trained expert who does the measurement and interpretations. If one of these methods is missing, the paper is likey to be rejected. That's why each one of these contributions are "indispensable".
Dear Marcel M. Lambrechts "Do you know the R-index?" Actually, I don't know the R-index. The only index I have ever been interested in is my h-index on Scopus (which is currently 45). My fear regarding all those other indices is that one spends more time counting research than doing research.
Article The R-and AR-indices: Complementing the H-index
How to calculate the h-index?
Example:
In RG, I have an h-index of 43 (without self-citations), but when I look at Google Scholar I am a co-author of more than 10 publications that were cited more than 100 times....
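For readers wondering about the mechanics behind the question "How to calculate the h-index?", here is a minimal sketch in Python of how the h-index and Google Scholar's i10-index are commonly computed from a list of per-paper citation counts. The citation counts used in the example are purely hypothetical.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

def i10_index(citations):
    """Google Scholar's i10-index: number of papers with at least 10 citations."""
    return sum(1 for count in citations if count >= 10)

# Hypothetical citation counts for ten papers
papers = [120, 85, 40, 33, 12, 9, 5, 3, 1, 0]
print(h_index(papers))    # 6: the six most-cited papers each have >= 6 citations
print(i10_index(papers))  # 5: five papers have >= 10 citations

# A single paper cited 50,000 times still gives h = 1
print(h_index([50000]))   # 1
```

Note how, by construction, the h-index can never exceed the number of publications, which is why a single blockbuster paper yields h = 1 no matter how often it is cited.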
Dear Marcel M. Lambrechts it is well established that there are significant differences in citation counts and h-index between RG, Scopus, and Google Scholar. Here are my numbers for comparison:
RG: 9,902 citations, h-index 46
Scopus: 10,011 citations, h-index 45
Google Scholar: 12,302 citations, h-index 53
Dear Marcel M. Lambrechts to my knowledge the significant differences in citation counts between Google Scholar on the one hand and RG and Scopus on the other hand is that Google Scholar counts more citations in books and book chapters. For more information about this please see this useful article entitled "Which h-index?—A comparison of WoS, Scopus and Google Scholar" which is available as public full text on RG:
Article Which h-index?—A comparison of WoS, Scopus and Google Scholar
But it does not solve the problem that the h-index significantly differs across research fields (e.g. low for mathematics versus high for medicine) and that it also ignores who did what in publications?
On a lighter note, I think even if your paper is cited once, it has some degree of influence and can therefore be said to be influential. I have some published papers that have never been cited, and such papers can be said to be non-influential.
A paper becomes fully non-influential only when it has never been read by individuals other than the writer?
What would be the h-index of a scientist with one publication that has been cited 50000 times?
Dear Marcel M. Lambrechts "What would be the h-index of a scientist with one publication that has been cited 50000 times?": I assume that the h-index of this researcher would be very low because the h-index considers both quality and quantity of the scientific output of a researchers. In this context, please see this very interesting article entitled "Reflections on the h-index":
https://harzing.com/publications/white-papers/reflections-on-the-h-index
In this article it is stated "As such the h-index is said to be preferable over the total number of citations as it corrects for one hit wonders, i.e. academics who might have authored (or co-authored) one or a limited number of highly-cited papers, but have not shown a sustained and durable academic performance."
The last remark dealing with the h-index indicates that the number of publications written during a whole scientific career should be considered more important than the number of citations during a whole scientific career?
0.95% of articles in emergency medicine; 1.8% of articles in dentistry, oral surgery and medicine; 0.53% of documents in education and educational research; 2.1% of articles in environmental engineering; 0.94% of documents in horticulture; 0.18% of documents in information science and library science; 0.68% of documents in health care sciences and services; 1.1% of articles in materials science; and 0.63% of documents in chemical engineering were highly cited with 100 total citations or more in the Web of Science Core Collection, counted from the publication year to the end of the most recent year (it depended on the related papers; see below). An average of 10 years is needed for a paper to accumulate 100 citations.
Ho, Y.S. (2021), A bibliometric analysis of highly cited publications in Web of Science category of emergency medicine. Signa Vitae, 17 (1), 11-19. http://www.signavitae.com/articles/10.22514/sv.2020.16.0091
Yeung, A.W.K. and Ho, Y.S. (2019), Highly cited dental articles and their authors: An evaluation of publication and citation characteristics. Journal of Investigative and Clinical Dentistry, 10 (4), Article Number: e12462. https://doi.org/10.1111/jicd.12462
Ivanović, L. and Ho, Y.S. (2019), Highly cited articles in the Education and Educational Research category in the Social Science Citation Index: A bibliometric analysis. Educational Review, 71 (3), 277-286. https://doi.org/10.1080/00131911.2017.1415297
Fu, H.Z. and Ho, Y.S. (2018), Collaborative characteristics and networks of national, institutional and individual contributors using highly cited articles in environmental engineering in Science Citation Index Expanded. Current Science, 115 (3), 410-421. DOI: 10.18520/cs/v115/i3/410-421
Kolle, S.R., Shankarappa, T.H. and Ho, Y.S. (2017), Highly cited articles in Science Citation Index Expanded - subject category of horticulture: A bibliometric analysis. Erwerbs-Obstbau, 59 (2), 133-145. DOI: 10.1007/s10341-016-0308-4
Ivanović, D. and Ho, Y.S. (2016), Highly cited articles in the information science and library science category in Social Science Citation Index: A bibliometric analysis. Journal of Librarianship and Information Science, 48 (1), 36-46. DOI: 10.1177/0961000614537514
Hsu, Y.H.E. and Ho, Y.S. (2014), Highly cited articles in health care sciences and services field in Science Citation Index Expanded: A bibliometric analysis for 1958-2012. Methods of Information in Medicine, 53 (6), 446-458. DOI: 10.3414/ME14-01-0022
Ho, Y.S. (2014), A bibliometric analysis of highly cited articles in materials science. Current Science, 107 (9), 1565-1572
Ho, Y.S. (2012), Top-cited articles in chemical engineering in Science Citation Index Expanded: A bibliometric analysis. Chinese Journal of Chemical Engineering, 20 (3), 478-488. DOI: 10.1016/S1004-9541(11)60209-7
Thus, one out of 100 authors will have at least one publication with more than 100 citations?
Dear Marcel M. Lambrechts "The last remark dealing with the h-index indicates that the number of publications written during a whole scientific career should be considered more important than the number of citations during a whole scientific career?": For more than 20 years since 1976 we did exciting research without knowing anything about the citation counts. Now I can look back to a life work of 400+ papers in international, peer-reviewed journals. This is what I'm really proud of, not the citations.
Please also see this interesting link entitled "Why citation counts don’t matter":
https://medium.com/@OmnesRes/why-citation-counts-dont-matter-4e9b4c487efc
To summarize:
1) the major problem with citation numbers is that you do not know why a paper has been selected for citation;
2) the major problem with citation numbers is that they do not take into account who did what in a published study;
3) ......
Perhaps the challenge is not how many papers you published, but more whether you have been able to definitely solve a problem, right? E.g. At least one paper out of hundreds that truly solved a problem in your professional career in such a way that science stopped working on it....
To Marcel M. Lambrechts
Thus, one out of 100 authors will have at least one publication with more than 100 citations?
About 1% of papers will reach 100 citations. That figure is about papers, not authors. Highly cited authors publish more than one paper with 100 citations. Thus much less than 1% of authors have a paper with 100 citations.
For those that have papers with more than 100 citations, how many papers with at least 100 citations will they have on average to obtain much less than 1% of authors that have a paper with 100 citations?
From my previous private study (unpublished), there were 188,617 articles published by 266,649 authors in the Web of Science category of environmental sciences from 1998 to 2009. A total of 1,519 highly cited articles, with 100 citations or more in the Web of Science Core Collection, were published by 5,331 authors. Thus about 2.0% of the 266,649 authors published at least one highly cited article in the Web of Science category of environmental sciences. There were 239,569 articles published by 275,748 authors in the Web of Science category of water resources from 1965 to 2017. A total of 5,744 highly cited articles, with 100 citations or more in the Web of Science Core Collection, were published by 12,211 authors. Thus about 4.4% of the 275,748 authors published at least one highly cited article in the Web of Science category of water resources. The percentage of highly cited authors differs from category to category.
Dear Marcel M. Lambrechts "For those that have papers with more than 100 citations, how many papers with at least 100 citations will they have on average to obtain much less than 1% of authors that have a paper with 100 citations?" I'm sorry, but as a poor chemist I do not understand this question. Perhaps it's time to leave this thread. Some people (see last answer) seem to spend too much time with counting research than doing research.
Currently I have 15 papers with 100+ citations on Scopus (see attached), but then: So what? What shall I make of this?
Why some individuals are able to frequently produce papers that are cited more than 100 times? Is it the individual, is it the research environment, is it the research topic, is it... that makes the difference?
It's definitely the research area that is the main factor. Computer scientists are among the most prominent examples of scholars who can write papers quite quickly.
Dear Marcel M. Lambrechts "Why some individuals are able to frequently produce papers that are cited more than 100 times? Is it the individual, is it the research environment, is it the research topic, is it... that makes the difference?"
I can tell you only about my personal experiences. My list of most-cited papers contains both original research papers and review articles.
For highly cited original research papers you need:
1. A good idea
2. Luck
For highly cited review articles you need:
1. Excellent overview of the field
2. Diligence and endurance
3. Time