Imagine an article published in 2013 that has been cited 12 times. Is this good? What does this reflect in terms of the citation trend for this particular article? More importantly, how many citations does it take to indicate that an academic article is influential?
It also depends on the context in which the paper has been cited: e.g. consider these 3 options:
1) This topic has been studied by A (2012), B (2010) and...;
2) A (2012) was the first to use this method. Our paper is based on this method;
3) Some authors (for instance, A 2012) have mistakenly assumed/proposed that...
So, to get 10 citations of type 2) is certainly great, but type 3) is something almost no researcher wants to get.
The more an article is cited, the better, because the article may be interesting, may have generated new knowledge, and may be informative to readers. When the number of citations increases, it is good for the author.
Frequent citation shows only current interest and possibly relevance for ongoing projects. All authors want to be cited. I don't know whether that is good or not; it's scientific communication.
In general, the more citations the better ;)
But it depends a lot on the journal and a range of other factors, such as the academic discipline, the database you are using (Google Scholar, Scopus and Web of Science all count citations differently) and the type of articles/texts that cite you (are they themselves highly cited, or just bachelor theses or non-peer-reviewed texts, etc.?). In "Nature", "Science" or "The Lancet" you probably need more citations for your article to be considered influential compared with another, less popular and more specific journal, where a few citations might already make you influential. So, without context it's hard to tell whether 12 citations for an article published in 2013 indicate influence or not.
A good way to weigh an article's influence relative to its journal and year might be to look at all the articles published in that journal in that year and then see in which quantile the article falls. Another way, especially for older papers, could be to see whether a paper contributes to the h-index of a journal, i.e., whether it has been cited at least as much as the h-index of the respective journal. This is what "Publish or Perish" does (http://www.harzing.com/pop.htm).
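To make that concrete, here is a minimal Python sketch of the idea, using entirely hypothetical citation counts for the journal's cohort and an assumed journal h-index:

```python
# Sketch (hypothetical numbers): where does the article fall among all articles
# that the journal published in the same year, and does it clear the journal's
# h-index (here simply assumed to be 5)?

cohort = [0, 1, 1, 2, 3, 5, 8, 12, 20, 45]  # citations of the journal's 2013 articles (invented)
article_cites = 12
journal_h = 5                                # assumed h-index of the journal

quantile = sum(c < article_cites for c in cohort) / len(cohort)
print(f"article beats {quantile:.0%} of its journal-year cohort")            # 70%
print(f"contributes to the journal h-index: {article_cites >= journal_h}")   # True
```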
The number of citations "needed" for an article to be called "influential" depends very strongly on the field. An article can also be influential without getting a lot of citations, or, as in my own case: my most cited works are books - one of which is a course-oriented overview of a field of applied math. Can I say that it is influential if it is cited as a good survey? I am not sure.
I am quite convinced that the sheer number of citations says very little about the influence of a paper, unless you can discount all these factors about the "size" and "popularity" of a field. It is very clear that parts of medicine, for example, get a hundred times more citations per article, probably even more, without the articles necessarily being more influential. In math, on the other hand, it is quite seldom that a paper gets many citations. For example, Andrew Wiles' breakthrough paper "Modular elliptic curves and Fermat's last theorem" has fewer than 2000 citations.
The final question, I suppose, is: what does "influential" actually mean? :-) I am convinced that it is not all that strongly related to citation numbers ...
@Christoph and @Michael,
I agree with your statements. How can you be influential if you are working and researching in a field where only a few specialists can understand and follow you? I'm sure you can't measure your influence by some however-constructed number. If you produce a brand new theory with a breakthrough, you will be cited and will earn broad interest and acceptance. But who ever gets the chance to do research in this way, with such path-breaking results?
@Prof. Christoph, @Prof. Michael and @Prof. Shafig Ibrahim Al-Haddad - your answers are very interesting, and I also fully agree with the statements.
It is difficult to say anything about the actual number and the influence. Citations may occur for many reasons; the quality of the contribution and the journal in which it is published are some of the main reasons why citations increase. Influence can probably be measured by the theory proposed and by how that theory is accepted or extended by various researchers.
I agree with the above that the number of citations that qualifies an article as influential or not is difficult to determine and that it depends on a range of other factors. Some very influential articles received very few citations for a number of years, then achieved world fame and exponential popularity - for example, the article by Teece on dynamic capabilities, or the article by Hofstede on national culture.
In this respect I am more inclined to say that an influential article is the one that receives a considerable amount of citations in an increasing fashion over a specified period of time. From a management research perspective we may even include the notion of "management fashion" (Abrahamson, 1996) because such notion explains the proliferation and popularity of certain concepts proposed in popular articles that get cited a lot over a period of time and then start to disappear in the background.
It is really difficult to determine a number of citations in this regard, because it depends on many factors and variables, some of which my colleagues have mentioned.
This must be discipline-dependent.
I cannot put it to a figure but I use the following as a guide:
I look at the highest citation count of a single paper and the total number of citations of a well-established expert. If someone has similar citation data, that person's output would be at the level of an established expert in the field.
The answer to this question is difficult, if not impossible! Citations have many dimensions and are dependent on the particular field. Suppose you have ordered the journals of a field: 1st, 2nd, .... Then a citation which appears in the 1st journal carries more weight.
I think there is a university in Israel whose mathematics department has ordered all journals and gives points for each publication or citation. In order to be promoted to a given rank, the department specifies how many publication points and citation points you need. This is more practical than trying to judge which paper is influential.
Academic article influence metrics are a hard question to discuss, dear @Reginald! They depend very much on many factors! I have read about this, and I suggest some readings on it. Links follow!
http://www.lib.sfu.ca/collections/scholarly-publishing/scholarly-metrics
http://scholar.google.com/intl/en/scholar/metrics.html
http://crln.acrl.org/content/73/10/596.full
I agree Ljubomir.
Influence metrics are something totally different nowadays, and the so-called altmetrics are becoming more and more important. Actually, just think about ResearchGate.
There is no definite number of citations. Get as many as you can; the sky is the limit.
It is very difficult to specify the actual number of citations needed to indicate that an academic article is influential. The number of citations depends on many factors, including the quality of the contribution, the publication year and the journal in which it is published. Influence may be measured by the theory proposed and how it is accepted or extended by various researchers.
What would you say for discovery research, if you have arrived at something new? I don't think there is a realistic figure for the number of citations required for an academic article to be influential.
It would have been difficult even for Edison and Newton to search and cite like this and get anywhere....
The time-frame is important - another metric might be citations per year.
For me, 12 citations for an article published in 2013 would indicate a level of interest in a short space of time, potentially indicating more impact than, say, 20 citations over a 10-year period. The unknown here, of course, is whether the rate of citations will continue to be high or whether interest will stop.
Also, as Mehran says, the size of the field in which the paper is published is important. For example, 12 citations in a specialist field (e.g., paediatric oncology) might be more influential than 12 citations in a more general oncology journal/topic. If citations could be linked with publication databases, then it may be possible to measure this by working out a denominator for the number of papers published for specific keywords/MeSH terms.
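As a quick back-of-the-envelope illustration of the rate point above (all figures invented):

```python
# Citations per year as a simple rate metric; every number here is hypothetical.
def citations_per_year(total_citations, years_since_publication):
    return total_citations / max(years_since_publication, 1)

print(citations_per_year(12, 2))   # 6.0 per year for the recent article
print(citations_per_year(20, 10))  # 2.0 per year for the older article
```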
As several people have commented, it depends on your field. In the areas of strategy, innovation, or corporate social responsibility, where I publish, I'd say 12 citations in the first year is a good sign. But it also depends on your personal metric. Assuming that the cites are good cites (see Tiia's comment above), more is always better, but you can compare your 12 since 2013 with your other publications and how they did in their first year. If the best you've done before this was 6 in one year, then you've improved a lot -- good for you! If you normally get 35 cites in the first year, then 12 probably isn't very good. Whether you're competing with yourself or trying to persuade a tenure committee that you have earned the nod, showing that your recent articles perform better than your older ones is a big help.
How many? I think this is the wrong way to put it.
It should be expressed as follows: "Where is the best place for an article to be cited -- and by whom -- if it is to be influential?"
A zillion citations by nobodies on web-blogs is worthless; one citation by a famous scholar in [add your best journal in the field here] counts for a lot.
Similarly, a paper can be cited a lot for all the wrong reasons: because the paper is seriously flawed, i.e., the methodology is biased, the results are unrepeatable, the literature review is poorly set out, etc.
Placing the emphasis too squarely on the quantity of citations is a bit like thinking you are popular because you have a lot of Facebook "friends".
Authors with a Hirsch index h = 10 according to Web of Science are, in most disciplines, regarded as internationally influential, so regularly publishing papers that are each cited at least 10 times may be regarded as a very good result.
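For reference, a minimal sketch of the standard Hirsch index calculation; the citation counts below are made up:

```python
# h-index: the largest h such that h of the author's papers have at least h citations each.
def h_index(citation_counts):
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical author with ten papers, each cited at least 10 times:
print(h_index([25, 18, 12, 11, 10, 10, 10, 10, 10, 10]))  # 10
```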
To be influential means to be a hub, like there are hubs in any communication or cooperation network. This is probably the most important conclusion of the work of Albert-László Barabási and his team (see links).
Influential scholars, researchers, scientists etc. are influential because they function as hubs. Citations of their published work play a part in this, but that is only one of many factors, and probably not the major one.
Moreover, even if everybody knows the basic general meaning of the word citation, there is not a single well-defined operational definition of it, but rather a host of different competing ones. Thus we can answer the question only if we agree - for the sake of discussion - on one of the more than 10 non-equivalent (sic!) definitions.
Unfortunately, proposals and analyses of the different citation indices are scattered over many different journals and conference proceedings, and it takes time to find them all (citations notwithstanding!). Also, there has been a proposal for a new sort of citation index here on ResearchGate.
It is high time that somebody took the courage to write a handbook of citation measurement and analysis in which all those different approaches to the phenomenon and measurement of citation are put together and sorted out. Different citation measures have different pros and cons, and thus should be chosen and used with care (or not at all, if a measure turns out to be seriously flawed). Perhaps the linked book is a good starting point on the way to a genuine handbook.
http://www.amazon.com/Linked-Everything-Connected-Business-Everyday/dp/0452284392/
http://www.amazon.com/dp/0452297184/
http://www.amazon.com/dp/1843345897
Personally, from my perspective, it is NOT the number of sources cited; rather, it is which sources were cited. Having said that, many researchers are trained not to look at studies that date back more than 20 years.
From my perspective and experience, I prefer to cite those who were pioneers in their respective field of study, as well as those who have added value to that original study or have developed new theories to contemplate.
Pioneers published their work more than 20 years ago; that's why they are called - in hindsight - pioneers. And usually it can be very revealing and interesting to go back to the original sources of novel ideas, sometimes more than one, because several scholars arrived at the same insight at about the same time but in different places (universities, countries, continents).
"It is NOT the number of sources cited; rather, it is the sources that were cited." In the more advanced citation metrics, this IS taken care of by also taking into account the citations of the cited works, and the citations of those citations, and so on. This can be done quite easily - modulo some technical details - by standard techniques from network analysis (part of graph/network theory).
Still, there are a host of other issues which are not well dealt with by any current citation metric. The problem is one of trade-off between the quality and fairness of the metric and the complexity of its definition and calculation.
As citation analysis is a rather new field of research (as opposed to a field of business opportunity, cf. Garfield and his SSCI), I am sure that we will see more and more research in the next decades, and probably ever more sophisticated approaches weeding out the weaknesses of current metrics while adding more interesting features at the same time. That is happening in other fields, too; why should it be different in the field of citation measurement?
Dear @Paul, I appreciate your view on citation metrics! I like your approach of applying network analysis! Hopefully much new research in this area will be done, resulting in new metrics!
In education, we have an often-criticized practice called "teaching to the test". In research we now have something similar: it is called "write it to be cited" [to be pronounced quickly so that you get the correct "rhyme"].
What do both have in common? It is a wide-spread phenomenon which I call means-end reversal: the measure that was originally meant to serve a goal quietly becomes the goal itself.
At the same time, the focus tacitly moves from a productive activity, i.e. the teaching or writing as such, to the actor who is assumed to be responsible for or the source of that productive activity, i.e. the teacher or the author.
Thus there is also an equivalent educational version of the initial question: how high does a test score have to be to indicate that a student has really learned something?
What can we learn from this analogy? I am not sure, but I found the analogy certainly striking enough to share it with you.
Thank you for that Paul. Your point is well taken. I never thought of it like that before. But, you make a good point!
Now that I wrote it down I can easily give more examples of this wide-spread phenomenon of means-end reversal.
For instance, in the field of Software Engineering there are dozens of so-called software development paradigms, i.e. more or less sophisticated methodologies prescribing the processes and tools software developers - formerly called programmers - should adopt in order to reduce time, costs and errors and increase the quality of programming.
None of those approaches has led to the one and only silver bullet (cf. Frederick Brooks) which solves all known problems of software development. In the wake of a rather new paradigm, so-called Agile Development, a rather radical proposal has emerged which turns the usual order of processes upside down: Test Driven Development, TDD for short. Seems to work well, for some at least. And yes, it is no less than another example of means-end reversal!
Look in your own field of research and development, I'm sure you will find more examples of it.
http://www.cs.nott.ac.uk/~cah/G51ISS/Documents/NoSilverBullet.html
http://www.amazon.com/Test-Driven-Development-Empirical-Evaluation-Practice/dp/3642042872/
Deciding which article is influential is similar to deciding in psychiatry what is "normal" and what is "mentally ill". The normal distribution may help, but in the end you still have to draw lines that divide the normal from the non-normal. We may need some fuzzy set theory combined with the normal distribution to model what is normal and what is excessive.
See also Allen Frances, Saving Normal.
Thanks a lot, Costas!
------------------------------------------------------------------------------------
Interestingly perhaps - although nothing to do with the subject of this discussion thread - there is a book entitled "The Inmates are Running the Asylum" by software expert Alan Cooper (father of Visual Basic), who turned into a Human-Computer Interaction kind of evangelist (from Saulus to Paulus) - highly recommended:
http://www.amazon.com/dp/0062229257
http://www.amazon.com/The-Inmates-Are-Running-Asylum/dp/0672326140
The LSE website (see the following link) has a report on "maximizing the impact of academic research". It is more focused on social science academics, however.
A summary of the LSE findings:
1) Simple indicators for judging citation rates - such as the total number of publications, the total number of citations, and an age-weighted citation rate - do not accurately capture an academic's citation success.
2) Calculating an academic's h-score and g-score provides a more robust picture of how much an academic's work is valued by her peers (a small sketch of the g-index appears after the link below).
3) Across all disciplines in the social sciences, journal articles account for the majority of citations, reflecting the large number of published articles. Books account for 8 to 30 per cent of citations across different disciplines. Books may figure disproportionately amongst those well-cited entries that build h-scores and the g-index. Book chapters, however, are often hard to find and are poorly referenced.
4) Network analysis can help shed light on the difference in citation rates between ‘hub’ and ‘authority’ academics at different stages in their careers, which compares the number of inward and outward citations.
For more details including survey, charts, etc. please refer to the LSE link.
http://blogs.lse.ac.uk/impactofsocialsciences/the-handbook/chapter-3-key-measures-of-academic-influence/
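As a companion to point 2 above, here is a rough sketch of the g-index; the citation counts are hypothetical (for the same counts, the h-index would be 4):

```python
# g-index: the largest g such that the g most-cited papers together have at least g**2 citations.
def g_index(citation_counts):
    ranked = sorted(citation_counts, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

print(g_index([50, 20, 8, 5, 4, 2, 1, 0]))  # 7 (hypothetical author; h-index would be 4)
```

Because the g-index uses cumulative citations, it rewards a few very highly cited papers more than the h-index does.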
The maximum possible number is the number of studies already carried out in that particular area; however, not all of them may be suitable, and only the suitable ones constitute the 'maximum number' that can be cited.
@Mahmoud Omid:
Excellent reference. I was looking for such a work for a long time but didn't find any ... a must-have for both advocates and opponents of citation metrics ... Everything You Always Wanted to Know But Were Afraid to Ask.
Thanks!
I would say at least 10 in Google Scholar in the first 3 years, and 50 in the first 10 years.
Dear @Paul Hubert Vossen, thank you for the compliments. Personally, I am never afraid of asking, and I am here to learn.
There is no fixed number at all. It largely depends on how much academic weight a writer wants to put in. Actually, references are meant to support the writer's labour, knowledge and original views.
It is hard to say that X citations will be enough for an article to have an impact; and since impact itself is a variable, it would also be necessary to say how large that impact should be. Moreover, it is not only the number of citations that determines the real impact of a particular article, but also who is citing it (e.g. self-citations are not as valuable) and in what kind of journals the citing articles appear. So, first, there are a number of factors that determine the impact of an article, and second, quantifying them in absolute terms is rather difficult.
Professor Vivarelli,
I agree with your numbers. Your numbers are consistent with the Google Scholar metric for the h-index of what it considers to be influential journals. For more information on the Google Scholar metric and what it means see the following link:
http://scholar.google.com/intl/en/scholar/metrics.html
Just to be clear: if I had a person up for tenure review who had 10 citations per article on eight or more articles in a three-year period (not merely self-citations), I'd award tenure to that person.
It all depends on the context, the novelty and the ideas/results published in a paper, the area/discipline in which it has been published, and whether the results/hypotheses lead researchers to appreciate and adapt/utilize it in different ways. More citations mean greater adaptability and practical value of the paper. In my opinion, 5-10 citations in the initial years may be considered worthwhile and of high value.
Dear Kuldeep: I fully agree with you. For all researchers, 5-10 citations of their papers will be great! Publishing in good journals helps the citation of our articles. Some of the indexed journals "advise" authors to add at least 5 citations from their journal in the last 2-3 years; such practices not only boost the IF of the journal, but the chances are that our paper may also be cited by others working in the same or related areas.
With best wishes
Sundar
It's important to write and convey to others what you want to; never bother about whether it gets cited or not.
Apart from citations, how many times an article has been downloaded must also have value, since we mostly download articles we find preferable/interesting/useful.
Downloads and views should both be taken into account.
In terms of ResearchGate, what is the difference between a "download" and a "view"? I typically find that the number of (PDF) downloads tends to be higher than the number of views for my papers. Thus, a "view" cannot simply be a click on the article title, because you would have to do that in order to get to the PDF, such that the number of views would always be higher than the number of downloads.
I'm missing something, right?
Citations are very relative. For example, a difficult-to-read paper (of very high quality) is expected to have fewer citations. On the other hand, an easy-to-read paper compatible with the existing mode should have more citations! This does not mean that the second paper is better than the first! All these things are very complicated!
@Costas,
it is not only the factors you mention. It's also a matter of the research field. Please try to estimate the number of citations of Peter Higgs' boson paper before the famous experimental result and the Nobel Prize.
You are right, dear Hanno. Chemists see a mean of about 30 or 40 per paper per year; pure mathematicians 2 or 3 per year. Thus, in order to compare, we should take into account which field we are talking about, and even within a field there are differences.
Let me add something more about ALTMETRICS, which I mentioned in a previous answer in comparison to the other metrics available. "This free Web site is a central hub for information about the growing altmetrics movement, which it defines as “the creation and study of new metrics based on the Social Web for analyzing and informing scholarship.” Cofounded by prominent figures in the world of bibliometrics, such as Jason Priem and Heather Piwowar, altmetrics.org maintains links to new online tools for calculating impact. Other prominent features include an altmetrics “manifesto” that argues how altmetrics can improve existing scholarly filters."
http://altmetrics.org/manifesto/
http://crln.acrl.org/content/73/10/596.full
There are certain institutions which demand more of the teaching faculty's time for academic and administrative work, so that the teacher has less breathing space for research (as in the case of our friend @Miranda Yeoh).
So, measuring by citations alone is irrelevant there. Hence, citation measurement can only be a customized yardstick.
An equivalent scale needs to be created.
The quality of the article, the academic wisdom of the author(s), its contribution to current challenges in society, as well as the accessibility of the journal where the article is published, determine the rate of its citation. For instance, articles published in open access journals have a higher rate of consultation and citation. Measuring the citation traffic of a particular article fairly demands that all articles have an equal chance of being consulted by researchers.
If your paper was cited by Obama, Xi Jinping, Merkel and Putin - it is influential paper. So, in some cases 4 citations could be good enough.
I doubt anybody can answer in terms of an absolute number. It depends on who is citing you: 30 citations by students publishing in their department's student journal would not be equal to 3 citations by renowned scholars.
So it depends on the following:
WHO is citing the article - Repute of the citing author
Diversity of citations in terms of journals and countries
Time Span - for how long a study remains influential in a given tradition
It really depends on the discipline; in economics, ISI citations are more important than Google Scholar ones. Let's say that scoring fewer than 10 ISI citations in the first 10 years after publication is a failure.
Dear friends, "Citation analysis across disciplines: The impact of different data sources and citation metrics" is a valuable resource. "If one considers the traditional performance indicator (ISI General Search citations), academics in the Sciences out-perform academics in the Social Sciences and Humanities. When using a more comprehensive data-source and correcting for career stage and the number of co-authors, academics in the Social Sciences and Humanities out-perform academics in the Sciences...."
http://www.harzing.com/data_metrics_comparison.htm
Factors which influence the value of the citations contributing to the repute of the academic article are:
1) Repute of the citing author.
2) Diversity of citations in terms of journals and countries.
3) Length of time/ time span for which the study remains influential in a given tradition.
Hi Reginald,
Please check the Nature article. Though it will not precisely answer your question, I feel the findings of this paper are relevant to the topic.
The title is:
The top 100 papers: Nature explores the most-cited research of all time.
Thanks,
Ram
http://www.nature.com/news/the-top-100-papers-1.16224
Dear @Reginald, "The most-cited articles of the 21st century"! This resource is as good for this thread as the previous one.
Conference Paper: The most-cited articles of the 21st century
For a new article (within 3 years), 15-20 citations would be enough. In the case of an older article (more than 10 years old), a minimum of 100 citations is required.
These figures are based on a discussion within my academic group.
I agree that being cited as a positive contribution is valuable and that being cited as a misguided or poorly executed effort is a negative reflection on the author. Nevertheless, it is useful to note that the majority of citations reflect positive regard, or at least recognition of the importance of one's work.
Empirical papers are generally cited in relation to the prior impact of the journal where the publication appeared. Conceptual, theoretical papers take longer to get cited; they almost wait to be discovered by an influential author. Once acknowledged in an important paper, citations can grow exponentially.
Self-citations can also enhance the visibility of a paper, but they are discounted by some scholars. Invited chapters in edited books are not a good bet for getting cited. Invited contributions to handbooks have more prestige, but still receive limited citations. Some papers have a long latency and are not "discovered" as important until the problem studied gains popularity.
These observations reflect my own experiences in a long career.
Colleagues, I came across this very interesting article in a literature review for a study I am currently undertaking concerned with Google Metrics and citations of business communication articles; my article is available on ResearchGate at:
https://www.researchgate.net/publication/280655748_Citation_Differences_between_ABC_Journals_and_Related_but_Unaffiliated_Quality_Journals
Furthermore, Walters (2011, p. 1629), who analyzed the annual citation counts for 1172 articles published in 13 American Psychological Association journals, says:
"When the sample was divided into four categories of impact using the total citation counts for each article—low impact (0–24 citations), moderate impact (25–99 citations), high impact (100–249 citations), and very high impact (250–1763 citations) —the yearly citation counts of low to high-impact articles peaked earlier and displayed a steeper decline than the yearly citation counts of very high-impact articles. Using 5 or more citations a year, 10 or more citations a year, and 20 or more citations a year as markers of moderate-impact, high-impact, and very high impact articles, respectively, and using the most cited articles in a journal during the first 5 years of the follow-up period as indicators of high impact and very high impact showed promise of predicting impact over the entire 25-year period."
Reference
Walters, G. D. (2011). The citation life cycle of articles published in 13 American Psychological Association journals: A 25‐year longitudinal analysis. Journal of the American Society for Information Science and Technology, 62 (8), 1629-1636.
I also forgot to mention that many of your posts are consistent with the empirical evidence found by Walters (2011) I just cited in the above post. Thanks to everyone for answering my question!
One more thing, I was able to find the PDF of the Walters (2011) article in ResearchGate:
Here is the link:
https://www.researchgate.net/researcher/70733292_Glenn_D_Walters
Thank you Reginald,
Your providing a reference should be very helpful to those of us mentoring graduate students. As in everything, quantitative and qualitative (including experiential) data offer alternative, but also converging, insights.
As many as possible; if an article is good, people will definitely continue to cite it.
I can add that publishing in journals that are not indexed in library subscription databases or listed with Google Scholar has reduced the visibility of some of my articles. As with George, some of my older articles are now getting some notice because they are available to the world via ResearchGate.
I believe it takes about 25 years to know the true value of an article's influence. I completely agree with George that articles need time to get some notice and to be cited.
"Citations are a proxy measurement for attention which don't completely reflect the impact of a work..." Good discussion follow!
What are the drawbacks of citation count?
http://www.quora.com/What-are-the-drawbacks-of-citation-count
Surely it will also depend on the size of the discipline, or rather the specialization within the discipline? A paper published in a discipline with 1000 researchers might get more citations than one in a discipline with only 100 researchers. Another issue is that if there are inaccurate data or "funny" ideas, there might be many citations resulting from people pointing this out or disagreeing. Making a controversial statement is likely to increase the number of citations!
I would say it depends on why you ask. If it's for self-evaluation, I would say (and I have done this for myself): first, name someone in your immediate field whom you have great respect for and wish to emulate. Then name someone in your immediate field whom you have some respect for, but whose accomplishments at their current career point you would consider a less than fully developed career. If you can, name three or so for each category (try to get pairs on either side with similar "career points"). Gather some statistics (number of publications, number of citations, number of publications with over 50 citations, etc.). Compare your record to those data. Do some statistics to try to account for age, number of years as a researcher and so on.
Then be honest with yourself. More than likely this will only tend to confirm your prior belief.
I suggest that process because I think the level of activity in your field can make a difference, maybe a profound difference, in the number of times an important paper is cited (BY OTHERS -- always subtract the number of times an author cites themselves). Not to be curt, but consider that if there are only a couple of hundred people in the world doing the kind of work you're doing, it will be hard to get to 10,000 citations. So we can agree that N can't be the be-all and end-all, provided we agree that important work can be done on subjects that don't have broad appeal.
If you do what I suggest, you will find that "researchers" who write review papers can be very strong on paper. Now you need to answer the question "Do you want to be good or look good?"
I always wanted to be good, and I never figured out a good way to quantify how good I really was. (I use the past tense because I'm just recently partly retired) I spent enough time on the problem to convince myself I was good enough to carry on, but not so much time that I regretted having wasted time on self-aggrandizement.
However, if you want to do this to bullshit your boss, things become much simpler. If you tend to publish alone or with only a few coauthors, normalize citations by the number of coauthors. If you concentrate on original content, account for that (in some complex-sounding but potentially fully meaningless way). For example, you could compare the number of citations for a "good" review paper to those for a "good" original contribution and scale by that ratio. This exercise may be necessary to pay the bills, but try to keep the time you spend on this crap to a minimum. I think, in the end, you will be proud to have taken the high road, if you can.
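A tiny illustration of the co-author normalization and self-citation subtraction mentioned above, with made-up numbers:

```python
# Fractional counting sketch: exclude self-citations, then divide each paper's
# remaining citations by its number of co-authors. All figures are hypothetical.
papers = [
    {"citations": 40, "self_citations": 5, "coauthors": 4},
    {"citations": 12, "self_citations": 2, "coauthors": 1},
]

adjusted = sum((p["citations"] - p["self_citations"]) / p["coauthors"] for p in papers)
print(adjusted)   # 35/4 + 10/1 = 18.75
```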
Phillip, this is the best bit of advice I've come across. Numbers, ultimately, are all smoke and mirrors. It's how you honestly feel about yourself.
Citations here at ResearchGate are inaccurate, and they may be subject to manipulation!
Not all self-citation is self-promotion. If you are researching a specific subject over a number of years, it might be necessary to cite your own work as you build on previous work, particularly if your work is important or if there are few people publishing in your specialised field. Generally, as someone who publishes in both science and the humanities, I find that the latter tend to do more detailed citing than the former. Any comments on this?
I tend to agree with you, Professor Jacobson. If an author's own work is not good enough for him or her to cite, why should it be good enough for others to cite?
Your argument is strong. There is ample proof that in the more esoteric fields authors self-cite a good amount.
Hi
Citations are a kind of symbol, like quotes on RG; they are not invalid, but they do not measure value by themselves.
I agree with Mr. Leon Jacobson and Mr. Reginald L. Bell. It can also happen that an idea remains misunderstood for a long time; citations are then rare.
It seems that somebody does not believe in "misunderstood ideas" :-)
It takes some time after publication to start acquiring citations. Publishing in better-known journals tends to increase citations. I like the principle of open access and did publish some of my high-quality work in open access journals, but they do not get nearly as many citations as the ones in more traditional journals. From practical experience, 20+ citations after two or three years is a marker of a well-cited article. Good luck everyone!
Twenty citations of a scholarly article after two or three years is indeed a well cited article!
Thanks Professor Kahana.
Great question/comment, Reginald (and others above)! Now a variation: what is the average or typical number of citations for a scholarly article over that 12-year period you mentioned (or a 10-year period)? I have looked and looked but can find no paper on the typical number of cites for a journal ARTICLE. I see ways to see cites of JOURNALS and even cites of AUTHORS, but cites of ARTICLES? I would think 20 cites for any article would be good even over ten years, but am I right? Somebody MUST have studied this, over and beyond the urban myth that 90% of all papers never get cited.
John,
Here is an excellent study (Walters, 2011) that I have cited in the past which comes close to answering your earlier question:
Walters, G. D. (2011). The citation life cycle of articles published in 13 American Psychological Association journals: A 25‐year longitudinal analysis. Journal of the American Society for Information Science and Technology, 62 (8), 1629-1636.
I am sure that the life cycle of a paper also depends on the type of content. For example, a paper presenting original data might remain relevant for decades, whereas a theoretical paper either becomes less relevant as theory changes or becomes mainstream and is no longer referenced directly. For instance, how many physicists directly cite Einstein's papers these days, although they use his theory?
I agree with you Leon.
There are seminal works in every field, and even they decline in citation frequency over time, because theory might change in directions that do not pivot on the seminal works, or because their contributions have been completely integrated and accepted into the methods of a particular field.
Seminal works are often cited on a perfunctory basis, as a show of respect and admiration for the guru, e.g., Cronbach (1984) for scale reliability or S.S. Stevens (1946) for scales of measurement; the contributions to research methods are so integrated that these gurus need not be cited in most cases. The contributions remain, while citations of the seminal articles decline over time, because citing them to justify broadly accepted methods is often unnecessary. Imagine how often R.A. Fisher would be cited if he were cited each time a researcher used ANOVA.
http://www.britannica.com/biography/Ronald-Aylmer-Fisher
References:
Cronbach, L. (1984). Essentials of psychological testing. New York: Harper and Row.
Stevens, S.S. (1946). On the theory of scales of measurement. Science 103 (2684): 677–680.
Quite! The ultimate value of a published paper will occur when it is referenced in a history of that discipline.
The US immigration office considers 19 citations, of which 16 are independent citations, to be evidence of the worldwide attention given to the work and of the author's influence on researchers well outside the authors' own circle of collaborators and superiors.
Citations, Brilliance and Persistence!
"Scientific brilliance applied to a research project and any articles resulting from it is often rewarded by recognition, citations and prestige. This is as it should be given that being clever and rigorous in theoretical concepts, methodological design and interpretive discussion can mean producing an exceptional article based on excellent research. Such an article is likely to be highly cited (though there is no guarantee that this will be the case) and thereby earn its author(s) desirable positions, generous grants for future work and the best postgraduate students to help with that research. The trick, however, is to sustain this high level of scientific accomplishment through further articles, into a book or two perhaps, and, ideally, over a long and successful career of scientific publishing.
Persistence and sustained effort are therefore required, and they can be more difficult to attain and maintain, though it is no doubt a consolation to many scientists that they can be achieved by those who may not think themselves the most brilliant among their colleagues. Obviously research excellence, the desire to influence the thought and work of other scientists, and a presentation that is accurate and engaging must be sustained throughout a single article or research project, no matter how long, but a scientific author must also persist in achieving these qualities of successful professional writing in every article, chapter and book he or she publishes. Given that even the most brilliant article will usually create a buzz and generate numerous citations only for a limited period of time, a body of excellent work created over a number of years is a much more reliable tool for earning a continuous stream of citations in the writing of other scientists..."
https://www.linkedin.com/pulse/citations-brilliance-persistence-rene-tetzner?trkInfo=VSRPsearchId%3A577811291466151784430%2CVSRPtargetId%3A8812213049638122744%2CVSRPcmpt%3Aprimary&trk=vsrp_influencer_content_res_name