Talking about the IF we should always remember the following caveats:
1. Citations are only a proxy measure of the actual impact of a paper; your paper could have an enormous influence while not being cited in academic journals.
2. Impact doesn't only occur in the two years following the publication of the paper: in slow-moving fields, where seminal papers are cited five or ten years after publication, these late citations won't get counted towards the impact factor, so the journal's impact factor will be smaller than justified (see the worked formula after this list).
3. The impact factor measures the average impact of papers in the journal; some will be cited much more, some not at all.
4. There are ways for journals to 'game' impact factors, such as manipulating article types so that the less cited ones won't be counted in the calculation.
5. The methods used for calculating the impact factor are proprietary and not published.
6. Averages can be skewed by a single paper that is very highly cited (e.g. the 2009 impact factor of Acta Crystallographica A).
7. Although impact factors are calculated to three decimal places, I haven't seen any analysis of the error in their estimation, so a difference of half a point may be completely insignificant.
8. New journals don't get an impact factor until they have been publishing for at least three years.
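Since caveats 2, 3 and 6 all hinge on how the IF is computed, here is the nominal two-year formula as publicly documented by Garfield. Per caveat 5, the exact accounting of "citable items" used in practice remains opaque, so treat this as the published definition rather than the proprietary procedure:

```latex
% Nominal two-year impact factor of a journal for year Y:
\[
\mathrm{IF}_{Y} =
\frac{\text{citations received in year } Y \text{ to items published in } Y-1 \text{ and } Y-2}
     {\text{number of citable items published in } Y-1 \text{ and } Y-2}
\]
% E.g. the 2009 IF counts only 2009 citations to 2007-2008 items, so a
% seminal paper first cited heavily in 2012 contributes nothing (caveat 2),
% while one 2008 paper cited thousands of times in 2009 can dominate the
% average (caveat 6).
```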
If impact were REALLY the thing to go by, Google Scholar and co. should rank above most journals. What good is an article if it is not accessible because subscription fees are too high for libraries and individuals?
There may be some truth in it. I think citations are related to the visibility of the article: if the journal you publish in is the most widely read journal among researchers in your field, then there are good chances of citations even though its IF is low. Conversely, if a journal's IF is high but the journal is not read by the researchers in your field, the chances of citations will be lower. I think the key point is that we have to publish in the journals where the results will be most relevant and can reach the right audience.
I also agree 100% that the IF is not a perfect measure of a research paper, because the IF is a quality measure of the journal as a whole, not of the individual paper. It does not mean that a research paper published in a high-IF journal is a very good paper, or that a paper published in a low-IF journal is not.
I agree with the points made by many people in this discussion, and I like Sinjab's view. Acta Crystallographica A is a good example of impact factor fluctuation: the rise of that journal's IF happened after a paper called "A short history of SHELX" was published by the journal in January 2008. The impact factor of a journal matters less than the quality of the work.
Besides, I would like to offer a glimpse of the history of impact factors.
Many of you may know that citation indexing was originally designed by Henry Small and Eugene Garfield for information retrieval. Garfield conceived the idea of a journal impact factor in 1955, and Garfield and Irving H. Sher introduced the term "impact factor" in 1963 [http://garfield.library.upenn.edu/papers/jamajif2006.pdf]. At that time it was used as a threshold for including journals in the SCI database. Later on, it was increasingly used for evaluating individuals' performance, career promotion, hiring of scientists, funding, awards, etc. Garfield himself warned against improper use of the IF in many of his essays, for example: Garfield, E. The Impact Factor and Using It Correctly, Der Unfallchirurg, 48(2) p. 413, June 1998. Available at: http://www.garfield.library.upenn.edu/papers/derunfallchirurg_v101(6)p413y1998english.html
Recently, in the San Francisco Declaration on Research Assessment (DORA), a document drafted last December at the annual ASCB meeting and now posted online, the scientists write: "It is … imperative that scientific output is measured accurately and evaluated wisely." Their 18 recommendations urge the research community to "eliminate" the use of journal impact factors in funding, hiring, and promotion decisions.
Signatories include Science Editor-in-Chief Bruce Alberts (see his editorial); AAAS, Science's publisher; dozens of other editors, journals, and societies; as well as the Howard Hughes Medical Institute and the Wellcome Trust, which are major research charities.
Thomson Reuters responded to the DORA in this statement, agreeing that: "No one metric can fully capture the complex contributions scholars make to their disciplines and many forms of scholarly achievement should be considered." The company notes that the impact factor "is singled-out in the Declaration not for how it is calculated, but for how it is used."
You can see the editorial by Bruce Alberts, Editor-in-Chief of Science, on "Impact Factor Distortions": http://www.sciencemag.org/content/340/6134/787.summary.
See further: The end of journal impact factors? http://stevenhill.org.uk/2013/05/20/the-end-of-journal-impact-factors/
In my view, the original question posed, about the relationship between journal impact factors (IF) and the citation frequency (CF) of individual papers, was perhaps not that interesting (I've no idea if this relationship has changed or not), but the discussion which has evolved from the question is.
I've always seen IF as an extremely overrated bibliometric tool when used in research assessment, and I thank Guna for pointing out the DORA document in this context. This document basically SCREAMS - DON'T DO IT! - neither funding agencies nor institutions should use IF for research assessment when determining funding or hiring of scientists. There are several reasons behind this recommendation. Below is the DORA document, explaining these reasons and more - an important read!
Research suggests that certain factors favor some types of publications being cited. For example, some areas of knowledge are cited more than others: citation rates in the sciences are usually higher than in the social sciences and humanities.
@Pandya, it is true that there is no straightforward relationship between article citations and the impact factor of journals. As you said, one paper can boost the IF of a whole journal.
For example, Acta Crystallographica Section A had rather modest IFs prior to 2009, when its IF sky-rocketed to 49.926, going even higher in 2010 (54.333); for comparison, Nature's 2010 IF is 36.104. The rise of the IF happened after a paper called "A short history of SHELX" was published by the journal in January 2008, and it has been cited 26,281 times since then (all data are from Web of Knowledge and were retrieved in May 2012).
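To see how strongly a single outlier can skew an IF-style average, here is a minimal arithmetic sketch in Python. The paper counts are invented round numbers, not Acta Crystallographica A's real publication counts, and the real IF window (citations in one year to items from the two preceding years) is simplified to a plain mean:

```python
# Invented illustration: a journal with 500 citable items in the IF window.
ordinary = [2] * 499        # typical papers, ~2 citations each
outlier = [26_281]          # one extremely highly cited paper (cf. SHELX)

mean_without = sum(ordinary) / 500
mean_with = (sum(ordinary) + sum(outlier)) / 500

print(f"average without the outlier: {mean_without:.3f}")  # 1.996
print(f"average with the outlier:    {mean_with:.3f}")     # 54.558
```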
I'm absolutely no proponent of the journal IF, but to claim that there's NO relationship between the IF, quality and number of citations as Devang and Gunasekaran do is simply not correct.
The example used to argue this is a rather amusing one, but also quite exceptional. How many papers will get cited over 26000 times?
However, most importantly, the example also illustrates that there ARE relationships between the parameters mentioned. Although quality is a subjective concept, if a paper is cited thousands of times, it must be said to possess some qualities, in this case being an easily cited methodological summary. I think many agree that citation frequency of a paper is a useful scientometric measure of quality. Then, the fact that the IF of that journal did rise due to the many citations of this single paper clearly illustrates that there IS a relationship between the citation frequencies of the papers published and the IF of the journal. One could possibly question the algorithm used to calculate the IF, when a single paper can have such a dramatic effect on it, but that's another story.
We can continue to criticize the value of the IF, but let's stick to the facts ;-).
"I think many agree that citation frequency of a paper is a useful scientometric measure of quality. ..."
The impact factor of a journal changes from year to year based on citations to its articles. It is globally accepted that the quality of a journal is measured by its citations: citations to papers are the basic elements from which a journal's IF is calculated.
The article "The weakening relationship between the Impact Factor and papers' citations in the digital age," by George Lozano and others, published in the October issue of the Journal of the American Society for Information Science and Technology (JASIST), found that the relationship between article citations and the journal is weakening. The authors also claim that highly cited articles are increasingly being found in non-highly-cited journals, resulting in a slow erosion of the predictive value of the journal impact factor (http://onlinelibrary.wiley.com/doi/10.1002/asi.22731/abstract).
Their manuscript can also be found on arXiv: http://arxiv.org/abs/1205.4328
They have empirically proved it.
Generally, researchers believe that papers published in high-IF journals will receive more citations than those published in low- or medium-IF journals. But not all papers are equally cited; citation ultimately rests on a paper's own merits. Sometimes one or two papers in a journal get an extraordinarily large number of citations, and that raises the journal's IF.
You mentioned that one could question the algorithm used to calculate the IF, when a single paper can have such a dramatic effect on it, but that's another story.
"I'm absolutely no proponent of the journal IF, but to claim that there's NO relationship between the IF, quality and number of citations as Devang and Gunasekaran do is simply not correct." I totally agree with this Björnsson;s viewpoint, Note the concept of rank-normalized IF proposed by Pudovkin and Garfield (2004), and its application example in this paper in my profile:
Rank-normalized journal impact factor as a predictive tool. Archivum Immunologiae et Therapiae Experimentalis 2009, 57 (1), 39-43.
Dear all, I must say that, with the exposure now available to the publication process and to post-publication citations of papers, journals have their own ways of improving citations and impact factor; the impact factor does not directly indicate the quality of the journal.
I am of the opinion that the IF should not be given much priority in evaluating a scientist's contribution to a field or in rating his/her quality of research, as there are a number of opportunities to increase the IF of a journal using various gimmicks. Many such efforts have already been highlighted by the scientific community. So complete dependence on the IF to measure the quality of a journal is not correct.
We should all agree on one point: more than the numbers (impact factor, citations, h-index), the quality of the paper or research work, its applicability to the public and the scientific community, and its contribution to advancement should be the more significant things we look for in researchers.
There are a good number of journals publishing high-quality research papers that do not come under the ambit of the IF for various reasons, viz. vernacular publications, traditional knowledge, etc. So I strongly believe the IF is not the only criterion by which to judge the quality of a journal.
I agree that there should be only a weak correlation between the two factors (IF and citations). Simply put, there are many papers published in open access journals (with no IF) that are of good quality and more likely to be cited, sometimes because they are the first to report something, whereas some papers with very detailed results in a very specific area, published in high-IF journals, are not cited much due to limited interest.
Fathi M Sherif stated: "I agree that there should be only a weak correlation between the two factors (IF and citations)". However, have you actually studied the correlation?
Above I quoted my 2009 article (see my RG profile), which showed a real application of the rank-normalized journal IF as a proxy for real citation frequency and, accordingly, as a predictive tool: a priori qualification of recently published papers is a rational time- and cost-saving alternative (or at least a significant supplement) to traditional informed peer review. So blanket criticism of the IF is at least partly exaggerated.
Note also the discussion of the positive aspects of the IF (and other citation indices) as an objective evaluation system for science and scientists, especially in countries behind the scientific leaders, presented in the attached 2003 paper from Nature. Lomnicki claimed that the system is wrong and unjust, but other systems are much worse; critics of evaluation based on citation indices propose only "a utopia with high moral standards".
Yes, I still believe the impact factor has not lost its sheen in terms of use, but there has been a lot of discussion about which of the parameters (impact factor, citation index, h-index, and many more) rightly defines a researcher's impact and strength.
Arguably, the IF, h-index and other indices are uni-dimensional, whereas the success of any scientist is multifaceted. Indeed, using indices is an easy way for policymakers and granting agencies to get some idea, though not necessarily a thorough one, of a scientist's quality. Improvements can easily be made, I think. For example, to judge the true impact of a given paper, one could divide the number of citations the paper has accumulated by the impact factor of the journal; this shows whether the paper has contributed positively or negatively to the IF. In this way, a paper in a high-IF journal that was cited only once or twice will weigh less on a CV than a paper in a low-IF journal that was cited many times, and one can better judge how much the paper really contributed to the advancement of the field. (A small sketch of this ratio follows below.)
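Here is a minimal sketch of that ratio, assuming citations are first normalized to citations per year so they are comparable to the IF's annual scale; the function name and all numbers are hypothetical, not an established metric:

```python
def contribution_ratio(citations: int, years_since_pub: float, journal_if: float) -> float:
    """Citations per year divided by the journal's impact factor.

    > 1.0: the paper is cited above the journal's average (pulls the IF up).
    < 1.0: the paper is cited below the journal's average (drags the IF down).
    """
    return (citations / years_since_pub) / journal_if

# Hypothetical papers:
print(contribution_ratio(citations=2, years_since_pub=4, journal_if=30.0))   # ~0.017: weighs little
print(contribution_ratio(citations=80, years_since_pub=4, journal_if=1.5))   # ~13.3: weighs a lot
```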
The answer to this, actually, is the emergence of OPEN ACCESS PUBLICATION. It is your research that should speak.
If you publish your research work in any journal, its content and standard will decide its future, not the journal's impact factor.
I personally believe that some journals/publishers have formed a nexus to improve their impact. I formed this opinion because the Journal of Medical Microbiology, which has a history of more than 50 years, has a lower impact factor (2.3), whereas newer journals with only about 15 years of history have higher ones (e.g. BMC Infectious Diseases at 3.03).
I grant that the former is not completely open access, but some publishers somehow manage to keep increasing their impact very quickly.
I agree with Samuel Arba Mosquera. Impact factor and citations may not correctly evaluate the merit of a scientist or researcher: the impact factor measures the usefulness of a journal only to those who read papers and cite them in their own publications. A large number of other researchers may never publish, yet still benefit from the findings of a paper published in that journal. I find that recently most journals have gone online, with open access, and it is very easy to keep track of the number of visitors to a journal's website, the most downloaded papers, etc. It may therefore be useful to use the number of views of a paper as a measure of its impact and popularity.
In general I agree that the citations received by a paper are a better measure than the IF of the journal where that paper was published (although the two measures are clearly correlated). I also agree that, with new trends in open-access publishing, individual papers may become more important in evaluating a scientist's impact. However, I also think the very idea of basing the evaluation of a scientist's impact mainly on citations is a misleading one.
1. If somebody's original idea is stolen from an earlier publication (supposing, for various realistic reasons, that this idea was not duly detected in that publication, or that rewording allowed a good disguise), the "thief" never cites the original paper, because doing so would make it easier for readers to discover the theft.
2. In many cases a published paper is thoroughly discussed in lab seminars but, for various reasons, not much cited; or it is used in preparing and studying for a thesis, but also not cited.
3. There are many ways a person can accumulate citations and a publication record in high-IF journals without ever having made any substantive contribution to those papers: e.g., just doing a couple of experiments in the lab without any participation in suggesting the basic idea, writing up the paper, or revising it; or just having found the finances for the work without any real participation in developing the paper.
4. The current system of counting impact is not scaled by the number of co-authors. Somebody who has participated somehow in papers with, say, 41 authors has in most cases had much less impact than the author of a single-authored paper. (Moreover, the more co-authors there are, the more self-citations there will be, plus citations by good colleagues who have received that paper, etc.) A simple fractional-counting sketch follows after this list.
5. There are scientists trusted as very good, highly qualified reviewers of journal manuscripts in pre-publication peer review, but they may not be very active in writing papers, or may not care much about pushing a paper into high-IF journals.
6. There are scientists clever and skilled at joining high-impact scientific collectives (jumping on their band-wagon) and becoming co-authors without much effect on the substance of the theory or methods of the published work.
7. Citations, and especially downloads and clicks in open-access publishing, may be obtained not because a paper is novel, of exceptionally high quality, or theoretically deep and important, but for many other possible reasons (curiosity, the size of the friendly network a person belongs to, affiliation with an institution whose publication record is more frequently monitored, etc.).
8. Some view or idea may be many years ahead of its time and might get its due share of citations only when the author has already departed from the big friendly society of fellow scientists.
I could continue here, but let us agree that IFs and citations should not be the only or main criterion in the evaluation of scientific impact. However, they should not be abandoned either, because there is rarely a scientist who has strong impact but never publishes and gets no citations.
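To make point 4 concrete, here is a minimal sketch of fractional author credit, a standard bibliometric idea in which each paper's citations are split equally among its co-authors; the records below are invented:

```python
def fractional_credit(papers: list[tuple[int, int]]) -> float:
    """papers: list of (citations, n_authors) pairs.

    Each paper credits an author with citations / n_authors,
    so a 41-author paper gives each author 1/41 of its citations.
    """
    return sum(citations / n_authors for citations, n_authors in papers)

# Invented records:
team_author = [(100, 41), (80, 41)]  # two highly cited 41-author papers
solo_author = [(30, 1)]              # one modestly cited single-author paper

print(fractional_credit(team_author))  # ~4.39 credited citations
print(fractional_credit(solo_author))  # 30.0 credited citations
```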
IF and paper citations are two different things: a paper published in a low-IF journal can be highly cited, and even a paper published in a journal with no IF at all can attain many citations.
You are correct when you say that there are many papers published in low-IF journals that get more citations than some papers published in high-IF journals. But you are not correct when you say that a journal's IF is not related to the citations an average paper published in that journal gets.
I repeat with additional comments on my previous statements:
"You are correct when you say that there are many papers published in a low-IF journal that get more citations than some papers published in a high-IF journal." This is consistent with saying that for a particular, specific paper its citation rate does not depend directly on the IF value specific to the journal where this paper was published. But the statistical distributions (and also means) of the number of citations received by all papers published in journal X is strongly correlated with the IF value of this journal X. Basically, IF of a journal is calculated as based on citation statistics. Thus, "But you are not correct when you say that journal IF is not related to citations an average paper published in the journal gets." Expectation for the number of citations a paper ultimately receives is strongly correlated with the IF value of the journal where this paper is published. Which means that, on the average, papers published in higher-IF journals get more citations than papers published in lower-IF journals. So when you expect to get very many citations by publishing in a low-IF journal the likelihood of your disappointment (because of getting too few citations) is higher than it is when you publish the same paper in a high-IF journal. (The fact that some citation classics have been published not in the top-notch journals do not overthrow the general statistical regularity.)
I would like to strongly support Bachman's statement above. You are indeed correct when you say that "a paper published in a low-IF journal can be highly cited" (and vice versa), but the PROBABILITY of such a case is low (and vice versa). See my simple testimony based on a real set of 378 Polish publications in: Rank-normalized journal impact factor as a predictive tool. Archivum Immunologiae et Therapiae Experimentalis 2009, 57 (1), 39-43.
In particular, the established error levels in the prognosis of expected citation success versus failure, using the extreme IF quartiles as an evaluation tool, are 12.5% (the lowest-impact journals) and 5.7% (the highest-impact journals). So, as discussed by Bachman, an increased probability of citation success in high-IF (mostly renowned) journals is a statistical rule, and the relationship between the IF and individual paper citations is far clearer than you and many others think.