Both are commonly considered good: publications in journals with a high IF, and a high number of citations of those publications. However, neither of these two indicators is perfect, nor does either undoubtedly reflect the level of a given person's research. There are many reasons for this (I have previously had the opportunity to answer similar questions from other ResearchGate members in more detail). Here I would like to mention only two reasons which, in my opinion, are probably the most important. First, IF varies significantly between research areas and can be a real measure for comparing research performance only within a given research area; i.e., it is not a universal indicator. Second, as to citations: of course, a high number of citations (excluding self-citations) is a good indicator that a given paper has made an impact on the research community; however, there is no guarantee that the impact has been positive. A high number of citations may also come from a paper that contains wrong statements and has been cited in order to be criticized! Therefore, if you want to be sure, never rely only on numbers (IF, number of citations, h-index, etc.), but read the publication and try to evaluate its impact by the comments and the contribution it brings to the underlying science.
If your paper, regardless of whether it is in a high or low impact-factor journal, is cited many times, it indicates that it is being read by many people in the field. Hence, this is probably a good indication of the 'impact' a particular published paper has made in the field. The impact factor really refers to the journal itself. Even though, theoretically, a journal's impact factor indicates how much its papers get cited on average, this really varies with each individual paper. Getting a paper published in a higher impact journal tends, in my experience, to be harder than in a lower impact-factor journal; hence it is often used as a parameter to gauge the quality or perceived impact of the published study. In the end, I believe that a good paper describing well-executed experiments with novel findings will always be well perceived by others, so that should always be every researcher's aim. When I do my experiments, I never think about what 'impact factor' it is going to get me.
This is a recurrent question. There exist plenty of indices used for evaluating research and researchers, and they do not all measure exactly the same thing.
In your case, the "impact factor" provides an idea of the influence of a journal on the scientific community. It is based on the number of citations of articles in the journal.
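To make that definition concrete, here is a minimal sketch of the standard two-year impact-factor calculation; the function name and the figures are purely illustrative, not from any real journal:

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received this year
    to items the journal published in the previous two years,
    divided by the number of citable items in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 600 citations in 2014 to its 2012-2013 articles,
# which numbered 200 citable items -> IF = 3.0
print(impact_factor(600, 200))
```

Note that this is an average over the whole journal: a single heavily cited article can raise the IF for every author who publishes there.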
The number of citations of one of your publications indicates the influence of that specific publication on the scientific community.
But it is not as simple as that... If you publish in a journal with a high impact factor, this (generally) reflects a good peer-review process and suggests that the article is of good quality. Moreover, scientists will access your publication more easily and therefore cite it more readily, so the two are linked. If you want to be recruited, recruiters will look at the ranking of the journals where you published, but also at your h-index, which is an indication of the citation level of your articles.
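For readers unfamiliar with the h-index mentioned above, it is defined per author rather than per journal: the largest h such that the author has h papers each cited at least h times. A minimal sketch, with illustrative citation counts:

```python
def h_index(citation_counts):
    """h-index: the largest h such that h of the papers
    have each been cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times: four papers have
# at least 4 citations each, but not five with at least 5 -> h = 4
print(h_index([10, 8, 5, 4, 3]))
```

Unlike the journal-level IF, this metric depends only on the citations of the individual's own articles, which is why recruiters often consult both.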
I advise you to read the (numerous) editorial articles that have been published to deal with these indexes and their influence (bad or good) on science.
Both are important. It is a debatable question as well. As Thomson Reuters holds all the rights to issue the IF, it seems it is getting commercialized now. On the other hand, counting citations also relies on databases/indexing. For me, a paper published in a low-IF journal with many citations (of course, other than self-citations) contributes more to the scientific community.
There is a rather new and interesting initiative, the San Francisco Declaration on Research Assessment (DORA), to improve the ways in which the outputs of scientific research are evaluated.
The impact factor of a journal reflects citations per article published; by consensus, this is a measure of the journal's worth. Publishing in a journal whose impact factor is large does not reflect the worth of your work, whereas the citation of that work does provide that measure. Bottom line: it is the number of citations that defines the worth of your article. However, we assume that the citations are positive in nature; of course, even an article that is plain wrong may generate a huge number of citations.
Today, the impact of one's lifetime work is often gauged by the total of the impact factors accumulated across one's publications; RG uses this as a measure of one's lifetime work.
Here are some points about the IF: http://www.sciencemag.org/content/340/6134/787.full
I think that one should look not only at the IF of the journal, but also at how reproducible the work is, what the subject was, and what the contribution of the given person was. Sometimes one person carried out only minor experiments while another author obtained more than 50% of the results. Both are authors, and both are credited with the same IF!
As for citations, they also depend strongly on the kind of paper: a review may receive more citations than an experimental work. So there is no simple answer as to which better reflects the "scientific face" of a person: citations or IF.
This is an interesting debate in which I find few question the origins of 'metrics' such as the Impact Factor, Hirsch Index, SJR, etc.
How can a Journal’s metric be a de facto measure of quality or impact or significance for an individual manuscript?
Metrics are invented by publishers as a marketing exercise (publish with me - I give you the greatest impact - although my readership/subscription base may be minuscule), and promulgated by funding bodies to justify or demonstrate productivity. We who practice science have been sucked into this commercial rhetoric.
How is Darwin’s ‘Impact’ measured? How is Mendel’s ‘Impact’ measured?
Among the available metrics, I suggest that you consider “ranking in the field”.
For example, you might interpret my manuscript published in Tissue Engineering (JIF 4.065), ranked 25 of 159 (Q1) in Biotechnology, as having more impact than my manuscript published in Applied Mathematics Letters (JIF 1.501), ranked 33 of 247 (Q1) in Applied Mathematics; yet the latter was ranked in the top 1% of publications across all disciplines in 2004, and was the 4th most highly cited publication in Mathematics in 2004 (Essential Science Indicators).
You tell me: for these similar manuscripts, which report alternative analyses of the same data, which metric is a reliable measure of my research outcomes?
This white paper at Journal Metrics is worth reading.
What if only a very limited number of researchers work in your research area? Then the number of citations of your paper may be low yet you contributed significantly to the area.
I think the most important thing is to be passionate about what you do and to do a good damn job at it. Impact factors and citations are just by-products. People can measure you with whatever yardstick they come up with every other day, but they don't define you. You are your own definition.
The impact factor is important in that, to publish in a high-IF journal, you are usually required to have done tremendous work. However, this might not always hold, as we all know that paper acceptance can depend on politics. This is where the number of citations becomes a good metric to measure your input. Good work will always be recognized, even if it is published in a low-IF journal. Therefore, the two metrics should be used together to judge research input!
Undoubtedly, the impact on society or on the scientific community of the result of a piece of research is the most important element to be taken into account for any paper published in a scientific journal. However, if in addition the published work is often cited by other researchers in their own published work, then that is certainly an indication that the main content of the paper is welcome within the field of research involved.
Generally, both will be valued at the same level: the impact factor counts toward scientific quality, while scientific impact depends on citations.
NHMRC these days gives more weight to citations. Honestly, if it's a good paper in a food journal, people will definitely cite it.
It depends on who is doing the measuring. I side with Seglen and Garfield that the IF should never be used as a surrogate to evaluate its authors. It is a journal level metric and was never intended to be a measure of its component parts. Read my free article on exactly this problem: http://onlinelibrary.wiley.com/doi/10.1002/rcm.6133/abstract
Ultimately, it is the impact on society you wish to gauge. That is, does the research contribute to solving the problems of humanity, or to answering the questions humankind has? That is a long-term evaluation. There are attempts to bring this effect into the equation, e.g. through altmetrics, expert review, and narrative statements of societal impact. Citations are only half the story (but yes, better than just IFs).
The impact factor is indeed important, but only because we choose to make it important, not because it says a lot about an article. The exact same article can be submitted to and accepted by journals with vastly varying impact factors. I discuss these fundamental problems in my post on impact factors:
Both are important, but neither can be judged as the only measurement of good research. On the other hand, a good paper should be cited by researchers; that means it should have some impact. At the end of the day, it is hard to assign much value to a paper in a no-IF journal or with no citations.
The impact factor measures the quality of the journal; citations measure the quality of the manuscript. Some low-quality manuscripts (with numerous errors) have been published in high-impact journals, while some excellent manuscripts published in low-impact journals have received numerous citations.