The impact factor (IF) of a journal depends on several parameters, but essentially on the number of citations gathered by the articles in that journal. Obviously, the number of citations depends on how many researchers are working in the field. For instance, few researchers work on pure mathematics compared to fields such as physics, chemistry or biology, and consequently the impact factors of journals specialised in pure mathematics are lower. Ironically, the IF measures the size of a research field more accurately than its actual "impact".
As a personal opinion, I believe our increasing obsession with IFs is starting to damage science. Now it seems we only want to publish in high-impact journals, which tend to be generalist (e.g. Science, Nature...). Consequently, these journals have become heavily biased: they blatantly favour certain topics which they know will gather lots of citations (because there are lots of researchers working on them, not necessarily because of their scientific relevance). By doing this, we give journals and publishers the power to steer research in the direction they want. For instance, working on a topic with few people in it, and which therefore gathers few citations, makes it very difficult to publish results in high-impact journals. As a consequence, it becomes harder to build a competitive CV and secure funding, which in turn makes it even harder to obtain exciting results and publish in high-impact journals; it becomes a snowball effect. Eventually, researchers are forced out of that topic and into more popular ones such as materials science or biomedicine, where you can sometimes see boring bandwagon research published at IFs over 20.
We need a change in mentality, and we should stop converting science into a business ruled by IFs. If we don't, the whole of society will probably suffer the consequences in the long term.
Besides the excellent answer by Martí Garçon (thank you for it), there are other factors and pieces of information we must speak about openly. Let us look at the problem globally.
Science is a very dynamic and complex creature, whose entire fields can transform from one year to the next. This is difficult to predict.
It is very complicated, or even impossible, to judge which research will prove important in the long run. Hence, people developed measures (like the IF) that replace the hard work of decision makers at every level, including science: group leaders, institute or department heads, deans, ministry of education employees, grant agency staff.
As a person working on a novel, next-generation technology, I know how difficult it is to find money to support a research group working on it. When you cannot immediately provide publications in high-impact journals, you are in great trouble funding such research. It is a kind of deadlock: without results, you cannot get funding, and without funding, you cannot produce results.
The usual strategy is to have already solved, or at least pre-solved, the problem you are asking money for. That way, every level of science management can be sure there will be HIGH-IMPACT PUBLICATIONS. In the long term this strategy is nonsensical, as no one wants to do risky research.
It is a very harmful strategy: we, humanity, are cutting ourselves off from great possibilities because of temporary insecurities. Every 'sane' researcher does doable research, and no one takes the risk of failure.
This leads toward mild, conservative research that does not push development much. It is why groundbreaking discoveries often come from young researchers who take risks and have not yet been distorted by the system.
Contemporary research is becoming more a hunt for publications and less a passion for discovery. We are losing an important segment of the ecosystem of researchers, especially those capable of going beyond imagination. Unluckily, society is not informed about this huge distortion, this huge self-censorship by researchers themselves, who are afraid to enter the risky waters of real research.
Papers published in journals, with or without an IF, upgrade knowledge and teaching. However, they may or may not constitute innovation or R&D. R&D combines systematic experimentation with both basic and applied research, targeting solutions to existing or newly posed problems, or the production of new goods and knowledge. R&D may result in intellectual property rights such as patents, which should be the real measure of research quality. Some of the key points raised by Martí Garçon are worth noting:
1. "Ironically, the IF measures more accurately the size of the research field than the actual 'impact'."
2. "Science is being converted into a business ruled by IFs."
3. "These journals have become heavily biased and they blatantly favour certain topics which they know will gather lots of citations (because there are lots of researchers working on it, not necessarily because of the scientific relevance)."
Amen! It is a vicious circle. Even journals are after money and, in the last analysis, most of that money comes from the same sources as the original research appropriations. It is a not-so-subtle way in which politics and business steer R&D, and even basic science, into channels they can manage (plan) in the ways they are used to. But it is not that there are nasty, conspiring individuals behind it. It is a blind behaviour that got built into the system.