The IF was devised by Eugene Garfield, and (therefore?) has led to lazy behaviour in organizations such as committees reviewing serious researchers' achievements.
But @Neha wants to know about the impact of journals, not individuals. And I wonder why? A journal is a journal, and we are we. Don't go for the IF if it isn't necessary for your career. Instead, go for honest Open Access with a sound review process. People accuse OA journals of misbehaviour, but you can say the same of 'respected' journals that inflate their IF with review articles, which are often cited more frequently than plain research studies.
Another problem with the IF is that it depends heavily on the field of study you want to publish in.
There is only one impact factor (the one available from Thomson Reuters' JCR); the other metrics are not the impact factor and should not be confused with it. They use different calculations and count different journals.
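For concreteness, the JCR two-year impact factor is just citations received in year Y to items published in years Y-1 and Y-2, divided by the number of citable items from those two years. A minimal sketch of that arithmetic (the function and variable names here are mine, not JCR's, and real JCR figures also involve editorial decisions about what counts as a "citable item"):

```python
def two_year_impact_factor(citations_received, citable_items, year):
    """Classic two-year impact factor for a journal in a given year.

    citations_received: dict mapping publication year -> citations that
        items from that year received during `year`
    citable_items: dict mapping publication year -> number of citable
        items the journal published that year
    """
    cites = citations_received[year - 1] + citations_received[year - 2]
    items = citable_items[year - 1] + citable_items[year - 2]
    return cites / items

# Hypothetical journal: 400 citations in 2022 to 200 items from 2020-2021.
print(two_year_impact_factor({2021: 150, 2020: 250},
                             {2021: 80, 2020: 120},
                             2022))  # -> 2.0
```

This also makes the review-article gaming mentioned above easy to see: a handful of heavily cited reviews inflates the numerator without changing the denominator much.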
The Thomson Reuters IF (the official source is Journal Citation Reports) is still the most widely used IF rating. We all realize it has flaws, but in reality it is still the one most used. How important it is to you depends on many things. My colleagues in the biological sciences in Europe and Asia often have to publish in journals with a specific ranking based on the IF, or on the actual IF itself, to "get credit for" a publication. In my own university some departments use the IF to determine the promotability of faculty, whereas others find it completely useless.
You ask which is best, but that cannot be determined without knowing the circumstances. Most impact metrics, whatever they are called, look at citations. In some fields that is critical, but in others citation is irrelevant, and use or application of the data and results (often in industry) matters far more than citation by other researchers.
It is like currencies: which do you prefer? Which does your employer prefer? If you want a job in the West, you should go with Thomson Reuters. It is also the most extensive and most robust (having said that, it is also far from perfect).
A real study of impact would need significantly more dedicated work than a few scattered papers observing (more or less!) correlating aspects across some disciplines. It is like the IQ factor: it puts those with "Slick Willy" abilities out in front. A type like Bohr, who was always slow, would never score high. Yet there is no reward for (1) perseverance, which is an important aspect of research, nor for (2) long-term and long-range correlations (the "sink-in" type of understanding that Bohr had).
Also, these "factors" fall under a utilitarian, mercantile view of scholarship, which I find lacking in merit and morally questionable.