Here are two that seem to have been missed so far: the Eigenfactor and the Article Influence Score. See http://www.eigenfactor.org for details.
Sorry to add a point off topic, but I cannot resist. A big problem is not that metrics are accurate or inaccurate but that their introduction affects behaviour. In particular, some now direct their research in order to optimise impact factors. There is an outline of some of the problems here: http://en.wikipedia.org/wiki/Impact_factor
Plea of a referee: to anyone thinking of using impact factors and other metrics in this way, please bear in mind that while metrics can help you avoid publishing in bad journals, you already know the good ones. They contain the articles that you actually value and use in your own research. Editors' and referees' time is increasingly being wasted by those playing the metrics game, which strains an already overloaded system. We should try to submit our papers to the most appropriate journal for that research, not work down a list in order of decreasing impact factor.
(If you found this post patronising or too off topic, please accept my apologies.)
"Publish or perish" is a nice tool (based on GoogleScholar though) giving you many metrics and their definitions and some discussions on their interpretation.
The answers to the topic question are complete, thanks to the contributors, but the puzzle is now more complicated: which of the many indexes is the best? Each of them has benefits and limitations, but Thomson Reuters (ISI) covers a large number of journals (17,283) and was one of the first (if not the first) to try to sort the plethora of journals by their impact, using a simple formula.
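For reference, the "simple formula" behind the journal impact factor is the standard two-year ratio: citations received in year Y to items the journal published in years Y-1 and Y-2, divided by the number of citable items published in those two years. A minimal sketch of that calculation (the function name and figures are illustrative only, not any official tool):

```python
def two_year_impact_factor(citations_to_recent_items, citable_items):
    """Two-year journal impact factor for year Y.

    citations_to_recent_items: citations received in year Y to items
        the journal published in years Y-1 and Y-2
    citable_items: number of citable items (articles, reviews)
        the journal published in years Y-1 and Y-2
    """
    if citable_items == 0:
        raise ValueError("no citable items in the two-year window")
    return citations_to_recent_items / citable_items

# Illustrative example: 3200 citations in year Y to 800 items from Y-1 and Y-2
print(two_year_impact_factor(3200, 800))  # 4.0
```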
There are many collateral questions which stem from the initial one:
- Why do national scientific forums mostly use Thomson Reuters (ISI) in evaluating scientific merit? Why not the many others?
- What are the side effects of using such an indicator? Mark pointed out the index-oriented publication of researchers and the editors' effort due to the flood of bad papers. I could add the increase in self-citations and "mutually friendly" citations altering the ISI index of a journal; Thomson Reuters had to eliminate some journals from their list after such practices were discovered. But other side effects have to be mentioned: emerging journals in many countries are struggling to survive, since most quality papers from these countries are published in journals that already have a high ISI factor!
- Is the merit of a paper measured by the impact of the journal if the paper itself has no citations?
After all these questions, what can be suggested as an alternative?
Thank you Mark and Mihai, very interesting. My opinion is the same, but governments and universities can't understand your point. They only know something called ISI and nothing else.
The number of citations a paper acquires carries more weight than any national or international impact factor - that is the real scientific impact of one's paper.
Unfortunately, the problem of metrics goes beyond what we can discuss here and what we can do. Besides being tied to the field one works in, metrics can in some cases introduce severe bias or errors. Someone not working in the medical/bio field cannot access journals with a stellar IF... so what? Moreover, someone citing a paper that contains severe flaws still contributes to raising the IF of the corresponding journal...
So beware of metrics on an absolute scale.
So I completely agree with Mark: research should get back to its origins and not be strained or guided by bibliometric indices. We are too many, there are too many journals, and the publishers have to earn. That is the problem behind all of this.
Read, read, read and read again, and think more about leaving a mark in your community and being known for what you do. Remember that usually just 5% of the papers build up the IF of a journal like Nature. So be sure you're in that 5%, or be somewhere else. Because with this rush for metrics, one day people will start checking whether your contribution to a journal is above its IF (i.e. you published something valuable for the community) or not... and at that point I would not like to be the one who published a lot of hot air in Nature!
There is another one becoming common: the SCImago Journal Rank (SJR) indicator.
However, if we all try to publish every paper in high-impact or high-SJR journals as a first choice, problems will naturally arise. One needs to look for the most appropriate journal, where similar work is published. Still, one must agree that nobody wants to take a chance with a low-rated journal, though it may be equally rewarding as far as the announcement or claim of a new result is concerned.
But not all humans are creative, are they? So how can all of them achieve high impact? Here is my suggestion: just leave all metrics aside and don't use them. Who agrees with me? If not, let us know why, regarding what we discussed before.
The h-index and the IF are different things. The Hirsch index refers to the citations of an individual's papers (over their career), the IF to the citations received by a journal in a fixed time frame.
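To make the distinction concrete: the h-index is the largest h such that the individual has h papers with at least h citations each, computed over all their papers rather than over a journal's recent output. A minimal sketch (the function name and sample counts are illustrative only):

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Illustrative example: papers cited 10, 8, 5, 4 and 3 times give an h-index of 4
print(h_index([10, 8, 5, 4, 3]))  # 4
```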
In my opinion, the vision of each institution informs its choice of journal rank indicator. In my institution, for example, Thomson Reuters (ISI) is the most popular, and the school pushes it seriously because of the long-term plan they have. However, other impact factors should also be seen as credible, or graded, so that people can make their own choice.
The Thomson Reuters JCR report is not clear. Who prepares the report, and how are the citations calculated? Does anyone know?
Secondly, Thomson Reuters cannot be the only evaluator of all the journals of the world; there can be other agencies as well.
It is an old agency, and only for that reason do people trust it.
Most Thomson Reuters indexed journals are not free, and all articles are paid for. Why can they not be made open access?
The point is only to make money.
The following factors should also be considered.
1. Publisher of a journal
The authors can get an idea from the publisher. Old and renowned publishers usually produce articles of good quality.
2. Reviewers and editorial board
If the journal has qualified and experienced editorial board members and follows a peer-review process, the journal will have good quality.
3. Indexing in various databases
Journals indexed in various reputable databases and held by universities tend to have good quality.
4. Articles published per year
5. Whether the journal is Free or Paid
It is not always the case that a paid journal does not publish good-quality research; there are many paid journals that publish very good research. Journals that are free are, of course, good from the author's point of view, but they earn millions of dollars by selling the articles. Free does not necessarily mean good. It depends on the review process and the reviewers' comments, or on whether the article has been reviewed at all.