As we know, there is no perfect measure to evaluate an article, a journal, or a researcher. We should be aware that many variants of impact factors are fake. Most of us receive weekly spam emails advertising journals with fake variants of impact factors.
http://www.harzing.com/pop is an informative site I recommend.
In addition to the various simple statistics (number of papers, number of citations, and others), Publish or Perish also calculates the following citation metrics for evaluation of researchers:
Hirsch's h-index
Proposed by J.E. Hirsch in his paper An index to quantify an individual's scientific research output, arXiv:physics/0508025 v5 29 Sep 2005. It aims to provide a robust single-number metric of an academic's impact, combining quality with quantity.
Egghe's g-index
Proposed by Leo Egghe in his paper Theory and practice of the g-index, Scientometrics, Vol. 69, No 1 (2006), pp. 131-152. It aims to improve on the h-index by giving more weight to highly-cited articles.
Zhang's e-index
The e-index was proposed by Chun-Ting Zhang in his paper The e-index, complementing the h-index for excess citations, PLoS ONE, Vol 5, Issue 5 (May 2009), e5429. The e-index is the square root of the surplus of citations in the h-set beyond h², i.e., beyond the theoretical minimum of h² citations required to obtain an h-index of h. The aim of the e-index is to differentiate between scientists with similar h-indices but different citation patterns (see the sketch after this list).
Contemporary h-index
Proposed by Antonis Sidiropoulos, Dimitrios Katsaros, and Yannis Manolopoulos in their paper Generalized h-index for disclosing latent facts in citation networks, arXiv:cs.DL/0607066 v1 13 Jul 2006. It aims to improve on the h-index by giving more weight to recent articles, thus rewarding academics who maintain a steady level of activity.
Age-weighted citation rate (AWCR) and AW-index
The AWCR measures the average number of citations to an entire body of work, adjusted for the age of each individual paper. It was inspired by Bihui Jin's note The AR-index: complementing the h-index, ISSI Newsletter, 2007, 3(1), p. 6. The Publish or Perish implementation differs from Jin's definition in that we sum over all papers instead of only the h-core papers.
Individual h-index (original)
The Individual h-index was proposed by Pablo D. Batista, Monica G. Campiteli, Osame Kinouchi, and Alexandre S. Martinez in their paper Is it possible to compare researchers with different scientific interests?, Scientometrics, Vol 68, No. 1 (2006), pp. 179-189. It divides the standard h-index by the average number of authors in the articles that contribute to the h-index, in order to reduce the effects of co-authorship.
Multi-authored h-index
A further h-like index is due to Michael Schreiber and first described in his paper To share the fame in a fair way, hm modifies h for multi-authored manuscripts, New Journal of Physics, Vol 10 (2008), 040201-1-8. Schreiber's method uses fractional paper counts instead of reduced citation counts to account for shared authorship of papers, and then determines the multi-authored hm index based on the resulting effective rank of the papers using undiluted citation counts.
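To make the definitions above concrete, here is a minimal Python sketch of how several of these metrics follow from a list of per-paper citation counts. This is my own illustration, not the Publish or Perish code; the sample citation counts, the publication years, and the age convention used for the AWCR are assumptions made up for the example.

```python
import math

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the g most-cited papers together have at least g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def e_index(citations):
    """Square root of the citations in the h-core in excess of the theoretical minimum h^2."""
    cites = sorted(citations, reverse=True)
    h = h_index(citations)
    surplus = sum(cites[:h]) - h * h
    return math.sqrt(surplus)

def awcr(citations, years, current_year=2016):
    """Age-weighted citation rate: each paper's citations divided by its age, summed over all papers.
    The age convention (current year minus publication year plus one) is an assumption for illustration."""
    return sum(c / max(current_year - y + 1, 1) for c, y in zip(citations, years))

# Hypothetical citation record for one researcher (made-up numbers)
cites = [25, 18, 12, 9, 4, 3, 1, 0]
years = [2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015]
print(h_index(cites), g_index(cites), round(e_index(cites), 2), round(awcr(cites, years), 2))
```

With this toy record the h-index is 4, while the e-index separates out the 48 "excess" citations sitting in the h-core, which is exactly the kind of distinction the e-index was designed to capture.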
More and more open access journals are starting, so download counts can give another perspective on usefulness besides the IF. Perhaps a combination of these measuring tools would give a more accurate result.
"Are there any other alternatives for the Journal Impact Factor?" - an old and never ending debate... Certainly, there are many other metrics and/or indicators; however, no one of them is perfect and universal... In my opinion, the more indicators we have the better becomes the comparison (...and the conclusion) on the one´s life´s scientific achievements. The Journal Impact Factor is not enough, but it within the current paradigm is certainly an useful indication (in average of course) about the quality of a given paper, published in that journal... Undoubtedly, it is better to publish (if accepted) your research in top IF journal instead of doing this in an open access or domestic journals. There will be of course always some exceptions, but within the margins confirming this general trend...
Open Access journals will gain a high IF if the journal is peer-reviewed, because good papers will be presented, and the longer the journal has existed, the more citations it will attract, thanks to the easy downloading and use of papers published in OA journals.
The citation index is the most appropriate index for judging the importance of an article. If a work is good, the article concerned will be cited by others in the process of extending the idea it presents. The level of the work does not really depend on the journal in which it is published.
Impact Factor of a journal does not necessarily reflect the citation index of any article published in the journal. Not every original work gets published in journals with high impact factors. On the other hand, not every article published in a journal with a high impact factor must ultimately get highly cited.
'Nature' is a journal with a very high impact factor. Some of the articles published in this journal have never been cited!
In my eyes, not just the Impact Factor but even those indices that are supposed to be equivalent to it are being given unnecessary importance.
Occupants of a beautiful house may actually be ugly, while a very beautiful person may be found in an ugly house!
In the Assamese language, there is a saying. Translated into English, it would be: 'Tigers are found in small jungles only. In big jungles, you may find all other sorts of animals, but not a tiger! '
Even citation has its problems. Just as downloads can be inflated, as rightly hinted by Vladimir, so too can citations! The citation index is not free from self-citation, citation by relatives, ...
I do not think that there is an alternative method for counting the citedness of a journal. In a digital environment, the equivalents of cites are tweets, likes, shares, backlinks, etc., but not downloads or accesses. The former activities can be done even without reading the content and can be outsourced (imagine Amazon's Mechanical Turk or Fiverr flooded with requests for sharing, tweeting, reviewing or posting about scientific articles), blurring the boundary between crediting a work for its relevance/quality and content marketing.
@Lijo, there are many alternatives; here is one: "The Global Impact Factor (GIF) provides quantitative and qualitative tool for ranking, evaluating and categorizing the journals for academic evaluation and excellence. This factor is used to evaluate originality, scientific quality, technical editing quality, editorial quality and regularity of a journal."
In scholarly and scientific publishing, altmetrics are non-traditional metrics proposed as an alternative to more traditional citation impact metrics, such as impact factor and h-index. The term altmetrics was proposed in 2010, as a generalization of article level metrics, and has its roots in the #altmetrics hashtag.
I will repeat my comment from elsewhere on GIF - Please do not consider GIF of any value!
GIF appears to be a bogus impact factor. The website of the 'factor' talks about the "arbitrary group of individuals" apparently referencing TR, but then they have arbitrarily selected a group of individuals, primarily from two countries and some with questionable credentials, and claim they are going to do an intensive review of many aspects of journal quality. While TR has many faults, an independent group without any clear guidance from any reputable scientific group is not an improvement in any way.
What would happen if WE ALL consult Beall's list of predatory journals and refrain from the possible, probable or potential predators? - ResearchGate. Available from: https://www.researchgate.net/post/What_would_happen_if_WE_ALL_consult_Bealls_list_of_predatory_journals_and_refrain_from_the_possible_probable_or_potential_predators/41 [accessed May 14, 2016].
Here is my second comment on GIF already posted elsewhere:
Looking at the GIF (Global Impact Factor) further, it apparently is not an Impact Factor but instead an evaluation of websites in hundreds of sub-disciplines by a fairly small group of individuals. This of course means that they really cannot do the evaluations they claim they are doing. What they would like to do would be wonderful, but it is an almost impossible job. Half of the (TQCF) score is based on 'Originality of Research', which is important but not the basis of many critical types of studies and/or journals. Several issues should be noted (and there are many more). First, citations of the articles do not affect the scores. Second, the number of articles published by the journal does affect the score, so the more articles published, the higher the score (definitely not an Impact Factor based on the articles); this could lead to journals publishing more low-quality articles just to improve their GIF score. Third, 20 points (TQCF %) is on Peer Review, but how does this evaluator obtain the peer reviews, and are they accurate (most predatory publishers claim a peer-review process but do not use it)? It would be nice if some really good group could develop a much better method than GIF and then have it properly implemented, thus creating a second evaluation tool for journals.
Journal Impact Factor, Global Impact Factor, Universal Impact Factor and Copernicus Index are all listed as false impact factors by Beall, who is very well acquainted with the predators and their many means of trying to scam people and damage science over the decades.
The Altmetrics site and the info on comparable sites from Wiki look promising as an adjunct (not just an alternative) to the IF. One big problem with all metrics is that many of us who do research in small fields that are not 'hot areas' are doomed to get both low citations (there is not much publishing in these areas) and low metrics of other types. While good, and sometimes great, articles in these less glamorous areas of science do not get much recognition, they may also be of much higher quality and, in the long run, of much greater significance. As mentioned in the Wiki article, retracted papers (i.e., those with wrong information or fraudulent) actually get very high altmetric scores because a lot of people 'talk' about them.
Beall lists criteria for identifying a misleading metric, including:
The website for the metric is nontransparent and provides little information about itself, such as location, management team and its experience, other company information, and the like.
The company charges journals for inclusion in the list.
The values (scores) for most or all of the journals on the list increase each year.
The company uses Google Scholar as its database for calculating metrics (Google Scholar does not screen for quality and indexes predatory journals).
The metric uses the term "impact factor" in its name.
The methodology for calculating the value is contrived, unscientific, or unoriginal.
The company exists solely for the purpose of earning money from questionable journals that use the gold open-access model. The company charges the journals and assigns them a value, and then the journals use the number to help increase article submissions and therefore revenue. Alternatively, the company exists as a front for an existing publisher and assigns values to that publisher's journals.
List of misleading metrics
This is a list of questionable companies that purport to provide valid scholarly metrics at the researcher, article, or journal level.
AE Global Index
Advanced Science Index
African Quality Centre for Journals
.............
Please see the complete list, updated in April 2016 by Jeffrey Beall at:
Yes. The alternatives to the Thomson Reuters Impact Factor (IF) are the SCImago Journal Rank indicator (SJR) and the Eigenfactor Score (ES). The Thomson Reuters IF measures citations within its JCR database, whereas SCImago measures based on a different citation universe, provided by the Scopus database. ES uses an algorithm similar to Google's PageRank: an iterative method is used, and journals are considered influential if they are cited more often by other prestigious journals. For a detailed comparison, please see the attached link.
P.S.: Maybe in the future the G-score and RG-score will be good alternatives, too!
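To illustrate the PageRank-style iteration mentioned above, here is a minimal toy sketch in Python. It is only a sketch of the general idea, not the actual Eigenfactor computation: the three-journal citation matrix and the damping value are invented for illustration, and the real Eigenfactor algorithm additionally excludes self-citations and normalises by article counts.

```python
# Toy PageRank-style journal influence: a journal is influential if it is
# cited often by other influential journals. Entry cites[i][j] is the
# (made-up) number of citations from journal j to journal i.
cites = [
    [0, 4, 1],
    [2, 0, 5],
    [1, 3, 0],
]
n = len(cites)
damping = 0.85  # illustrative damping value, as in PageRank

# Each citing journal distributes its outgoing citations as normalised weights.
col_sums = [sum(cites[i][j] for i in range(n)) for j in range(n)]

influence = [1.0 / n] * n
for _ in range(100):  # iterate until the scores stabilise
    new = []
    for i in range(n):
        inflow = sum(cites[i][j] / col_sums[j] * influence[j] for j in range(n))
        new.append((1 - damping) / n + damping * inflow)
    influence = new

print([round(x, 3) for x in influence])
```

The key design point is the feedback loop: a citation from a highly ranked journal is worth more than one from a lowly ranked journal, which is exactly how ES differs from a plain citation count such as the IF.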
Mahmoud - Each of these ranking methods has good features as well as features that may not give scores reflecting the true quality of a journal or a paper. High citations still seem to predominate, but that means only 'popular' or 'highly supported' areas of research get good rankings. If factors such as the number of citations divided by the total number of papers in the field were used, it would lower the ratings of many journals and raise those in very small fields of study (where authors know and cite almost all relevant articles).
What is the G-score that you mention in the PS? RG-score is not a good ranking as it can be and is manipulated by many (look at the scores of those who post a lot of very small comments that are not very helpful to the discussion).
Robert- "What is the G-score that you mention in the PS?"I was referring to Google Scholar Metrics. While most academic databases and search engines allow users to select one factor (e.g. relevance, citation counts, or publication date) to rank results, Google Scholar ranks results with a combined ranking algorithm in a "way researchers do, weighing the full text of each article, the author, the publication in which the article appears, and how often the piece has been cited in other scholarly literature".
Thank you Mahmoud! I wanted to make sure I knew what you were referring to, and was hopeful it was not the GIF. If we all keep using abbreviations to mean the same thing, then our discussion will be enhanced. So 'G-score' it will be, meaning 'Google Scholar Metrics', but we do need to have a group evaluate the metric.
Dear @Lijo, you were absent from ResearchGate for quite a long time. In the meantime, we have had many related threads about the impact factor and other research metrics. I will bring some here.
I am also bringing some good resources about responsible citation analysis, as well as Journal Citation Reports...!
Why worry about the impact factor at all? Knowledge dissemination should happen in a seamless and unconstrained way (of course, plagiarism controls are not ruled out here). Why should people pay to buy our articles? Open access publications are the best.
Downloads and citations can be manipulated, but peer reviews are a more secure proof of the interest and originality of an article. The possible fruits it could bear are very difficult to measure.