Is the impact factor and its calculation method fair and equitable?
What is the better way to publish: in online journals or in print? And does the choice increase or decrease a journal's impact factor?
Note that there is a lot of criticism of the way the impact factor is calculated.
Your opinion is very important.
Please see:
http://en.wikipedia.org/wiki/Impact_factor
A possible answer is in the thread below, dear Raheem!
https://www.researchgate.net/post/On_which_basis_you_choose_a_journal_for_publishing_your_scientific_research
Dear Marwan
I have seen it.
My question concerns the following cases:
The first case: there are many journals that are online and free for researchers to use, but that are not free of publication fees for the authors.
The second case: there are many journals that are free of publication fees for the authors, but that are not free to read; one must pay for the articles, perhaps 40 dollars or more.
I think the first case attracts good citation counts from researchers and increases the impact factor of those journals, regardless of the quality of the journal, the paper, the publishing house, or the field of the paper. Is the impact factor of a mathematics journal really comparable to that of a chemistry journal?
This matters because the ranking of universities depends on scientific publications and the impact factors of journals.
Are citations counted in this way fair and equitable?
Regards
It seems to me that a correct answer to this question can be "moderately yes". How is that? We all agree that in any branch of science there exists a multitude of journals, and that some are highly respected and some not so much. Accordingly, some system of ranking the journals should exist, and the impact factor attempts to provide it. But it does so in a rather mysterious way, apparently related to the number of copies printed. Adding to the "mystery" is the fact that conference proceedings do not have an impact factor! They only have a counter of the number of times a paper is downloaded.
Overrated use of impact factors complicates and abuses researchers and the doing of research. It creates a dog-eat-dog world of research and ruthless competitiveness among researchers, which to me is a misguided, unhealthy and abusive phenomenon.
If you follow standard journals, whether online or offline, the impact factor is a fair one.
The impact factor is calculated on the basis of the citations received by the articles published in the journal.
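For reference, the standard two-year calculation (as described on the Wikipedia page linked earlier in this thread) can be written as:

```latex
\mathrm{IF}_{y} \;=\;
\frac{\text{citations received in year } y \text{ by items published in years } y-1 \text{ and } y-2}
     {\text{number of citable items published in years } y-1 \text{ and } y-2}
```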
One should publish papers in journals indexed in SCOPUS and ISI Thomson Reuters.
Beware of journals that are not authentic!
Please check the journals of the following groups:
1. EMERALD Group
2. Taylor and Francis
3. Wiley online
4. Science Direct
5. Springer
6. Inderscience
7. Sage
8. etc.
Dear Dr. Raheam
Personally, I feel that human beings have a tendency to set certain standards, benchmarks, rankings, targets, etc. in all fields and walks of life, as guidelines for like-minded people to follow. Once a standard or mathematical calculation is accepted, it is widely followed until a new or better standard is proposed and accepted. The impact factor of a journal and the citation index of a researcher are some of the scientometric measures currently accepted as benchmarks for judging the merits of journals and scientists. These metrics have been questioned and debated, but they still continue to be the benchmarks. Instead of discussing their merits and demerits, does anyone have a better method, applicable across disciplines, to assess the quality of a journal or the research contributions of a scientist?
Again, personally I am against publishing in paid journals. Everyone who pays the price naturally wants his/her research published and cited frequently. I published in some local journals those papers which I wanted to reach a targeted local readership, in my case the farmers and the perfumery industry. These journals do not have any impact factor, but my papers had the desired effect, and gave me satisfaction. The scientist has a choice, and I think we should be happy about that.
As for paying for reprints of papers published in subscription-based journals, I would like to add that we are living at a time when there are no free lunches. Is it not true that scientists are getting paid for the work they carry out?
Best wishes.
In my opinion, the impact factor was invented for commercial (and not scientific) purposes. The IF has been utilized very well by those who work hard on preventing scholars from promotion (to keep their salaries as low as possible). Had the IF "problem" been settled by worldwide agreement, my view would be different.
There are some advanced mathematical papers which are accessible to very few people, for example Wiles' proof of Fermat's Last Theorem! I imagine that the impact factor of this work is very small, although it is the most important work! On the other hand, "fuzzy sets" are addressed to a very large group of scientists, and thus the impact factor is high!
From these two examples alone you cannot make a sensible decision!
There are lots of papers which have few citations, or only self-citations. I fully agree with Mario Vincenzo Russo that the only important thing is the content and the scientific value of the work done by the researcher.
There are fundamental flaws in citation metrics that go right to the heart of such data collection, and they need addressing before one even discusses more superficial metrics such as the h-index or the impact factor.
"How Scholars Hack the World of Academic Publishing Now": you can form a cartel, or you can ignore it altogether.
http://www.theatlantic.com/technology/archive/2013/08/how-scholars-hack-the-world-of-academic-publishing-now/279119/
http://allthatmatters.heber.org/2014/07/29/citations-and-the-problem-of-capturing-impact/
It is used as a matter of general rule, not because it is the most efficient measure, but because it is a simple mathematical formula.
Dear all
Thank you so much for your information and opinions
Best Regards
Dear all
Thank you so much for your information; your opinions are appreciated.
Best Regards
I think that the issue of abstracting, indexing and ranking is becoming ever more confusing.
So if you want to judge someone's work, you have to ask specialists about his papers; metrics that exist for commercial purposes must not be used for ranking researchers.
We have to propose other ranking methods and metrics based on open access, far from those commercial organisations.
I think that the most important thing is the quality of your work; no matter where it is published, this quality will pay off one day. The impact of your work is not the number of citations (any citations): citations can be manipulated, or mentioned in a research paper without having any real influence on the citing work. The citation that should really count is one where the cited paper's contribution is built upon to produce a better contribution and push science forward, not a blind citation. So we need smarter measures that find the papers with real impact on the citing paper, not just blind counting. Have you heard of the super author? http://www.harzing.com/esi_highcite.htm
Most of these IF metrics have been twisted to serve money rather than science. For example, some of the American Accounting Association journals ask for a non-refundable $400.00 before acceptance. With a rejection rate of, say, 50% or 60%, multiply the number of submitted papers by $400 (for instance, 1,000 submissions would already bring in $400,000 in non-refundable fees).
Is this business or science?
Dear @Raheam, I do find that the impact factor and its calculation algorithm are neither fair nor equitable! Commercialization and abuse in the domain of co-authorship and citations, and many other problems, convince me that the IF is very problematic! In my next response, I will supply you with some good readings on this issue.
I will take the liberty of attaching some threads on this issue about the IF, and one of mine about fake journals which had an IF!
https://www.researchgate.net/post/How_is_impact_factor_calculated
https://www.researchgate.net/post/Impact_Factor_What_this_really_means
https://www.researchgate.net/post/Is_it_possible_to_publish_a_noteworthy_paper_in_a_Journal_Why_should_one_do_it
Dear @Raheam, as I promised, I am attaching some good articles on different metrics, comparisons of different metrics, and comparing metrics across different disciplines! For example, the article "The Agony and the Ecstasy — The History and the Meaning of the Journal Impact Factor" is fine reading. On scientometrics and journalology and many more issues!
"“Impact Factor is not a perfect tool to measure the quality of articles but there is nothing betterand it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. Most of these journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty.”
Yes, a better evaluation system would involve actually reading each article for quality but then this entire congress is dedicated to the difficulties of reconciling peer review judgments. When it comes time to evaluating faculty, most people do not have or care to take the time to read the articles any more! Even if they did, their judgment surely would be tempered by observing the comments of those who have cited the work. We call this citation context analysis. Fortunately,new full-text capabilities in the web make this more practical to perform."
http://wokinfo.com/essays/impact-factor/
http://garfield.library.upenn.edu/papers/jifchicago2005.pdf
http://www.harzing.com/data_metrics_comparison.htm
http://www.citefactor.org/journal-impact-factor-list-2014_I.html
http://www.elsevier.com/editors/journal-and-article-metrics
I follow the same ideology as @Dejenie A. Lakew: we must be careful and well aware of this cut-throat competition, which is just about getting published. Thanks.
Although impact factors (IF) directly benefit a number of publications and publishers, and are hardly indicators of the scientific (or, for that matter, societal) merit of any one individual paper, they also seem to be a characteristic of the way the scientific field has stratified itself, both through its internal disputes and through science funding policies, particularly in countries with well-established academia.
From this perspective, they are an element of the field of forces at work in the technical-scientific field and in the lucrative scientific publishing market, but also of the configuration of forces in the arena where policy-making and politics related to the technical-scientific field are actually made.
I think that it is very difficult to measure impact; e.g., what was the impact factor of the journal that published the relativity theory of the genius Einstein?
Dear All
Thank you for your opinions and all the explanations.
Best regards
Hello,
I think the term impact (and an impact factor, IF) should reflect more than citations. However, I recognize the difficulty of establishing a standard, especially if the ambition is a criterion with global coverage.
Citations reflect, in my view, only part of the impact. What other variables could be taken into account? The degree of innovation present in the published works; their novel character; social influence (citations, works by others whose starting point was a particular article in the journal, for example); the duration of the influence caused by an article (if an article published 20 years ago continues to be cited, for example, its relevance, and its venue, should take this into consideration); whether the published articles contributed to a paradigm change; etc.
What I wanted to show with this simple exercise is that, in fact, we can easily find good, fair and relevant criteria for judging publications, beyond what is considered today.
I think that in a global society, with the information technologies available, online publication can help with fast and agile indexing of articles and journals.
Anyway, to answer your question, I do not consider the IF in its current form a fair factor, because it works with variables too weak to actually measure the impact of an article.
Regards,
S.
Dear Brenda Jacono
You are right, because the method of calculation is a very simple formula. Please see:
http://en.wikipedia.org/wiki/Impact_factor
Regards
Rob GEE has written a nice story where the Impact Factor happened to play the major role. It is the story from which the previous cartoon originally comes! "... But it got me to thinking about the correlations between my conversation about academic publishing, the dropping of the impact factor to oohs and ahhs, and the effort to correct the problem of reporting agencies who have an interest in what they're reporting. Not unlike Wall Street, it's the publishing companies and the journals themselves who compute and announce their impact factors. And because the impact factor has, well, an impact in so many disciplines they are behooved by manipulation of the calculations and, more insidiously, the editorial content of the journal itself. Do we really think that there are not cases of over-citing tangentially relevant material in an effort to drive up the impact factor?..."
http://stillwaterhistorians.com/2012/11/03/factoring-impact-upon-an-evening-with-the-board-of-visitors/
The impact factor is calculated on the basis of the citations received by the articles published in a journal: the more citations, the higher the impact factor.
There are many online journals that display an impact factor before even publishing their first issue. Such practices call into question the fair and equitable calculation of the impact factor.
One should publish papers in standard journals indexed in SCOPUS and ISI Thomson Reuters. Such journals have their impact factor calculated scientifically.
Dear Subhash C. Kundu,
Thank you so much for your opinion and the information.
But do SCOPUS and ISI Thomson Reuters avoid the following criticisms?
Criticisms
Numerous criticisms have been made of the use of an impact factor. For one thing, the impact factor might not be consistently reproduced in an independent audit.[5] There is a more general debate on the validity of the impact factor as a measure of journal importance and the effect of policies that editors may adopt to boost their impact factor (perhaps to the detriment of readers and writers). In short, there is some controversy about the appropriate use of impact factors.[6]
Validity as a measure of importance
It's been stated that impact factors and citation analysis in general are affected by field-dependent factors[7] which may invalidate comparisons not only across disciplines but even within different fields of research of one discipline.[8] The percentage of total citations occurring in the first two years after publication also varies highly among disciplines from 1–3% in the mathematical and physical sciences to 5–8% in the biological sciences.[9] Thus impact factors cannot be used to compare journals across disciplines.
The impact factor is based on the arithmetic mean number of citations per paper, yet citation counts follow a Bradford distribution (i.e., a power law distribution) and therefore the arithmetic mean is a statistically inappropriate measure.[10] For example, about 90% of Nature's 2004 impact factor was based on only a quarter of its publications, and thus the importance of any one publication will be different from, and in most cases less than, the overall number.[11] Furthermore, the strength of the relationship between impact factors of journals and the citation rates of the papers therein has been steadily decreasing since articles began to be available digitally.[12]
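To see why the arithmetic mean misleads here, consider a toy sketch with invented citation counts (not real journal data):

```python
import statistics

# Hypothetical citation counts for 20 papers in one journal over two years:
# a few highly cited papers dominate, most receive almost nothing.
citations = [380, 120, 45, 12, 8, 5, 4, 3, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

mean = statistics.mean(citations)      # what an impact-factor-style average reports
median = statistics.median(citations)  # what a "typical" paper actually receives

print(f"mean citations per paper:   {mean:.1f}")    # 29.1
print(f"median citations per paper: {median:.1f}")  # 1.0

# Share of all citations contributed by the top quarter of papers,
# echoing the Nature example above.
top = sorted(citations, reverse=True)[:len(citations) // 4]
print(f"top 25% of papers supply {sum(top) / sum(citations):.0%} of citations")
```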
This problem is exacerbated when the use of impact factors is extended to evaluate not only the journals, but the papers therein. The Higher Education Funding Council for England was urged by the House of Commons Science and Technology Select Committee to remind Research Assessment Exercise panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published.[13] The effect of outliers can be seen in the case of the article "A short history of SHELX", which included this sentence: "This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination". This article received more than 6,600 citations. As a consequence, the impact factor of the journal Acta Crystallographica Section A rose from 2.051 in 2008 to 49.926 in 2009, more than Nature (at 31.434) and Science (at 28.103).[14] The second-most cited article in Acta Crystallographica Section A in 2008 only had 28 citations.[15]
Finally, journal rankings constructed based solely on impact factors only moderately correlate with those compiled from the results of expert surveys.[16]
It is important to note that impact factor is a journal metric and should not be used to assess individual researchers or institutions.[17][18]
Reliance on integrity of authors
A.E. Cawkell, sometime Director of Research at the Institute for Scientific Information remarked that the Science Citation Index (SCI), on which the impact factor is based, ″would work perfectly if every author meticulously cited only the earlier work related to his theme; if it covered every scientific journal published anywhere in the world; and if it were free from economic constraints.″[19]
Editorial policies that affect the impact factor
A journal can adopt editorial policies to increase its impact factor.[20][21] For example, journals may publish a larger percentage of review articles which generally are cited more than research reports.[22] Thus review articles can raise the impact factor of the journal and review journals will therefore often have the highest impact factors in their respective fields.[23] Some journal editors set their submissions policy to "by invitation only" to invite exclusively senior scientists to publish "citable" papers to increase the journal impact factor.[23]
Journals may also attempt to limit the number of "citable items"—i.e., the denominator of the impact factor equation—either by declining to publish articles (such as case reports in medical journals) that are unlikely to be cited or by altering articles (by not allowing an abstract or bibliography) in hopes that Thomson Scientific will not deem it a "citable item". As a result of negotiations over whether items are "citable", impact factor variations of more than 300% have been observed.[24] Interestingly, items considered to be uncitable—and thus are not incorporated in impact factor calculations—can, if cited, still enter into the numerator part of the equation despite the ease with which such citations could be excluded. This effect is hard to evaluate, for the distinction between editorial comment and short original articles is not always obvious. For example, letters to the editor may refer to either class.
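A toy illustration of this denominator effect, with invented numbers: if a journal's 200 citations are divided over 100 citable items, its impact factor is 2.0; reclassify 50 of those items as non-citable (while their citations still count in the numerator) and the impact factor doubles:

```latex
\mathrm{IF} = \frac{200}{100} = 2.0
\qquad\longrightarrow\qquad
\mathrm{IF}' = \frac{200}{50} = 4.0
```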
Another less insidious tactic journals employ is to publish a large portion of its papers, or at least the papers expected to be highly cited, early in the calendar year. This gives those papers more time to gather citations. Several methods, not necessarily with nefarious intent, exist for a journal to cite articles in the same journal which will increase the journal's impact factor.[25][26]
Beyond editorial policies that may skew the impact factor, journals can take overt steps to game the system. For example, in 2007, the specialist journal Folia Phoniatrica et Logopaedica, with an impact factor of 0.66, published an editorial that cited all its articles from 2005 to 2006 in a protest against the "absurd scientific situation in some countries" related to use of the impact factor.[27] The large number of citations meant that the impact factor for that journal increased to 1.44. As a result of the increase, the journal was not included in the 2008 and 2009 Journal Citation Reports.[28]
Coercive citation is a practice in which an editor forces an author to add spurious self-citations to an article before the journal will agree to publish it in order to inflate the journal's impact factor. A survey published in 2012 indicates that coercive citation has been experienced by one in five researchers working in economics, sociology, psychology, and multiple business disciplines, and it is more common in business and in journals with a lower impact factor.[29] However, cases of coercive citation have occasionally been reported for other scientific disciplines.[30]
Responses
Because "the impact factor is not always a reliable instrument", in November 2007 the European Association of Science Editors (EASE) issued an official statement recommending "that journal impact factors are used only—and cautiously—for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programmes".[6]
In July 2008, the International Council for Science (ICSU) Committee on Freedom and Responsibility in the Conduct of Science (CFRS) issued a "statement on publication practices and indices and the role of peer review in research assessment", suggesting many possible solutions—e.g., considering a limit number of publications per year to be taken into consideration for each scientist, or even penalising scientists for an excessive number of publications per year—e.g., more than 20.[31]
In February 2010, the Deutsche Forschungsgemeinschaft (German Research Foundation) published new guidelines to evaluate only articles and no bibliometric information on candidates to be evaluated in all decisions concerning "performance-based funding allocations, postdoctoral qualifications, appointments, or reviewing funding proposals, [where] increasing importance has been given to numerical indicators such as the h-index and the impact factor".[32] This decision follows similar ones of the National Science Foundation (US) and the Research Assessment Exercise (UK).[citation needed]
In response to growing concerns over the inappropriate use of journal impact factors in evaluating scientific outputs and scientists themselves, the American Society for Cell Biology together with a group of editors and publishers of scholarly journals created the San Francisco Declaration on Research Assessment (DORA). Released in May of 2013, DORA has garnered support from thousands of individuals and hundreds of institutions who have endorsed the document on the DORA website.
Other measures of impact
Related indices
Some related values, also calculated and published by the same organization, include:
Immediacy index: the number of citations the articles in a journal receive in a given year divided by the number of articles published in that same year
Cited half-life: the median age of the articles that were cited in Journal Citation Reports each year. For example, if a journal's half-life in 2005 is 5, that means the citations from 2001-2005 are half of all the citations from that journal in 2005, and the other half of the citations precede 2001[33]
Aggregate impact factor for a subject category: it is calculated taking into account the number of citations to all journals in the subject category and the number of articles from all the journals in the subject category
Source normalized impact per paper (SNIP) is a factor released in 2012 by Elsevier to estimate impact.[34] The measure is calculated as SNIP=RIP/(R/M), where RIP=raw impact per paper, R = citation potential and M = median database citation potential.[35]
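A minimal sketch of the SNIP formula quoted above, with made-up numbers (the real citation potentials come from the Scopus database, so this is only an illustration):

```python
def snip(rip: float, r: float, m: float) -> float:
    """SNIP = RIP / (R / M), per the formula quoted above.

    rip: raw impact per paper (citations per paper for the journal)
    r:   citation potential of the journal's subject field
    m:   median citation potential across the database
    """
    return rip / (r / m)

# Invented example: a journal in a low-citation field (r below the
# database median m) gets its raw impact scaled up accordingly.
print(snip(rip=1.5, r=0.8, m=1.6))  # 3.0
```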
These measures apply only to journals, not individual articles or individual scientists, unlike the H-index. The relative number of citations an individual article receives is better viewed as citation impact.
It is, however, possible to examine the impact factor of the journals in which a particular person has published articles. This use is widespread, but controversial. Garfield warns about the "misuse in evaluating individuals" because there is "a wide variation from article to article within a single journal".[36] Impact factors have a large, but controversial, influence on the way published scientific research is perceived and evaluated.
PageRank algorithm
In 1976 a recursive impact factor that gives citations from journals with high impact greater weight than citations from low-impact journals was proposed.[37] Such a recursive impact factor resembles Google's PageRank algorithm, though the original Pinski and Narin paper uses a "trade balance" approach in which journals score highest when they are often cited but rarely cite other journals; several scholars have proposed related approaches.[38][39][40] In 2006, Johan Bollen, Marko A. Rodriguez, and Herbert Van de Sompel also proposed replacing impact factors with the PageRank algorithm, based on 2003 data.[41]
For more details, see:
http://en.wikipedia.org/wiki/Impact_factor
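For the curious, here is a minimal sketch of the recursive idea mentioned above: journals are scored by a PageRank-style iteration over a citation matrix, so citations from highly ranked journals carry more weight. The data are invented, and this is not the actual Pinski and Narin or Bollen et al. method:

```python
import numpy as np

# Toy citation matrix: C[i, j] = citations from journal j to journal i.
C = np.array([
    [0.0, 5.0, 2.0],   # citations received by journal A
    [3.0, 0.0, 4.0],   # citations received by journal B
    [1.0, 2.0, 0.0],   # citations received by journal C
])

# Column-normalise so each journal hands out one unit of influence.
T = C / C.sum(axis=0)

# Standard power iteration with damping, as in PageRank.
d = 0.85
rank = np.full(3, 1.0 / 3.0)
for _ in range(100):
    rank = (1.0 - d) / 3.0 + d * T @ rank

print(rank / rank.sum())  # recursive "prestige" scores for A, B, C
```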
Take for example the impact factor of an RG member: it is simply the sum of the impact factors of each formal journal publication that you have. So it is supposed that you "gain" the whole of the mean citation value of each journal. Is that true? Of course it isn't, since you may have fewer or more citations than the average of journal X where you published your article Y... So don't panic...
Dear Demetris Christopoulos, Sayed Zaheen Alam,
Thank you so much for your opinions
Best Regards
Dear All
I invite all of you to visit the following links and share your wisdom.
Best wishes.
https://www.researchgate.net/post/Is_citation_index_a_true_reflection_of_a_scientists_success
https://www.researchgate.net/post/Is_publishing_in_high_impact_factor_journals_necessary_for_scientists
https://www.researchgate.net/post/Publish_and_perish_Patent_and_prosper_What_is_your_opinion
https://www.researchgate.net/post/Is_research_influenced_by_funding_agencies
Dear friends,
The IF in its current form is simple, but the question is whether the IF is actually fair and meets the most desirable requirements for the purpose for which it is proposed. In the cartoon posted by our friend Talib, the derivative paper (the one about the oranges) can receive a high IF while the original paper actually receives a lower IF. Under the current criteria this is not hard, because the system leaves room for it.
That no better criterion currently exists does not mean that we should accommodate ourselves and leave it at that. That would not be wise.
I do not agree that it is the best we can do for now. I think we can be more creative and fairer than that.
Anyway, our friend Mario Vincenzo, among others, makes quite relevant criticisms of the current form of the IF.
But if we invented fire and got to the internet, maybe we can solve this too.
I think the key may be a greater number of variables with which to frame the articles. Some were suggested in my previous post, but that was just an exercise done effortlessly in response to the post; an international community can do something much bigger than that... I think the editorial boards can play a very important role in sorting the papers submitted to journals. After that, the scientific community can take care of raising or lowering the initial ratings. After all, we are a social network as well. The current criteria can feed new variables too; they need not be abandoned.
Perhaps a system in which whoever cites also joins in the ranking of the articles, and consequently of the journals. We are networked, are we not?
Regards,
S.
Although I am an early-career researcher, I sincerely think that the way in which the IF is used to determine the quality/value of research is not always fair. Although T. Reuters has explained how the IF is calculated, my concern is that the IF puts undue pressure on researchers and engenders unhealthy competition. Researchers' worth and CVs are rated on the basis of IF, even by those who may not have read their work. As someone mentioned above, people who know how to play the politics of publication get their papers published in some journals with a high IF, even though such papers may not be of higher quality than papers published in journals with a lower IF. My argument is not that the IF should be discarded completely, but that we need a better way of evaluating research outputs.
Dear Emeka W. Dumbili
Thank you for your opinion
Best Regards
Dear @B.R. Rajeswara Rao, I suggest that you share your questions with RG members and/or selected researchers, as offered by RG. In that case the visibility of your threads will be much better and you will get many responses! Otherwise your questions will remain without answers, as is the case now! Regards!
Hi Raheam A Mansor Al-Saphory
Though the regulatory bodies of higher education and research and the policy makers use the impact factor and h-index for rating the quality of scientists and publications, I am of the opinion that these two alone cannot judge the quality of scientific communications. So many journals have already been exposed for improving their IF by unethical ways and means. As pointed out by our fellows in this thread, even an ordinary paper can get good citations while a really good paper may not. The same holds for journals. So academicians, scientists and policy makers should be very careful in these matters.
Hello:
I agree with Dejenie A. Lakew: the IF is being abused, and many researchers believe it is the only measure of quality.
The impact factor is a methodology based on bibliometric laws. The way it is computed for the ISI Citation Indexes (now Web of Science) is by taking into consideration the number of times the articles in a journal are cited in the two years following their publication; for example, an article published in 2004 is counted through its citations in 2005 and 2006.
According to Eugene Garfield, creator of the ISI Indexes, the highest citation rate a good article might receive is between 3 and 4 citations. I recommend that you read the evaluation of the Science Citation Index done in 1976 for the NSF, entitled "Evaluative Bibliometrics", by Francis Narin, available on the Internet. Among other things, it confirmed what many research papers have said: that the ISI Indexes are biased toward the very top English-language journals of the United States. It also said that, up to 1976, the source excluded about 80% of the world literature. Furthermore, the evaluation done by Narin is very accurate and balanced, stating many problems of the source that people often overlook. This is not to diminish the value of the source, because it is excellent in covering US literature published in the best US journals; kudos for the US! We cannot ask that one source fulfil the needs of every country or, for that matter, every literature. However, similar tools are coming out that are more promising in striking some balance: SCOPUS, Scimago, SciELO for Latin America, and Google Scholar rankings. What is important to keep in mind is that one size does not fit all. The best way to go is to use multiple sources to gather the evaluative information.
No matter whether you like the IF - it is used.
No matter whether it should be used to rank journals - it is used (also for ranking scientists).
No matter that it is skewed - it is used.
So you live in a world where the IF is used: either you conform, or you are out of the game.
PS. I personally do not like the IF.
Hi,
The abundance of answers on this subject shows how controversial it is. The impact factor is a measure of the productivity and quality of journals and disciplines. The correct way to calculate it is to divide the citations to a discipline in the year 2014 (for example) by the number of publications in that discipline in the last two years (2012, 2013). This measure neutralizes the type and size of the journals published in a discipline. True, the measure is not a perfect evaluation of journals and disciplines, but what is the alternative?
The years of publication are called "windows". The Journal Citation Reports (JCR) also measure the impact factor at an aggregate level with a 5-year "window". Scopus uses a 3-year "window" for its impact factor.
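A small sketch of this "window" logic (the function and variable names are my own, for illustration only):

```python
def impact_factor(citations_to_year, published_in_year, census_year, window=2):
    """Windowed impact factor: citations received in census_year to items
    published in the preceding `window` years, divided by the number of
    items published in those years (window=2 for the classic JIF, 5 for
    the aggregate JCR measure, 3 for Scopus, as noted above)."""
    years = range(census_year - window, census_year)
    return (sum(citations_to_year[y] for y in years)
            / sum(published_in_year[y] for y in years))

# Invented journal: citations received in 2014 to items from each year,
# and the number of items published in each year.
citations_to_year = {2012: 260, 2013: 190}
published_in_year = {2012: 160, 2013: 140}
print(impact_factor(citations_to_year, published_in_year, 2014))  # 1.5
```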
Hi: For your question, "Is the impact factor and its calculation method fair and equitable?", my answer would be "NO, NOT ALWAYS". Regards
Not only has the IF been abused to serve the interests of some businesses, but citations have also been played with. An ongoing game, between groups in several countries, goes as follows: cite me and I'll cite you; each time you do it for me, I'll do it for you...! It qualifies for a song, doesn't it?!
The impact factor is someone's creation, but it has been blindly adopted. A researcher knows how important his research is. The IF is misused and hence creates a ruckus.
The impact factor is one of the important criteria for the evaluation of research.
Dear Dejenie A. Lakew,
You are right: some impact factor journals agree to consider your article for review, and after eight months or more have passed you receive an apology ("your paper does not meet the scope of our journal").
My question: what is the role of the editor, when the researcher has been waiting all this time for a proper reply?
Dear Subhash C. Kundu
You are right: the impact factor is an important criterion, but it is sometimes misused.
How can we ensure that it is not misused?
The use and misuse of citation analysis in research evaluation! On the overuse of citations: "Much of the earlier discussion concerned selective use of citations. Quite a common problem is the reverse: providing a long list of citations to support a single statement when fewer would be sufficient. If it is important that the work of the authors of all the various works be acknowledged, or if the intention is to provide a comprehensive review, then a long list of citations is appropriate. Otherwise it can make a paper unwieldy, and the rule of thumb of selective citation described earlier could be adopted..."
http://link.springer.com/article/10.1007%2FBF02458392#page-1
http://www.who.int/substance_abuse/publications/publishing_addiction_science_chapter4.pdf
Dear Ljubomir Jacić
Thank you so much for the interesting information.
Impact factors have become a money-making criterion. For example, a journal with a high impact factor (say more than 4) may charge US$1,200 to US$2,500 as the APC for a paper of fewer than 10 pages, and more for papers with more pages. That is a one-year salary for people working in Indian private colleges. Those people cannot publish even though they do good research. People should judge the quality of the work by looking into the problem addressed, the methodology and its applications, not a mysterious real number which is, in fact, not real.
A Broken System: Nobel Winner Randy Schekman Talks Impact Factor and How To Fix Publishing: " What people need to do to evaluate the impact is to read a scholar’s paper, not use surrogate measures. I think there’s no substitute for reading the content and having active experts in the field judge whether the work has meaning and impact..."
http://lj.libraryjournal.com/2013/12/publishing/a-broken-system-nobel-winner-randy-schekman-talks-impact-factor-and-how-to-fix-publishing/#_
Hi Raheam!
The impact factor business is a delusion as it accounts for quantity and not quality. As editor of an international journal I wrote an editorial on this issue which you find attached. I think it clarifies much of your concerns.
Regards, Burg Flemming
I agree with Ljubomir Jacic's observation. Surrogate measures like impact factors are bound to be misleading in some situations. Particularly in developing countries, where the social sciences lack recognized platforms for publication, many good papers are published in unknown omnibus journals. Hence these do not get properly noticed and evaluated. Surrogate measures need to be replaced by some more direct mechanism for evaluating content. Many a good contribution in the social sciences remains buried in unknown omnibus journals in poor countries.
Hi everyone!
As pointed out by me above (and detailed in my editorial), impact factors as indicators of quality are a sham. It is easily demonstrated that journals covering many scientific disciplines by implication have much higher impact factors than single-discipline journals, because the overall readership is very much larger. Let's take 'Nature' as an example. It has an impact factor (IF) of around 30. What is completely overlooked is that this impact factor is the sum of the impact factors of the individual disciplines it covers. Let's assume Nature covers 10 disciplines (in reality probably more), and let's further assume that all the disciplines have the same IF (which of course they do not); then the individual impact factors are 3. This is no better than the IFs of other popular journals covering just single disciplines. The high ranking allocated to journals such as Nature is thus completely unjustified.
The disturbing aspect of this is that supposedly intelligent people (among them many scientists, institute directors and officials of science funding organisations) have (uncritically) been misled to believe that IFs are a measure of quality.
Best regards, Burg Flemming
Hi,
Considering the huge amount of literature published these days, it is inconceivable to read every paper published, and the impact factor gives some indication of the quality of the journal where a paper appears. Besides, there are additional measures that can be used, the article influence score, for example. Also, other databases have devised their own impact factors (Scimago, Scopus). If the impact factor does not seem reliable, one can use other measures instead. There are various.
@Dejenie A. Lakew: Brilliant reply! Today's research has shifted from a research orientation to an impact factor orientation... thanks to all these "numbers"!
What do you think: why has ResearchGate introduced the h-index instead of the IF?
https://www.researchgate.net/post/Why_the_impact_factor_metric_in_RG_has_been_disabled
Dear Burghard W. Flemming
I have read your article, and you are right.