I go through many journals and websites, and I look at the quality of journals from all the major publishers, such as Wiley, Springer, ACS, Taylor and Francis, and Elsevier. Elsevier journals seem to have higher impact factors than journals from other publishers. Why is that?
Not at all. I have published in journals with impact factors ranging from less than 1 to more than 30. The quality of the reviewing and editing process, and the care that editors and reviewers put into their jobs, is essential to the quality of a journal. In short, I know of some very good low-impact journals (which will remain unnamed) that have very high standards, and as a result the quality of the research published in them is generally very high - i.e. careful, error-free, well documented, etc. On the other hand, I also know of examples of very high impact factor journals that have actually asked me to review my own submission, or that of my colleague across the hall. The higher-impact journals receive such a large mass of submissions that care in editing and in the selection of referees is often lacking.
The impact factor of a journal reflects the frequency with which the journal's articles are cited in the scientific literature (Saha et al., 2003). It is a reasonable indicator of journal quality, but it has both strengths and weaknesses. Indeed, other metrics can also be used to judge the standard of a journal: the Index Copernicus, the citation half-life, and the immediacy index.
I am pleased to send you a link about Journal Impact Factor:
One of the most important reasons why Elsevier journals generally have higher IFs is the publisher's high "visibility": Elsevier is the biggest publisher of scientific journals in the world, with roughly 2,000 titles; for comparison, the second biggest, Taylor and Francis, has about 1,000. As for quality, one should consider the following points. The IF of a given journal is an indication of the citation count of an "average" hypothetical article published in that journal. It is taken as an indicator of the quality of the journal as a whole, but it cannot be directly linked to the quality of any particular article published in that journal, because a specific article can have more or fewer citations than the IF (or even none at all). When it comes to the quality of that article, the situation becomes even less clear. Suppose an article is cited 100 times; this does not necessarily mean that its quality is good (regardless of the journal's IF). It could have been cited so often precisely in order to be criticized! Therefore, never rely only on the number of times an article is cited when judging whether it is good or bad.
It is not a general rule that Elsevier journals have high impact factors; Wiley, Springer, and Taylor and Francis journals also have high impact factors. Here is a simple calculation of the IF of one journal, PROGRESS IN CRYSTAL GROWTH AND CHARACTERIZATION OF MATERIALS, for 2011:
Journal Impact Factor (2011)
Cites in 2011 to items published in 2010 = 2; in 2009 = 44; sum = 46
Number of items published in 2010 = 3; in 2009 = 5; sum = 8
Calculation: cites to recent items / number of recent items = 46 / 8 = 5.750
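As a quick sanity check, here is a minimal sketch of that arithmetic in Python (the function name and layout are purely illustrative, not part of any official JCR tool):

    def impact_factor(cites_to_prev_two_years, items_in_prev_two_years):
        # Two-year impact factor: citations received in year Y by items
        # published in years Y-1 and Y-2, divided by the number of
        # citable items published in those two years.
        return cites_to_prev_two_years / items_in_prev_two_years

    # Numbers from the 2011 example above.
    cites = 2 + 44   # cites in 2011 to items published in 2010 and 2009
    items = 3 + 5    # items published in 2010 and 2009
    print(impact_factor(cites, items))  # prints 5.75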
The IF is based on the ratio of the number of citations to the number of published articles. That raises a doubt for me: if the editor of a journal compelled all authors to cite articles from the same journal, its IF would increase, wouldn't it? Is that correct or not?
OK, Elsevier has some advantages over other publishers. First, its presentation style is better: the journal title and the Elsevier logo above the article title already increase the reader's interest and look good, whereas other publishers draw attention only to the article title. Generally, I prefer to read Elsevier journals.
Regarding your doubt: yes, hypothetically, if an editor compels authors to cite articles published in the same journal, this would increase the journal's IF. However, in practice such an "action" is highly improbable, because first there is a peer review process (usually 2 or more external experts) and second it would spoil the editor's reputation...
As to the IFs: of course, there are journals with high IFs from other publishers. When I wrote "generally", I meant that, for most scientific areas, Elsevier journals have higher IFs "on average". Certainly, there are exceptions in some areas, where journals from other publishers have higher IFs.
The impact factor is one of several measures available and, in many instances, it is misleading to judge the absolute quality of a paper by the IF of the journal it appears in. You can easily find interesting studies on this, openly available on the web.
I agree with Pratik. The IF does not necessarily indicate the quality of an article. It is very important to take into account the number of citations of the article itself (and to analyze the journals where the article was cited).
The IF is an approximation of a journal's impact (its visibility), nothing more, and it varies widely across knowledge areas.
The impact factor is generally accepted as an indicator of journal quality, and people these days seem really crazy about it. But the IF is simply an indicator, nothing more. Some journals directly or indirectly force authors to cite articles published in that particular journal; once I learned this, I realized that one should not rely on the IF alone to assess the quality of a journal. In addition, open-access publishing has become a lucrative business these days, in my observation. It is worrisome in terms of quality if money plays a greater role in the publication of scientific research articles.
For some journals, yes, like Nature, BMJ, and others with very high impact factors. If you are the author of a manuscript, how do you choose your journal? Of course, anyone will wish for and go for a higher IF, at least at the present time. Citations are also a variable, but for some people it remains a dream to publish in a very high-IF journal such as Nature, without caring about citations.
The impact factor of a journal reflects the frequency with which the journal's articles are cited in the scientific literature. It is derived by dividing the number of citations in year 3 to any items published in the journal in years 1 and 2 by the number of substantive articles published in that journal in years 1 and 2 [1]. For instance, the year 2002 impact factor for Journal X is calculated by dividing the total number of citations during the year 2002 to items appearing in Journal X during 2000 and 2001 by the number of articles published in Journal X in 2000 and 2001 (Figure 1). Conceptually developed in the 1960s, the impact factor has gained acceptance as a quantitative measure of journal quality [2]. It is used by librarians in selecting journals for library collections and, in some countries, to evaluate individual scientists and institutions for the purposes of academic promotion and funding allocation [3, 4]. Not surprisingly, many have criticized the methods used to calculate the impact factor [5, 6]. However, empirical evaluations of whether or not the impact factor accurately measures journal quality have been scarce. For more information, read this article:
Impact factor: a valid measure of journal quality?
by Somnath Saha, Sanjay Saint, and Dimitri A. Christakis
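Written out as a formula, the definition quoted above (the year 2002 impact factor for Journal X) amounts to:

\[
\mathrm{IF}_{2002}(\text{Journal X}) = \frac{\text{citations in 2002 to items published in Journal X in 2000 and 2001}}{\text{number of articles published in Journal X in 2000 and 2001}}
\]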
As Peter Burns pointed out above, the care actually applied during the review process might be an additional, and even more helpful, metric for describing the added value to the author.
Generally, the life sciences have high IFs, while the humanities, and especially interdisciplinary fields, have lower IFs (I work in both).
Originality of thought might remain unmeasured, however...
In one or two book chapters (with editors from India ;-) I have tried to outline ways of assessing quality for interdisciplinary research:
* Education and literature for development in responsibility
* Quality assurance in transnational education management.
Dear Nidheesh, the Impact Factor (IF), like any other attempt at measuring quality, has its advantages and disadvantages. As a solution to the problem of effectively measuring scientific contribution, the IF is definitely the most popular method, but like any other such measure it should not be considered the best or most effective of its kind. In the last few years, the IF's shortcomings have become more visible and a few other methods have emerged. The following article might help shed some light on the problems of, and alternatives to, the IF: http://www.currentscience.ac.in/Volumes/104/10/1267.pdf
There are also many non-Elsevier journals with high impact factors. For example, Energy and Environment from the RSC has an impact factor of 9.9, and one of the ACS journals has an impact factor of 21. But most Elsevier journals do have high impact factors, which may be due to their high visibility, as mentioned by Prof. Svetlozar Velizarov. I share Prof. Peter Burns's opinion: there are many low impact factor journals that maintain high quality. For example, Water Environment Research has an impact factor of 0.89, but the quality of its documentation and editing is very good, in part because the journal has been running since 1923.
I endorse the views expressed by Prof. Peter Burns. However, for young researchers it will always remain a challenging issue, since journals are vigorously engaged in promoting the IF concept. An informative document is attached for everyone's perusal. With best wishes.