What is your opinion on the impact factor?
Does it measure journal quality, only journal popularity, or something else?
Share your experiences, thoughts, opinions ...
Here are some other opinions:
http://news.sciencemag.org/2013/05/insurrection-scientists-editors-call-abandoning-journal-impact-factors#.UfBnRNWpia0.facebook
What is the impact factor:
http://en.wikipedia.org/wiki/Impact_factor
In my opinion, any evaluation "mark" for journals cannot be a standard, because once it is used and the rules are known, editorial choices will adapt to the rules rather than to the scientific quality of the papers. If you know the Foundation trilogy by I. Asimov, the rules of psychohistory must remain unknown. But if you hide the rules of evaluation, there is no transparency.
Moreover, the citation system is based on the "rate of citation", considering only citations from the last five years. This privileges "trendy" fields with many people working on them and neglects topics pursued by fewer people. Some of those papers can become very popular a few years later; however, the original paper is often forgotten, and only the recent ones that renewed interest in the field get cited.
The paper citation index is not a good criterion either, because citation is "money" in the research system and many authors hide the contributions of other researchers for obvious reasons.
I just want to point out the danger of using these parameters in research evaluation, taking as an example what is happening in Italy, where politicians use them to evaluate institutes and scientists. By changing the bibliometric parameters, people at the top of the list can end up at the bottom and vice versa.
The only way to evaluate a research paper is to read it!
The impact factor is a numbers game, with both positive and negative aspects.
Positive: at least there is a quantitative method to show the popularity and quality of a journal.
Negative: some journals try to increase their IF through self-citations, by limiting the number of articles they publish, by accepting only articles likely to attract many citations, etc.
Anyhow, it is still a useful way to assess journal quality and popularity.
Acceptance rate and circulation could be useful in addition to IF.
I agree with you. The acceptance rate should be considered in the calculation of the impact factor.
Scientifically it would be more accurate, but it could be a problem for journals and publishers.
The impact factor is not actually a true representation of journal quality, but the quality of papers in high impact factor journals is still generally better than in low impact factor journals.
The impact factor of a journal is, of course, a major consideration for every journal and author, but for authors citations are basically more important.
Well, I think it is obvious that a relationship exists between the number of citations and the importance of a given article, but you can often find very good articles without many citations, and that does not imply they are bad. When an article is freely accessible, there is probably also a relationship between access and the number of citations (although it seems a little silly...), so an article that is not the best can still be widely cited simply because it is widely available.
Nowadays most scholarly articles are online, and the number of views (N) of every article can be counted. Dividing N by the article's number of citations (minus its self-citations) could give a rough measure of the impact of an article. That is not a measure for journals, I admit, but who is interested in that? I think the impact of individual articles is far more important.
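As a minimal sketch of the ratio proposed above, taken exactly as written (views divided by non-self citations); the view and citation counts below are hypothetical, not data from any real database:

```python
def article_impact_ratio(views, citations, self_citations):
    """Ratio proposed in the post above: views N divided by non-self citations.

    All inputs are hypothetical illustrations; this is not an established metric.
    """
    external_citations = citations - self_citations
    if external_citations <= 0:
        return float("inf")  # no external citations yet
    return views / external_citations

# Hypothetical example: 1200 views, 30 citations, 5 of them self-citations
print(article_impact_ratio(1200, 30, 5))  # 48.0 views per external citation
```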
There are a lot of other measures besides the impact factor: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0006022
I think the impact factor should be used with caution and not as a catch-all metric of impact. Furthermore, we should consider altmetrics (http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0064841 http://firstmonday.org/ojs/index.php/fm/article/view/2874/2570) and forms of impact beyond the scientific community.
I suggest reading this article http://www.mendeley.com/research/download-vs-citation-vs-readership-data-case-information-systems-journal-3/
The impact factor, a number calculated annually for each scientific journal based on the average number of times its articles have been referenced in other articles, was never intended to be used to evaluate individual scientists, but rather as a measure of journal quality. However, it has been increasingly misused in this way, with scientists now being ranked by weighting each of their publications according to the impact factor of the journal in which it appeared. I have seen curricula vitae in which a candidate annotates each of his/her publications with the journal's impact factor and finally states the cumulative impact factor. Even some universities and institutions weigh a candidate's performance by impact factor instead of by the work itself and its citations.
Some research is well cited even though the journal has no impact factor. Does that mean the work is worthless? So I would say the impact factor should not be used for ranking authors. The h-index can be used instead: it is important for authors because it counts the citations of their papers, not the journals in which they were published. The impact factor is only for ranking journals within their fields.
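For readers unfamiliar with how the h-index mentioned above is computed, here is a minimal sketch; the citation counts are invented purely for illustration:

```python
def h_index(citation_counts):
    """h-index: the largest h such that the author has at least
    h papers, each cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical author with six papers
print(h_index([10, 8, 5, 4, 3, 0]))  # 4
```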
The impact factor is one method of measuring the contribution and influence of a publisher in a research environment. It is used to decide where to publish good and challenging research, and it also helps to show the contribution of one's research to society. However, there are many publishers that do not have an impact factor but are still reputable because they are institution-based.
Index Copernicus is still better.
Citations of a journal's articles indicate the significant impact of that research.
The h-index is also a measure of journal quality. The h-index gives the citations of a journal without counting self-citations, since many journals increase their impact factor by forcing authors to cite more articles from their own publications.
The impact factor (IF) usually refers to the average citations an article attracts within the first two years after its publication. Therefore, the IF is strongly biased towards "hot" topics. I prefer the 5-year IF, which accounts for citations within the first 5 years after publication. Another good indicator of a journal's current quality is the trend of its IF. A journal with a continuously decreasing IF over the last 5 years or so is probably doing something wrong; a good journal has a continuously increasing or stable IF over the years.
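A minimal sketch of the standard two-year calculation described above; the numbers are made up (real values come from the Journal Citation Reports data):

```python
def two_year_impact_factor(citations_to_prev_two_years, items_published_prev_two_years):
    """Classic two-year IF for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return citations_to_prev_two_years / items_published_prev_two_years

# Hypothetical journal: 600 citations in 2013 to its 2011-2012 articles,
# which numbered 240 citable items
print(two_year_impact_factor(600, 240))  # 2.5
```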
Aha, the impact factor thing. Let's use the RG Score as well as the number of viewers an article attracts per stipulated period (1, 2 or 3 years) and comments/ratings by the common folk.
In my country and the countries of the former Yugoslavia, many researchers publish their "results" in the Technics Technologies Education Management journal in Bosnia and Herzegovina! I have seen many papers there that do not deserve to be published even as a student's paper. It is a shame! Please take a look, the link follows: http://www.ttem.ba/index.html
I know that many of them will read this; attacks will follow! To persuade yourselves, take a look at some of the published papers! Pay and get the IF raised! What is the outcome of the existence of such journal(s) whose subjects are technics, technologies, education, management, architecture, urbanism, art, etc.? Thanks for comments!
Here is the link to search for papers in the archive: http://www.ttem.ba/archive.html
Dear all,
I would like to let you know that I posted a related question months ago. No convincing answer up to now.
I am strongly convinced that it is all fake.
best regards, H.-M.
@Dr. Gopalakrishnan, the h-index is not a number giving a journal's citations excluding self-citations. The actual impact factor itself excludes self-citations. The h-index is for authors only.
Yes, the h-index is very good for measuring the quality of a journal. The h-index is a very specific and accurate method, but not so popular.
Evaluation via indexation is a part of everyday life, whether we like it or not. We index or rank celebrities, sports teams, job applications, perceived dangers and even hairstyles. Human Resources departments of large companies often filter job applications by unspecified indexation criteria. The issue is perhaps not so much the indexation itself, but its mindless use in situations where it is not warranted, or simply inappropriate.
How often does something bad happen when an administrator or responsible person or body excuses their liability with the argument that they ticked all the boxes as directed and the answer came out OK?
Ranking of journals can be done in a reasonably meaningful way:
http://www.scimagojr.com/journalrank.php?area=3100
Here the rank is given according to a number of different criteria, and you can sort and combine rankings to make your own index that will reflect your preferences (but you need to download the table to an Excel spreadsheet to do that). That's even interesting. The point about this table is that it shows how different criteria lead to different rankings having different interpretations.
The issue of ranking individuals, as with the h-index or one of its many variants, is more divisive, and ranking job or funding applications by h-index is equally misguided. I doubt whether even using indexation as a "preliminary filter", on the grounds that decision makers cannot possibly assess all applications, is justified; but with hundreds of applications per grant or per position, what is the alternative?
Ranking individuals across fields is totally silly (though it has been tried), and even within a given field it is problematic. In Astronomy there are now experimental papers published with hundreds of authors. These cannot reasonably be compared with, say, theoretical or technical papers having only two or three authors.
If you are to play this game you have to compare like-with-like. But even then I would suggest there are serious doubts as to whether such a comparison is a proper basis for drawing important comparative conclusions, especially when it comes to jobs and grants where the decision can be life-changing.
Sadly, ranking systems are here to stay - so let's try to urge those who use them to use them sensibly.
Yes. There are other options for measuring journal quality: a university-based journal (academic environment) from a public and reputable university, the reputation of the editors and reviewers, and a professional journal with high integrity in its field may all be considered indicators of a quality journal.
As scientists, we should perhaps ask ourselves whether we need journals at all, especially in the light of the excellent e_print archives.
Nowadays we do all the publishers' typesetting for them (LaTeX) and the e-Prints look identical to the final products. Moreover, with the papers in the hands of publishers, access to articles (and in particular access by taxpayers who have indirectly supported the research) costs $30 - $50 just for the privilege of downloading a copy to read!
This is at least an argument for open journals, and, happily, we do see a trend here in that many journals are becoming, to a greater or lesser extent, "open".
So what are we missing by forsaking the journals? The answer is, obviously, peer review. We should then ask another question: what is the purpose of peer review? A fairly clear answer is that it provides a benediction by one's peers which, in a high-standing journal, is thought to be of value. In fact I find peer review extremely valuable when the reviewer is diligent in pointing out errors or misconceptions, and references that need citing.
If I post a paper on the e-Print archive, people who pick it up will send emails recommending such corrections anyway, and, if people find what I write useful or interesting, they will cite it. The article databases note citations to papers in e-Print archives, so it is still possible to do your citation count.
I am not saying we should do away with journals! I am saying that we should ask ourselves whether the system needs changing and ask what is the role of the journals in the light of the internet.
If the purpose of the journals were simply to provide an additional parameter for calculating an index of worthiness of an individual (the H-index), that would worry me since I am even more concerned by the abuse of such indexes than I am by journals that charge people for access to "their" material.
There are many other factors which may measure journal quality, such as the review process (how rigorous is it?), citation in textbooks (which makes the research familiar to the student community) and commercialization of the research.
To me there are two distinct issues. One is how we measure the quality of journals and the other is whether such information should be used to assess the quality of a researcher's work.
Journal quality is difficult to assess, especially given the recent proliferation of new journals in some disciplines. While not accurate, the IF still represents the easiest way to gain an indication of journal quality. If we are not happy with this, then we need to develop new quality criteria which include ratings of the editorial board, the quality and criteria of the review process, publisher quality, etc. The problem with such a broad-based approach is that it is difficult to establish a truly objective assessment measure for each criterion. That is why we have been stuck with the IF for a while, and until someone can come up with a similarly simple way of more thoroughly assessing journal quality, we cannot expect a change for some time. Having said that, I think there is an immediate need to normalize the IF values allocated to journals. It makes no sense if the IF of a top journal in one discipline is 50 while that of another discipline's top journal is 2. Unless these numbers are normalized on the basis of what is known about the different disciplines, this will look like comparing apples and oranges. It will also continue to raise questions about why we need a journal quality index at all.
The question of whether the IF of a journal should be used to assess the quality of a researcher's work is a separate issue. I agree with DORA that it is wrong to use the IF to judge the quality of a researcher's work. The quality of such work is better judged by individual assessment, which may include citations and other disciplinary criteria. But for this to be useful it must be objective, not subjective. As we know, there are many papers published in high IF journals that do not attract many (or any) citations. So obviously, in those cases, the IF of the journal is no reflection of the quality of the published work.
As a suggestion for discussion: if it were possible to come up with new normalized ratings for all journals, it would be possible to derive a new approach for assessing the quality of research papers, defined as a paper quality index (PQI) as follows:
PQI = Journal Rating x Paper Citations x Discipline Significance Index
This is a very simplified approach to assessing paper quality which places emphasis on citations and the disciplinary significance of the research topic. It could be argued that journal rating should be left out entirely, and this could still work. To ensure fair comparison, thresholds could be set to reflect the number of years since publication. For example, the threshold value PQI(1) for papers published within the first year would differ from PQI(5) for papers published five years ago, etc.
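A minimal sketch of the PQI suggestion above; the journal rating, discipline significance index and year-dependent thresholds are all hypothetical inputs that whoever adopts the scheme would have to define, not established metrics:

```python
def paper_quality_index(journal_rating, paper_citations, discipline_significance):
    """PQI = Journal Rating x Paper Citations x Discipline Significance Index,
    as proposed above. All three inputs are assumed to be supplied (and
    normalized) by the user of the scheme."""
    return journal_rating * paper_citations * discipline_significance

# Hypothetical paper: journal rated 0.8, 12 citations, discipline index 1.2
pqi = paper_quality_index(0.8, 12, 1.2)
print(round(pqi, 2))  # 11.52

# Hypothetical year-dependent thresholds, e.g. PQI(1) vs PQI(5)
thresholds = {1: 2.0, 5: 10.0}
years_since_publication = 5
print(pqi >= thresholds[years_since_publication])  # True
```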
@Sam
Nice! - simple and constructive too.
Something like what you suggest could certainly work in any one field and of course it may be useful for some purposes. One of the issues in my field, astrophysics, is that indexation across sub-fields makes little sense: it's the 500 author papers versus the 3-6 author papers. Both impact in different ways. The 500-author pieces tend to have immediate impact (like the Higgs boson papers). The 3-6 author papers have a long-term impact lasting for many years and sometimes decades. At the 1-2 author level it's even more extreme: Einstein is frequently mentioned, but never gets any of his papers cited in the bibliography!
Some other opinion here:
http://theconversation.com/quality-not-quantity-measuring-the-impact-of-published-research-18270?utm_medium=email&utm_campaign=Latest+from+The+Conversation+for+19+September+2013&utm_content=Latest+from+The+Conversation+for+19+September+2013+CID_565d8eef834cd381a6be494022f7db30&utm_source=campaign_monitor&utm_term=Quality%20not%20quantity%20measuring%20the%20impact%20of%20published%20research
@Noor
I am certainly not against open journals - quite the contrary, I would do away with for-profit journals that charge money each time someone wants to consult an article. The comment that you cite mentions that authors may opt for the peculiar privilege of publishing their articles with '"gold" access, which *allows* readers to freely obtain articles via publisher websites' (my emphasis). Personally I don't get it.
That commentary focuses on the biological/medical industry and the conclusions reached there would be entirely inappropriate in, say, physics, mathematics or astronomy - the "pure" sciences, by which nowadays I mean "non-commercially supported". In the arXiv e-print archive and other similar archives we grant free and open access to publishing and access to that published work. There is no "deal" between the publishers of the learned journals and the community that allows or disallows posting of e-print versions of articles.
This is not to belittle the role played by the publishers of the journals. Their role is changing with time as they themselves address this issue and ask themselves how they can maintain profits in this open world. What are they offering in exchange for the exclusivity they claim on scientific knowledge? It is up to individual scientists and their Institutes to answer that question for themselves.
It's often only the University administrators who demand evaluation by indexation and demand work that is ratified by external agencies and indicators. As a scientist I find it more useful to read the works on which candidates base their applications for positions than to rely on ad hoc indexation schemes that reflect little of any real value. There is a fine article about this by Colin Macilwain, published in Nature (August 15th), and it's free access (!):
http://www.nature.com/news/halt-the-avalanche-of-performance-metrics-1.13553
Thoroughly worth a read.
In writing my book on the early universe I have avoided, wherever possible, citation to articles that are not freely available to readers. There are important exceptions, like a few articles published in Nature and similar journals, but they are in the minority. With the disappearance of libraries from university departments, citing text books for didactic reasons is an area of difficulty. Conference proceedings are another problem area where there is often no free access to articles - but that's where the e-print archives come in.
Budgets for research decrease: we need open access. I am equally sure that we do not need indexation.
@Noor
You started the discussion and a number of diverse and interesting opinions have been put forward - well done. The question is now: can you say what YOU think now!?
About four weeks ago I wrote about "scientific journals" where you pay and get your paper published, no matter what the quality of the paper is. Just submit, pay, and get your paper printed with a false impact factor! Two days ago, in BLIC, a leading newspaper in Serbia, a large article was published about this "scientific" phenomenon, saying that millions of euros are paid by institutes and universities every year for printing scientific papers! The prices are in the range of 300-1000 euros!
Three "journals" were discussed, namely METALURGIA INTERNATIONAL, HEALTHMED and TTEM!
It is very sad and disappointing, isn't it! :(
I do know that I can expect downvotes now from people who are members of RG but publish in these journals, but I had to raise the ALERT!
Regards,
Ljubomir Jacic
See for yourselves using the following links: http://www.healthmed.ba/ , http://www.metalurgia.ro/ , http://www.ttem.ba/
Regards,
Ljubomir Jacic
@Ljubomir
I agree with your remark about "pay-to-publish" journals. There are a few in my field but they contain little of any particular merit (insofar as I have looked!).
It is important to distinguish between "page charges" and "Pay-to-publish".
If you get a paper published in, say, Physical Review Letters, there are indeed heavy page charges - but the first problem is to get past the referee! The standard of refereeing there is generally excellent and very tough. You cannot publish without the referee's agreement, no matter what you pay. That's why it is among the premier physics journals. So this is not "pay-to-publish".
The problem is that the "reputable" journals need to earn money to survive and (perhaps cynically) their publishers need to pay their share-holders. It is essential that they have page charges in one form or another. Electronic publishing helps: there are no page charges but it brings us different problems (like refereeing).
We also need control (and I do not mean "censorship") over what gets published: there is already too much good stuff to read out there without diluting it with stuff that is hardly worth glancing at. The referees and journals thereby play a significant role, acting as a filter. If it's in Physical Review Letters and in your field, it's probably worth reading and studying, and if it should turn out that you don't like the paper, you are free to ignore it.
But referees do make mistakes. I believe the Crick and Watson paper announcing the double-helix, certainly one of the most important papers of the 20th century, was rejected (and I am sure someone will correct me if I am wrong!). But it did get published and the world changed. A scientist can always ask for an alternate referee or even turn to a different journal, or simply post it on the arXiv e-print site and not ever publish it in a journal - that is not rare, and some of those "non-published" papers on arXiv are truly excellent .
We have not yet solved all the problems that come with electronic publishing. Who is allowed to edit a Wikipedia page? Why are some archives restricted to "professionals"? and so on. Whatever we do it will be a compromise.
@Bernard Jones, I do agree with your comments. Your statement is very true: "It is important to distinguish between 'page charges' and 'pay-to-publish'"!
@M.M. Noor, you have started a good discussion!
Good Luck!
Ljubomir Jacic
Hi everyone, This is an interesting conversation :-) And, of course the h-index has been raised more than once. May I suggest that you check the database record data for your papers & your own page. I have seen a fair few errors over time - mistakes in authors' names & papers mis-attributed on author "summary pages" in databases. And, if you haven't registered yourself with ORCID (www.orcid.org), it's definitely worth a look.
@Sandra
Thanks.
I followed your link and perused some of the arguments in favor of such an "organisation". It seems to be a "library/publisher thing" more than a "science thing", but, like ResearchGate, it may have some value to some scientists in some fields.
As a publishing scientist and commercial entrepreneur I personally do not think I would see value in joining. But I might point out that the goals of the site are not explicitly stated - it took some time to get to that and much of the information is only available to those who join.
It seems that the goals of 'orcid 'might be at variance with the views I personally hold of what "open" should mean (but then again, it's not a very explicit site). I think there may be better ways of achieving "open collaborative science" than that. Most likely it depends on the field of research and other factors. I confess that my feelings about this may also be driven by a personal phobia I have about sharing information as requested on that site.
If I may make a suggestion - why don't you start a discussion on Research Gate on this very topic? You would then have the opportunity of listening to widespread opinion and of clarifying some of the questions people may have about this way of promoting science or the individuals who devote their lives to it.
Of course, I am old-fashioned: I "do science" because it is something I enjoy. It's my life and I love it! I do not do it because it boosts some banal ranking system that purports to "value" me, my colleagues, or what I write. But that's perhaps a topic for your hopefully to-be-created discussion on this site!
@Ali
Yes, that one came up in a similar discussion on another group.
As you see from that website, the h-index correlates with other rankings, but the dispersion is huge. You can improve the situation by focusing on specific subject areas, like "physics", but even then it is not a one-to-one relationship.
But publishing scientists already know which Journals are important - that's why they submit their papers there, and why the ranking of the journal improves. They don't need an index to tell them that Reviews of Modern Physics is a good journal in which to publish.
I think such indexes have more impact on the Journals themselves, and in particular on the publishing companies. A high rank in the journal attracts a lot of money from subscriptions and so is good profit for the shareholders. I don't think they care what is published as long as the ranking remains high.
It's no more than a sports league table - all publishers strive to get their rankings to move upwards.
Cynical? - perhaps, but I still need to see a strong argument why open journals with free publication costs and free access are necessarily a bad thing. OK - we may not get the same standard of refereeing - but I feel I can evaluate papers in my own field by reading the paper and talking about it with my colleagues. The standard of refereeing in major journals is in any case far from uniformly good and is certainly not free from bias.
Thanks Bernard for (always) good opinion sharing.
THES (the Times Higher Education Supplement) still uses citations as 30% of its marking scheme for the university rankings. 30% is a very high portion. That shows citations are still very important for universities and researchers, and citations are the IF of the journal. At this time there is no better way to evaluate quality other than citations. Do you agree or disagree, or does anyone have a further opinion?
These are the THES criteria used to evaluate the world ranking:
The essential elements in our world-leading formula
Underpinning the World University Rankings is a sophisticated exercise in information-gathering and analysis: here we detail the criteria used to assess the global academy's greatest universities
The Times Higher Education World University Rankings 2013-2014 are the only global university performance tables to judge research-led universities across all their core missions - teaching, research, knowledge transfer and international outlook.
We employ 13 carefully calibrated performance indicators to provide the most comprehensive and balanced comparisons, which are trusted by students, academics, university leaders, industry and governments.
The methodology for the 2013-2014 World University Rankings is identical to that used since 2011-2012, offering a year-on-year comparison based on true performance rather than methodological change.
Our 13 performance indicators are grouped into five areas:
Teaching: the learning environment (worth 30 per cent of the overall ranking score)
Research: volume, income and reputation (worth 30 per cent)
Citations: research influence (worth 30 per cent)
Industry income: innovation (worth 2.5 per cent)
International outlook: staff, students and research (worth 7.5 per cent).
Further reading here:
http://www.timeshighereducation.co.uk/world-university-rankings/2013-14/subject-ranking/subject/engineering-and-IT/methodology
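As a rough illustration of how a weighted composite like the one listed above combines indicator scores (this is only a sketch of the arithmetic using the quoted weights, not the actual THES methodology, and the institution's indicator scores are invented):

```python
# Weights quoted above for the THES 2013-2014 World University Rankings
WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "industry_income": 0.025,
    "international_outlook": 0.075,
}

def composite_score(indicator_scores):
    """Weighted sum of indicator scores (each assumed to be on a 0-100 scale)."""
    return sum(WEIGHTS[name] * score for name, score in indicator_scores.items())

# Hypothetical university
scores = {
    "teaching": 70.0,
    "research": 65.0,
    "citations": 90.0,
    "industry_income": 50.0,
    "international_outlook": 80.0,
}
print(round(composite_score(scores), 1))  # approximately 74.8
```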
@NOOR - great point!
This highlights the real issue: what is the purpose of the rankings and are they effective in achieving that purpose?
NOOR's example of the THES ranking of universities is a good one, where, through careful application of clearly specified criteria, they achieve a ranking that many would consider relatively meaningful in that it produces a result roughly in line with what one might have expected, i.e. validation by accord with general informed opinion.
Not all vice-chancellors (presidents etc) would agree with this ranking, nor even for the basis of the ranking. To them this rank is most important since a lowering of rank is, to them, an indicator of declining performance. But not all Universities have the same goals nor do they have the resources of the Harvard, Oxford and Cambridge types.
What THES has done well (i.e. correctly) is to (a) present their data properly (the website you refer to is excellent), (b) provide their sources, and (c) allow comment. Many people might just look down the list of 200 rankings to see where they are without reading the article in detail - but the table is essentially meaningless without the information provided about how it was produced. THES has done a good job.
The difficulty I have is more with the banal use of the h-index and the like by job and grant application bodies and Human Resources departments who do not put this kind of effort into assuring themselves that what they are doing is meaningful within the context of their purpose. Using such indexation blindly can result in a project failing to get off the ground or even being stopped, and in individuals having their futures determined by an arbitrary and perhaps irrelevant process.
But the grant allocation and job allocation committees are caught between "a rock and a hard place". The THES rankings use citations, so the grants and jobs will preferentially go to those who do "high citation work" with little or no regard to other factors. See Colin Macilwain's excellent commentary in Nature on this:
http://www.nature.com/news/halt-the-avalanche-of-performance-metrics-1.13553
Note that I said "high citation work": high citation may simply mean "useful", like a precious data set. Precious data sets are indeed the drivers of science, but I would hope that there was more to science than the mere acquisition of data. The way "big science" appears to be going is that the optimal career strategy for a young scientist is to join a "big science" group and so have his/her name on a number of highly cited papers (among dozens or even hundreds of authors). This generates an elite who are elite merely by belonging to such a group, ie being members of a club or clique. Fortunately there has been a trend to demand that such data acquisition consortia make the data public with the resources for others to pick over the data when the consortium has finished with it.
In my own field (astrophysics) I love the competition between small high-focus groups and the big space-based science projects that take decades to get off the ground. "Small" can be very effective, and I would very much regret the day that "small high-focus science" is no longer supported. Talented small groups in non-ranked universities can still make a significant impact in their fields - provided they are adequately supported.
But budgets and resources are limited, tenured positions in universities are ever harder to get, and so on. Unfortunately, advancing science is expensive and so will be ever more the province of the richer countries. But optimistically, things will get better - something always turns up.
Breaking news on this topic!
Can we predict the citations of research even before the results are published:
http://www.dashunwang.com/pdf/2013-Science-Wang-127-32.pdf
with commentary at
http://www.nature.com/news/formula-predicts-research-papers-future-citations-1.13881
Both published in high ranking journals ;)!
Work for the weekend - we need the "magic formula" that tells us we are wasting our time.
Hello. Back to ORCID, though not in a timely fashion, sorry. I suggested that people have a look at ORCID but some weren't sure why. As a librarian I have found that databases often "attribute" papers to incorrect authors through mechanisms such as an author profile/record. I've also seen authors change their names over time (often to fit in with naming protocols used in English-speaking countries), creating database confusion. By obtaining an ORCID ID, you are able to offer a number (unique & constant) and so help avoid these issues. If you would like to see a publisher's and database's explanation, you might like to check out Elsevier's (Scopus is an Elsevier product): http://www.elsevier.com/journal-authors/authors-update/issue-4/new-orcid-id-aims-to-resolve-authorship-confusion The more accurate your author records, the better chance people have of finding, using and citing your work. I highly recommend that you check your author profile/record (or whatever your favoured database calls this feature) and ensure that all of your work is linked to your record and that no one else's work is linked to it. I particularly recommend it if you've changed your name over time. If you need to have your record changed, the database will provide an opportunity to do so. If you can't find it, talk to your librarian. She or he will be delighted to help you. Helping you share your knowledge is an important part of our professional practice :-)
@Sandra
Thanks for the clarification - the author name issue is a serious one, especially if your name in England is Jones, or even B. Jones! There are almost a dozen of the latter in Astronomy alone (I once thought of getting 6-8 of us together and writing a paper for fun).
The other issue we face is with Chinese naming conventions wherein the first given name is usually the family name (surname). Many ethnically Chinese authors in my field decide to "europeanise" their name by switching the order of names. The result is mild confusion unless the author is already known.
Then there is the problem of the transliteration of the name between scripts: we variously get the Cyrillic version translated to Zel'dovich and Zeldovich and even the odd Zeldovic, or in Chinese Xiang and Chiang. There may be rules for all that - but nobody appears to know what they are.
In my corners of science ( Physics, Astronomy or Mathematics) the main databases are the public domain arXiv and ADS databases. I believe it is up to them to sort this out if there is pressure to do so from their public (and then it will probably be that public who helps to fix it).
In any case if an author sees his/her paper wrongly noted, they will generally send an email and it will be corrected - so there are checks. If an author does not check the entries in these places their paper may not be correctly cited - well, that's tough.
While writing my book and collecting some 8000 references I have on occasion noted errors in those databases or in Wikipedia and I have either corrected them myself (as per Wikipedia) or notified those responsible. No problem.
So, with respect, I remain unconvinced that in Physics, Astronomy or Mathematics we need external bodies (and in particular publishers!), SCOPUS or an ORCID ID.
But maybe my colleagues would disagree: I am old and conservative (small 'c').
I have read the answers given by all the experts. Any criterion for analysis has its own merits and demerits. What I can make out is that the experts here are high-impact researchers with RG scores above 40. What about a youngster who starts his career as a faculty member or scientist? Where should he publish and how can he measure his impact? He has to publish in relevant journals with a high impact factor, which attract more citations. I very much agree that it is not the number of papers published but the number of quality papers which attracts peer researchers. Do not publish for the sake of publishing. All the points mentioned by the experts above should be kept in mind. Journal impact, individual impact and citation analysis all play an important role when it comes to comparison. Proper evaluation of quality, by developing appropriate criteria, is necessary, and a lot of research is going on in this area as well. Let us do quality research, publish in quality journals and be recognized by the peer research community. I believe recognition then follows automatically.
Here is my candid answer. It is very difficult for young researchers to get their papers published in high-end journals, because (i) it takes time and (ii) the expectations are very high. Many times, the initial stages of research need to be published to draw useful comments and suggestions, and also to ascertain whether a young researcher has taken the right direction. In this respect some journals with an impact factor (particularly open source) are doing their best: they review the papers sent, they charge a marginal publication fee, and it is relatively easy to get the work published in the international arena to showcase novel ideas. Therefore, I am of the humble opinion that publication must happen regardless of impact factor / indexing / citations etc., in order to disseminate knowledge in a seamless fashion.
Thomson Reuters is proud to offer a new 8th edition within the Web of Science Core Collection, the Emerging Sources Citation Index (ESCI). The Emerging Sources Citation Index is designed to extend the universe of publications in Web of Science with additional high-quality, peer-reviewed publications of regional importance and in emerging research fields...
http://wokinfo.com/media/pdf/wos_release_520.pdf
I agree with Dr. Jayaram's opinion. At some point we all should look for quality research papers, and the quality of these papers is at present identified by impact factor, indexing and citations. This information is also required to be furnished if the institute is going for accreditation or national ranking. Simply publishing without giving importance to these points may not have any value and is as good as not publishing.
Of course, the impact factor of the journal is very important. But there are many good journals whose impact factor is very low and which are not Scopus indexed.
Researchers who expect a good number of citations from other researchers should go for well-indexed journals with a good impact factor. This route needs more time and patience, which is very difficult for time-bound research work (especially student projects).
Perhaps I might refer you to my post, above, of Sep 30 2013 where I suggested that such indexing mainly serves the interest of the publishers and the bureaucrats who control recruitment. As a scientist I know how to evaluate what I read - I do not need a referee to tell me what I should read (and if I am in doubt I post a question to my colleagues on some appropriate website). My colleagues frequently ask whether I have read the latest paper by A. Person. Networking is a vital part of good research and nowadays that is easier than ever.
I certainly do not need some arbitrarily constructed index to indicate to me whether someone is worth employing in my institute or company: there are brilliant people out there who have a low 'XYZ-index' and mediocre people who have a high one.
When considering academic jobs I generally ask the interesting candidates to let me know which three of their papers they have written in the past five years I should read. Nowadays I would also ask them to make their role in the research precise and I would check that out with the lead author. That takes far less time than arguing with committees and bureaucrats. And if I have not already read those papers I will hopefully read something new and exciting.
Yes, we have to have a scale or standard for the huge number of scientific journals. We all agree that it would be very nice to publish a paper in Nature or Science, but we cannot say that journals without an impact factor are "bad" or low quality. However, there should again be a standard or scale for differentiating among journals when choosing where to submit: journals with a high rating provide good feedback and improve the manuscript, while journals without a rating (national, local or even international) may accept papers with little evaluation; I know journals that accept manuscripts without any evaluation at all. Anyway, a scale is needed for journals, whether the IF or anything else; this should be resolved by some means.
@Fathi: Why do you care where another person's article is published? As I have said, there are many fine papers in lower-ranked journals, and, moreover, the high-ranking ones do not inevitably publish the "best" papers (whatever that means - I suppose that depends on your field of research). We read the paper - not the journal.
I can understand that you yourself might want to publish in Nature, Science, etc. etc. I simply have a preference for the journals that my colleagues read - in that sense such a journal is providing wider access to the community I wish to reach.
Having said that, I always post pre-submitted papers on the arXiv archive: that's where most of my colleagues go every day to see what people have just written. If I have an error or a misunderstanding I am sure that one of my esteemed colleagues will jump on me and I will then correct that version of the e-print. Not all Institutes "allow" their staff to put up an e-Print before formal acceptance by a journal. Such Institutes manifestly have no faith in the people they hired!
I think most physics is now done via such e-print archives. By the time the paper appears in a learned journal it's almost out of date, or better things have appeared in response to the preprint.
There is, in addition, the issue of control of your material, via the imposition of Draconian copyright conditions, by some publishers. Why should a person have to ask the publisher of my paper for permission to publish MY diagram or table? Many leading journals have such conditions - I, and many of my colleagues, avoid such journals no matter what their ranking is. There is a major conflict of interest in that - and it may well spell the end of the journal as we know it. But that's another story.
I might add that in my forthcoming book I have avoided, wherever possible, using diagrams from such journals :)
@Bernard: I do agree with you that we read the paper more than we look at the journal name; however, a specialist journal (in quality and impact) is more important than a general one. There are many parameters to be considered other than the IF when we submit our paper to a journal. High-profile journals will inevitably have more exposure and weight than low-impact journals (we all know that the IF is not the only parameter we look for). We know that the IF is a widely criticized parameter, but it does have some utility in providing a ranking of journals. My concern is how to decide where to send your paper, and where your work is best placed. It is therefore best to know the spectrum of possible journals and the bibliometric measures that are used. So authors need to look not only at the IF, but also at the appropriateness (or fit) of their work to the journal, the time to review and final decision, the cost of publication (free or expensive) and the nature of access. All these factors influence the final choice of destination for the manuscript; with them, the author can judge where the work is best placed.
@Fathi: But isn't the key point to get a maximum readership?
If the goal were to maximise the number of publications then the low-impact journals would presumably do the job, but then your readership and citations would be almost zero. Maximising readership essentially makes the arbitrarily generated impact factor less useful (or a waste of time).
Doesn't every field have its own specialist journals that are widely read? Nature has a high impact factor, but that comes from a diversity of fields - it's hardly a specialist journal! In your field, are the most widely cited papers the ones published in Nature and Science?
I should confess that I do not like any form of indexation! We are not described by arbitrarily conceived numbers.
In my opinion, it is very good to publish a paper in reputed journals, but many researchers look for articles and not the journal name, so in my view there is absolutely no problem with publishing an article in a low impact factor journal.
I agree partially with Mr. Abhijeet Baikerikar that one should publish research articles. The impact factor often need not be the criterion; what is important is quality research papers which attract citations from quality researchers. In Civil Engineering there are many reputed journals with an impact factor of less than one. If you look at Cement and Concrete Research (Elsevier) and similar journals, the impact factors have increased over a period of 5-6 years and are now in the range of 3-4. Journals with an impact factor above 5 are very rare in Civil Engineering. Publishing papers in such journals is very difficult; the acceptance rate is only 15-20%, so one paper out of 5-7 will be accepted.
The worst situation we are facing in India is the mushrooming of articles in online journals. By paying just Rs 1000-1500, an article will be published or accepted within 10-15 days. The quality of these papers is very average (and what is "average"?). Articles of average quality can be published online in many journals in India. In many of these journals there is no proper review process and review reports are not sent. I reviewed a few papers during 2012-13 for online journals and did not receive even one reply to my review comments. Many times articles have been published even without informing the reviewers. It has become just a business! The quality of the publishers, the members of the editorial board and their reputation in research, etc., matter. How many researchers look at these aspects? They just want papers to be accepted and published as early as possible. Such papers will have neither quality nor citations.
Mr. Bernard J. T. Jones' views are very valid. I am again emphasizing that your publication should have good citations by reputed researchers in your area. I agree that some online journals do have good impact factors in the range of 3-4 and their papers do have some citations. All this happens because average researchers and students concentrate only on online journals for their literature review. I have seen many Ph.D. theses (which I have evaluated) wherein most of the referenced papers are from online journals which are hardly 2-6 years old; their research is completely based on what is available in these online journals (a very limited view of current research). Today, accessing reputed journals is not that difficult. We should first consult all the reputed, established journals for a thorough literature review. Journals published for 20 to 40 years, and journals indexed in SCI and SCIE, should be considered from the point of view of quality. Many institutions in India prescribe quality journals for publication in many areas, and more information in this regard can be found via Google. Good wishes.
Measuring a journal’s impact
Different metrics were presented:
https://www.elsevier.com/authors/journal-authors/measuring-a-journals-impact