CON: its updates are slow (conference proceedings may take six months or more to appear on Scopus), and Elsevier's own publications are probably updated faster than non-Elsevier ones (e.g., ACM and IEEE)
CON: it covers just a portion of the scientific literature (e.g., not all IEEE conferences are indexed)
CON: it's a commercial database (there is no way to petition for conferences or journals to be included, inclusion decisions are less than fully transparent, and access is paid)
Agree with Maurizio Naldi. Google Scholar could become a strong competitor if it added an impact factor alongside paper citations, and ResearchGate could too if it added automatic citation reports and the h-index.
1) Partial inaccuracy of the DB when reporting citations (with specific reference to citations included in conference proceedings).
Consider this recent example: when the DB became the official source of bibliometrics for evaluating Italian academics, authors found out that only a few of the citations to their works had been correctly indexed. Scopus was consequently flooded with correction requests.
In my university, papers published in journals with an impact factor in the JCR (ISI) are given 3 points, while papers in journals indexed by databases like Scopus are given 2 points.
May I ask which other main publication DBs or rankings your institution considers in the same tier as Scopus (and to which 2 points are assigned)?
@Antoni's headline question: Because something is "easy to use" does not mean that it is measuring what it is supposed to measure. For example, many citations are negative citations (to incorrect results) but are counted in favour of the original author! It's similar to Facebook only having "like" buttons: How do you express sympathy to a person whose sister has just died? Press "like"???
There are so many CONS compared to PROS that the answer to your original question is a resounding "NO!". See http://en.wikipedia.org/wiki/H-index
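For anyone following the h-index link above, here is a minimal sketch of how the metric itself is computed (the citation counts below are made-up illustration data):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one author's papers:
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3 (three papers with >= 3 citations each)
```

Note how insensitive the number is: the heavily cited first paper contributes no more to the h-index than the third one, which is part of why it is so easy to compute and so easy to criticise.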
How about including a count of the number of patents or useful products developed? How about including a count of Master students and PhD students successfully supervised? Or can librarians not see such data?
You are right, Ian Kennedy, but we still need an internationally agreed scale with clear specifications. For instance, you may supervise postgraduate students whose results are never published worldwide. On the other hand, producing many papers without any quality, use, or interest is also questionable.
Until about 3 years ago, staff members in my university were a bit confused about the best journals to publish in. Now the regulations regarding promotion are very clear: if you publish in an IF journal (as indexed by the JCR of Thomson Reuters), you get 3 points and even a financial reward. Fewer points are given for PubMed, Scopus, CAB, Ulrich, Index Copernicus, etc.
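As a toy illustration of such a points scheme (the tiers and values mirror the rules described above, but the mapping is of course institution-specific and hypothetical):

```python
# Illustrative only: promotion points per indexing tier, as described above.
POINTS = {
    "JCR_impact_factor": 3,   # journals with a Thomson Reuters JCR impact factor
    "second_tier_index": 2,   # PubMed, Scopus, CAB, Ulrich, Index Copernicus, ...
    "unindexed": 0,
}

def promotion_points(papers):
    """Sum points over a list of (title, tier) pairs."""
    return sum(POINTS[tier] for _, tier in papers)

papers = [("Paper A", "JCR_impact_factor"), ("Paper B", "second_tier_index")]
print(promotion_points(papers))  # -> 5
```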
To Ian: I do agree with your criticism of the traditional approach to evaluating citations.
I would add that some discrimination should be introduced to distinguish between citations from other authors and self-citations.
As far as I know, neither Scopus nor ISI explicitly makes such a distinction (Scopus does have a command to exclude self-citations and display only citations from third parties, though the author profile only reports the total number of citations received).
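The distinction being asked for is simple to express in code. A minimal sketch, assuming each citing record carries an author list (the record structure here is hypothetical, not the Scopus API):

```python
def split_citations(paper_authors, citing_papers):
    """Partition citations to a paper into self-citations (at least one
    shared author) and third-party citations."""
    own = set(paper_authors)
    self_cites = [p for p in citing_papers if own & set(p["authors"])]
    third_party = [p for p in citing_papers if not own & set(p["authors"])]
    return self_cites, third_party

citing = [
    {"title": "X", "authors": ["Naldi M.", "Rossi A."]},  # shares an author
    {"title": "Y", "authors": ["Smith J."]},
]
self_c, third_c = split_citations(["Naldi M.", "Bianchi L."], citing)
print(len(self_c), len(third_c))  # -> 1 1
```

The hard part in practice is not the set intersection but author-name disambiguation, which is exactly where the databases struggle.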
Here is a revolutionary, *non-serious* counter proposal: Evaluate researchers only on the research funds they manage to bring in each year! This will be quick and easy for the admin staff to gather and to understand!
Agree with Ian; attracting research funds is a good way to evaluate researchers. But consider working in low- or middle-income countries, where research funds are limited and your chances of getting external funds MAY be governed by politics!
I do agree that fundraising should be included in the list of indicators to evaluate a researcher's vitae.
Making it the only indicator would risk driving academics away from their "institutional job" (i.e. teaching and publishing) and urging them to focus mostly on "marketing" activities.
Anyway, though Ian's counter-proposal was deliberately thought-provoking, it raises a whole new issue: the relationship between academia and the real world out there...
Shouldn't academic research be evaluated for its ability to have an impact (which could be measured by the funds such research attracts), and not only for being academically sound and rigorous?
Shouldn't we come down from the ivory tower and do research with strong implications for practitioners, rather than something that merely fits the (sometimes purely aesthetic) guidelines of top-ranked journals while being virtually useless?
This issue could be good for a discussion thread of its own...!
I do agree with Ghezzi with regard to research activities in some countries and universities, but the situation in others (universities and countries that do not spend any money on research activities) is completely different.
Since SCOPUS is run by a publishing company, it cannot reasonably be assumed to be independent of commercial interests. In another strand of worries, researchers using Mendeley are unhappy about the acquisition of that service by Elsevier.
SCOPUS operates a business model designed as if we researchers could assess the impact of our papers by ourselves, individually. Not much different from financial rating agencies having commercial interests in banks, and vice versa.
I favour @Ian's idea of funding as the major point of evaluation of a researcher's work. This might be weighted according to a country's research funding.
@Michael, Hey! That was a joke of mine, to point out that a single, easily calculated metric is of little use. Otherwise, I agree with what you say about SCOPUS.
Personally, I find that Scopus provides a better-balanced picture of researchers' achievements than ISI. The reasons are its larger journal database (20,000 vs. 10,000) and its recalibrated SNIP & SJR factors, which take account of some disciplines' systematically higher citation rates (e.g., life sciences over humanities).
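The idea behind field-recalibrated indicators like SNIP and SJR can be illustrated in drastically simplified form: divide a journal's raw citation rate by what is typical for its field. This is only a sketch of the principle, NOT the actual Elsevier algorithms, and the baseline numbers are made up:

```python
# Simplified field normalization, in the spirit of SNIP/SJR
# (not the real algorithms). Baselines are hypothetical avg cites/paper.
FIELD_BASELINE = {"life_sciences": 12.0, "humanities": 1.5}

def normalized_impact(cites_per_paper, field):
    """Citations per paper relative to what is typical for the field."""
    return cites_per_paper / FIELD_BASELINE[field]

print(normalized_impact(10.0, "life_sciences"))  # ~0.83: below field average
print(normalized_impact(3.0, "humanities"))      # 2.0: well above field average
```

A humanities journal with 3 citations per paper thus outscores a life-sciences journal with 10, which is exactly the recalibration effect described above.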
Recently I tried to compare three retrieval systems (ISI, Scopus, Google Scholar) in a book chapter "Education and literature for development in responsibility" regarding interdisciplinary (developmental) global studies.
@ Maurizio Naldi's three cons: at present I do not see why these three arguments would not apply to ISI as well. There is a vast discussion going on about Thomson's impact factor; just one example: http://am.ascb.org/dora/.
@ Ian. Good one. Especially because many say that getting funds is one of the most important aspects of a researcher's efficiency. One might think that for such people getting funds is more important than the research itself. What's really funny is that researchers are forced to fight for funds even when the research does not actually require any additional money. It's getting absurd.
Scopus is far from perfect, but it's currently the best available author-bibliometric (authormetric?) tool that I know of.
ISI/ResearcherID is very slow, very narrow in what it indexes, and consequently has low citation counts. I think it waits for journal articles to appear in final published form (with page numbers, etc.), while Scopus indexes early-access online versions.
Google Scholar "My Citations" is very fast, very broad in its indexing, but also unreliable. It often finds unreviewed or unpublished-but-online "articles" that aren't really articles, and counts citations to/from them, which is very prone to "gaming". It also often double-counts citations, in that two versions of the same article are sometimes counted as two citations for all work cited therein.
Scopus is faster and broader than ISI, and just as reliable in my experience. It is much more reliable than Google Scholar, but not much slower. Having said that, I'd be interested to hear if people think there are better alternatives out there.
Regarding Google Scholar content: for some years now I have worked with "Publish or Perish" (PoP, by Anne-Wil Harzing), a tiny but powerful free software package that analyses Google Scholar data and computes a variety of indices.
Source: http://www.harzing.com/pop.htm.
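Among the indices PoP reports alongside the h-index is Egghe's g-index, which rewards highly cited papers more. A minimal sketch of that computation (illustration data only; this is the standard definition, not PoP's code):

```python
def g_index(citations):
    """Largest g such that the g most-cited papers have >= g^2 citations in total."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Same hypothetical author as in the h-index example above:
print(g_index([25, 8, 5, 3, 3, 1, 0]))  # -> 6, versus an h-index of 3
```

The gap between the two numbers shows how much of this author's impact is concentrated in one heavily cited paper.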
For the field of development and globalisation journals, I compared the three "achievement"-metric systems in two handbook chapters (covering journals and authors, respectively):
* Education and literature for development in responsibility – Partnership hedges globalization
* Quality assurance in transnational education management – the developmental “Global Studies” Curriculum.
These might give some insight into how different the results can be, depending on what "quality" of work is meant.
@Ian, hey, that comment (May 5) slipped through. Joke understood (British humour rules), but it is still one important (though not the only) measure of achievement. Of course, funding proposals are not papers, but in the end you have to deliver anyway (and acknowledge the funding institution). I hope you are addressing proposal writing in your research manuscript. Without funding a lot less research would be undertaken, and there is not much material about it.
I have a 12 page chapter on funding (boring stuff, but necessary); a 5 page chapter on planning; a 14 page chapter on generating the research question; a short chapter on experimental design and 42 other supporting chapters.
Careful what you wish for. The closed-access nature of SCOPUS / Elsevier means that outside a university or research institution you may not be able to access or update your information [ https://www.researchgate.net/post/Who_owns_your_profile ].
Here is where we remain confused: to date there is a lot of misunderstanding about indexing. Many new journals get indexed in SCOPUS and PubMed immediately, whereas many others don't, even after a couple of years of continuous publishing.
There appears to be bias, so for now we should be very careful when evaluating a researcher's profile based on performance in just one indexing agency.
I'm not sure that SCOPUS is equally good in all disciplines. From using it myself, my impression is that it is much better for quantitative and economistic social sciences than it is for sociological and qualitative work.
Indexing of academic and scholarly content must follow clear criteria and must be uniform across journals. Many new indexing agencies simply index any journal that comes along, as is evident from the huge emergence of journal publishers and journals. This can be detrimental in the long run.
Some Medline-source journals have not been updated in Scopus for a year. I have sent several messages to customer support at Scopus. They answered that they cannot add articles from Medline-source journals to Scopus and advised me to contact Medline directly.
While SCOPUS does not cover all journals, it can still provide a useful snapshot.
Some things to consider:
- Be sure to compare "like for like". If you want to compare two researchers, compare them using the same methodology, e.g. SCOPUS against SCOPUS, or Google Scholar against Google Scholar, rather than one researcher's impact in SCOPUS and the other's in Scholar.
- Some researchers decide where to submit a manuscript based on where the journal is indexed (e.g. they do not bother submitting to journals not indexed in SCOPUS, WoS, etc.), which can influence what productivity looks like at first glance.
- Importantly, no database can capture a researcher's mentoring ability or support for junior academics, which are essential to advancing research in an area; using databases to evaluate a career or productivity captures only one part of the picture of a well-rounded academic.
Scopus simply doesn't do justice to the diversity and range of high-quality sociological and applied social science (e.g. social work) disciplines. Google Scholar is far more comprehensive and up-to-date, and yes, it would be the clear winner if it could exclude self-citations.
I teach that you should only cite the best. If used honestly and responsibly, self-citation is not a sin! If you have a good paraphrase from your own works, or quote yourself, you simply have to own up to it. You are no different to any other scholar being cited. You are a member of the research community. The only difference is that you have more intimate knowledge and faster access to your own materials than to another scholar's works. This is a minuscule bias.
In my experience, unnecessary self-citing to skew citation statistics dishonestly is not common. Weak self-links should be spotted immediately by any competent referee.
Sure, self-citations result in convergent references rather than in divergent references, but who are we to know what our readers want? Surely our readers just want the best references?