From Google Scholar, you can get three metrics, all of which go towards measuring a scholar's productivity:
*** TOTAL CITATIONS: This matters up to a point, in that somebody with 1000 total citations is probably being listened to by his/her colleagues more than somebody with 10 total citations ... It measures QUANTITY ...
*** H INDEX: Of course, quantity only goes so far, since you can rack up many publications in journals that are almost instantly indexed by Google Scholar's super-efficient web spiders! The h-index -- the largest h such that h of your publications have at least h citations each -- is an attempt to get a handle on QUALITY.
*** I10 INDEX: This simply counts the publications that have >= 10 citations each. It was introduced to filter out lower-impact publications.
In all of them, the general idea is the same: the more citations you have, the higher the quality of your scholarly work ... There is an ongoing debate about whether SELF-CITATIONS should be counted or not, but, due to the difficulty of finding a reasonable formula for CO-AUTHORED papers (with, say, 5 or 10 co-authors), excluding them would create too much turbulence, so the issue is left alone ...
All three of these metrics are available on Google Scholar, although i10 is only two years old. I expect other metrics to be introduced in the future ... because there are still many gaps ...
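To make the three definitions concrete, here is a minimal Python sketch (not Google Scholar's actual code, and the citation counts are hypothetical) that computes all three metrics from a list of per-paper citation counts:

```python
def total_citations(citations):
    """Sum of citations over all publications (the QUANTITY metric)."""
    return sum(citations)

def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of publications with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

# Hypothetical author with seven papers:
papers = [120, 45, 33, 10, 9, 4, 0]
print(total_citations(papers))  # 221
print(h_index(papers))          # 5  (five papers have >= 5 citations each)
print(i10_index(papers))        # 4  (four papers have >= 10 citations)
```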
Rafael - I know that a lot of people decry Google Scholar in favour of ISI, Scopus etc. Personally, I'm fine with Google Scholar. It covers a wider publication scope - books, book chapters, theses, dissertations, public reports etc. I like to know that I'm also being cited by these types of forums (often quite high quality) rather than just the conventional academic journals.
h-index makes a lot of sense when assessing a researcher's personal achievement in terms of citations generated.
The more widely used Impact Factor measures citations of journals, not of individuals; thus it can only show indirectly how much the publications 'around' a researcher's article (whether or not you include the article itself) have been cited.
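For contrast with the author-level metrics above, here is a sketch of the classic two-year Impact Factor formula as reported in the JCR; the journal and its numbers below are made up:

```python
def impact_factor(cites_in_year_y, citable_items):
    """Two-year Impact Factor for year Y: citations received in Y to items
    the journal published in Y-1 and Y-2, divided by the number of citable
    items it published in Y-1 and Y-2."""
    return cites_in_year_y / citable_items

# Hypothetical journal: 900 citations in 2013 to its 2011-2012 papers,
# of which there were 300 citable items.
print(impact_factor(900, 300))  # 3.0
```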
SNIP is another index, based on a more elaborate scheme. Its calculation takes into account characteristics such as the average length of reference lists in a research area, thereby adjusting for the probability of being cited in a given field of study. If you can access ScienceDirect, have a look at the original article about SNIP (Moed 2010).
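In rough outline (the real definition is in Moed 2010), SNIP divides a journal's raw impact per paper by the citation potential of its field, so the snippet below is only a schematic illustration with invented numbers:

```python
def snip(raw_impact_per_paper, field_citation_potential):
    """Schematic SNIP: a journal's raw impact per paper, normalized by the
    relative citation potential of its field (which reflects, e.g., how
    long reference lists typically are in that field)."""
    return raw_impact_per_paper / field_citation_potential

# Two hypothetical journals with the same raw impact per paper:
print(snip(2.4, 0.8))  # field with short reference lists -> 3.0
print(snip(2.4, 1.6))  # field with long reference lists  -> 1.5
```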
Dean, a quality measure of what? The researcher, or the journal they publish in?
And, Dean, thank you for reminding me of the original question. We should first discuss the idea of quality of research. There is a well-known definition of quality in engineering and software (fields I am familiar with), but it might not fit academic research.
Thanks for getting back, Michael. From my perspective, the quality benchmark of a metric is the extent to which it highlights a researcher's overall portfolio. While many would argue that ISI is more rigorous and scholarly than Google Scholar, I would argue that it is actually too rigid: it only counts article citations for journals that it 'chooses' to include in its catalogue. Personally, Scopus meets my middle-ground criteria better, and presents, again to me, a more representative 'quality' picture of my wider work.
I also feel the h-index and Google Scholar are good indicators for researchers. It is very difficult to assess the quality of research using indicators. We have to develop new methods/indicators based on the quality of research and its impact on societal development, industrialisation, etc.
To some extent it indicates influence (especially for more experienced scholars), but not always research quality: e.g. would anybody be happy to be cited 100 times if every citing author stated that a grave mistake was made in the paper? Also, if you manage to publish 10 papers that are each cited 10 times, your h-index is 10, but if you publish 5 that are each cited 1000 times, your index is 5...
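Tiia's two cases can be checked directly against the h-index definition (citation counts hypothetical, and the function is the same sketch as earlier in the thread):

```python
def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    return max([0] + [r for r, c in
                      enumerate(sorted(citations, reverse=True), 1) if c >= r])

print(h_index([10] * 10))    # 10 papers, 10 citations each  -> 10
print(h_index([1000] * 5))   # 5 papers, 1000 citations each -> 5,
                             # despite 5000 total citations
```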
Valid points, Tiia - but those examples probably reflect 'minority' occurrences. Most scholars err on the side of 'generous' when it comes to citing other people's work. I wouldn't go out of my way to cite someone's work just to be critical of it; I'm usually citing work because it supports my worldview or findings. Most people's citation indexes, i.e. h-index scores, tend to reflect their overall activity without 'wild' fluctuations - such as a low h-index combined with a few highly cited articles.
Actually, in my area some authors have published papers that have been cited a lot and others that have not been cited much at all: e.g. search for Vahlne, JE (Jan-Erik). His h-index in Google Scholar is 21, and his most cited paper has been cited 6530 times (his 21st most-cited paper has been cited 26 times). So I would say that his influence is much larger than you would expect from his h-index.
You have to admire Google for challenging the dominance of Reed Elsevier and Thomson Reuters in providing metrics to the academic community - most of all for providing citations for articles in Scholar. The h-indexes generated on the back of these citations are inflated, because the number of publications Google automatically indexes is significantly higher than either Scopus or WoS. However, the ubiquity of Google and its free-to-access model make it a natural standard. Interestingly, Google also produces an h5-index for journals, presumably challenging Thomson Reuters' Journal Citation Reports, which if you dig down goes to a surprising level of detail [ http://scholar.google.co.uk/citations?view_op=top_venues&hl=en&vq=soc_libraryinformationscience ]. I would say that it is as good as any, accepting the inflation problem. The point to be made is: where else would you find this level of data, available to everyone equally?
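As far as I understand it, the h5-index is just the h-index restricted to items published in the last five complete years; here is a rough sketch (the tuple format is my own assumption, not Google's):

```python
def h_index(citations):
    # Largest h such that h items have at least h citations each
    # (same sketch as earlier in the thread).
    return max([0] + [r for r, c in
                      enumerate(sorted(citations, reverse=True), 1) if c >= r])

def h5_index(papers, current_year):
    """h-index over a venue's articles from the last 5 complete years.
    `papers` is a list of (publication_year, citations) tuples."""
    recent = [c for year, c in papers
              if current_year - 5 <= year <= current_year - 1]
    return h_index(recent)

# Hypothetical journal evaluated in 2013, so the window is 2008-2012:
venue = [(2008, 40), (2010, 25), (2012, 12), (2006, 300)]
print(h5_index(venue, 2013))  # -> 3 (the 2006 paper falls outside the window)
```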
Well put, Matt. Tiia - I have no problem with you pointing out such good examples of where the system may fail some. However, it's still back to my point that such instances are far from commonplace.
For those who like altmetrics or alternatives to citation-based impact measures, see this link [ http://blogs.bmj.com/bmj-journals-development-blog/2013/10/21/redefining-impact-altmetrics-now-on-journals-from-bmj/ ] - a very interesting development from a big OA player.
Vitaly, this is actually very true. An example is the three fields I work in: VLSI, Circuits and Systems, and Power Electronics. The impact factor of the IEEE journal on Power Electronics is almost 5.0. On the other hand, the IEEE Circuits and Systems journal is about 2.5, and VLSI is about 2.0. Yet, ironically, it is sometimes more difficult to publish in VLSI.
This is due precisely to the phenomenon you mentioned ... VLSI is much narrower than Power Electronics, and the impact is lower since there is a smaller audience. But this doesn't mean that the field is less important!
With the increasing focus on impact beyond scholarly journals, Google Scholar offers some advantages. It picks up citations in higher-degree theses available in repositories - while most PhDs go on to publish in journals, they don't all do so. GS also picks up citations in government and regulatory body reports, practice guidance, and commissioned reviews and reports. These latter will often include positive mention of a work and its relevance to policy and practice. In my field and other similar fields (e.g. education, nursing), these citations are a very good measure of impact beyond the academic community.
@Vitaly, since some fields are narrower and some are wider, this really doesn't create a problem. Every scholar is compared within his/her field anyway ... For example, an h-index of 10 means something totally different in Physics vs. Mathematics or Electrical Engineering. Your own field will usually account for the parameters associated with it ... Do you agree?
=====================
@Liz, I have noticed that too. GS does a very good job indexing documents that are not necessarily journals and/or conference proceedings ... This possibly helps in quantifying the citations metric ...
The general assumption is that IMPACT is proportional to the NUMBER OF CITATIONS. In practice, however, quite a few other factors determine somebody's IMPACT ... The SPEED at which impact is reflected in the scholarly metrics is another challenge ... Sometimes it takes years for your work to turn into IMPACT ... I always remember the shocking story I read about how it took 47 additional years after Euler's death for his work to be completely published!
Although this doesn't totally apply to our CENTURY :) it still shows that IMPACT isn't necessarily perfectly aligned with CITATIONS ... and that remains true even today :)
Why not keep science free from any fashionable or VIP-like scheme? Usually, researchers publish (or don't publish) findings or new knowledge following their lab's policies. So let science be read by all, and be reviewed and debated by experts. By debated, I mean that any controversial research work could be responded to and weighed through research papers and proofs, and vice versa, and the debate can go on over the accuracy of the research or the statement of new findings. I think that an author from a big lab in a large university can easily be cited by his colleagues and gain many points, even if his work isn't very popular, whereas a (perhaps unknown) researcher from a small lab, presenting possibly interesting findings, might not be cited and his research might never become popular. On the self-citation issue, I think that if current results follow from the same author's previous work, it is natural that the previous work must be cited and referred to, so that scientists can follow the idea, have full knowledge of the work, and consequently assess it, use it, cite it, reproduce it or improve it.
@Fairouz, I fully agree with your last remark regarding self-citations. They are very difficult to avoid when working on a consecutive series of projects.
I also partly agree with your critique of the 'commercialization' of scientific impact. Only, as long as our institutions call for impact, young scientists are bound to follow the rules. And these institutions do not want to anger their academics or mistreat them deliberately; they are part of the global competition for recognition and research grants, for which decisions are made on the basis of impact.
For some time I have had the idea for a question at RG at the back of my mind: "What is the scientific impact of the IF?" Or has this already been asked here?
Dear Michael, thank you for your support. My opinion on scientific impact does not follow the current market rules, through which scholars need recognition and research grants. I do not criticize the recognition of researchers' value, but the competition-based system, which is not necessary for good research or good dissemination of knowledge; I think many researchers worldwide have already criticized this scheme.
@Vitaly, not only do I agree with that, but I also know many scholars who really published only one or two things, didn't go for "quantity" at all, and yet had an incredible amount of impact.
Bernhard Riemann primarily did his great work in geometry. He really contributed only one thing to number theory, in a short paper: his famous Riemann Hypothesis. This "short" paper is one of the Clay Institute's Millennium Prize Problems!!
If we had had the h-index concept in the 1850s and this had been Riemann's only paper, Riemann would have an h-index of "1" ...
Sure, this is an overly dramatic example, but it shows that IMPACT is sometimes difficult to quantify ... no matter what metric you use ... In particular, it is highly field-dependent ...
@Vitaly, sure, somebody might have a paper that looks like it will change the world, yet becomes OBSOLETE in 10 years, having collected 1000 citations by the time we realize the idea will never work. A perfect example is the SUPERCONDUCTOR work of 20+ years ago. Now it is obvious that superconductors are very hard to produce at room temperature. So, does this mean that academics who did superconductor research had zero impact? Exactly the opposite. Until it was fully realized that room-temperature superconductors were difficult to construct, an incredible number of related advances were made ...
Another example (my favorite) is Fermat's Last Theorem (FLT). It was finally proven about two decades ago, more than 350 years after it was conjectured. Let's assume it had turned out to be wrong. Would that mean it had no impact? Exactly the opposite. The theory built while trying to prove/disprove FLT had possibly the highest impact on Number Theory (maybe on the entire field of Mathematics).
So, in response to your comment "but science will always be unpredictable": I agree, but I also want to note that IMPACT IS PERMANENT. The discovery of a conflicting scientific advance doesn't negate somebody's impact ... although it might STOP further impact ...
Yes, and they have an interesting new feature, which I miss on RG: Scholar Library. It comes with the powerful Google search features, personally arranged by topics that you can define according to your interests. At initiation, Scholar Library imports all of your citations that are available on Scholar.
Scopus is another useful site for importing and ordering citations - although it tends to draw mainly on journal-based sources, whereas Scholar draws from a wider base of related literature.
Correct, Vitaly - there is certainly more delay with Scopus. It does pick up well on 'in press' and 'advance access' citations, though. Personally, the delay doesn't worry me - although I do sometimes wonder why the 'premier' ISI service often takes 3 months or more after publication.
Well, Vitaly, you are a Big Name for me, because you respond to our questions and comments. The Big Names you mean don't. They are too busy administering and managing, and if you ask them, they would put you through to their 'assistants', who actually do the job (and maybe are active on RG).
An interesting and generous recent thread, Michael and Vitaly. RG is quite immediate - and that is a benefit - but, as Vitaly highlights (present company aside), the 'audience' can be mixed. I agree though, Michael: those who contribute to RG and try to assist are to be commended, whether big name or not. I've certainly had several 'interesting' and potentially useful 'personal' contacts that may prove very productive, which I wouldn't have got through 'conventional' networks previously.
Google Scholar is the most easily available tool; it automatically updates everything about the impact of one's papers, and most of a paper's profile is available under a single umbrella.
Google Scholar is an easily available tool, and under a single umbrella it lists all of one's publications along with their citation counts, h-index and i10-index. So Google Scholar is definitely a good indicator of the quality of research activity and influence.
I totally agree with all the above comments. Its easy, fast, up-to-date information makes it less complicated to assess the research activities of an individual scientist. I think it might also surpass other systems in the future and will be used by lay people to measure or assess the productivity of a scholar.