Evaluation is very important for all stakeholders (universities, research centers, and the researchers themselves) as a means of assessing the performance of researchers.
Could you please attach the evaluation forms that your organization uses?
Research assessments are increasing in number and scope around the globe. Nations and institutions want to inform funding decisions, report to key stakeholders, and identify opportunities for efficiencies and growth. This issue explores various methods for carrying out research assessments, such as using a combination of peer review and bibliometric indicators to assess research quality and achieve long-term economic development.
Official evaluation of a person's scientific achievements is like official communication about delicate events that are not widely known. These evaluations are prepared by superiors, and their outcomes are often distorted by local and subjective interests.
There are two kinds of evaluation of a researcher. One is informal: when he or she publishes papers, books, and the like, and receives the continuous, if periodic, rewards that follow (awards, etc.). The other is formal, with a pro forma committee and everything else. Usually "evaluation" is understood as the second, but it is the first that serves as the indicator and basis for the second. If so, the real evaluation is the informal one, upon which the formal one is carried out.
Individual researchers can be evaluated mainly from two angles:
from the point of view of the researcher’s professional career,
from the point of view of the institution or stakeholder interested in how best to achieve its mission (university, research institute, foundation, etc.).
Please see the original article, which I found very helpful.
There is no perfect single measure. Different measures are given in the attached link:
Measure: N, the total number of papers published
Measure: N/T, the ratio of total papers published to the time in which they were published
Measure: P, the profit generated by patents or products that result from the research.
Measure: h, the h-index
Assessment of Impact Approach
Measure: I/R, the ratio of the impact of the work to the amount of resources used to generate it. (This measure is preferred by the handful of researchers who actually achieve something.)
Reasoning: The "impact" of research on the field provides a good overall measure of value. One might ask questions such as: Did the work affect others? Is the work cited and used? However, to make comparison fair, one cannot compare research performed by a team of twenty-four researchers at a large industrial lab, using equipment that costs ten million dollars, with research performed by an individual working on weekends with no staff. Thus, to make a fair assessment, compute the ratio of impact to resources.
Actual Facts: Both impact and resources are difficult to measure. More importantly, it is unfortunate that "big science" often appears to have more impact simply because it generates more publicity.
Warning: Note that although this measure is the fairest, it is unpopular. Administrators dislike it because the amount of funding -- the item they wish to emphasize -- appears in the denominator, meaning that a researcher who achieves a given impact with fewer grant funds receives a higher evaluation under this measure! Most researchers dislike it as well because it emphasizes output over input -- it is much easier to obtain funding than to produce results that have any real impact.
The attached document (RESEARCH EVALUATION METRICS, 122 pages) from UNESCO is very good reading and discusses many measures for evaluating researchers. I will actually use it next semester in the graduate course I teach, Research Methods and Seminar.
Here is an excerpt from the introduction part:
The first metric is citation analysis. For research evaluation, many other indicators were needed; citation analysis along with peer review ensured better judgment in innumerable cases. Something more was needed to make the judgment largely foolproof, and the advent of the World Wide Web (WWW) provided the opportunity. Quite a number of indicators have come up based on data available on the WWW. This module dwells on a number of methods (both old and new) available for research evaluation. The module comprises the following four units:
Unit 1. Introduction to Research Evaluation Metrics and Related Indicators
Unit 2. Innovations in Measuring Science and Scholarship: Analytical Tools and Indicators in Evaluation Scholarship Communications
Unit 3. Article and Author Level Measurements
Unit 4. Online Citation and Reference Management Tools
According to Ellie Fossey, Carol Harvey, Fiona McDermott and Larry Davidson (see attached paper), the main criteria for evaluating quality are interconnected with standards for ethics in qualitative research. They include principles for good practice in the conduct of qualitative research, and for trustworthiness in the interpretation of qualitative data.
We have a national research evaluation form (in Persian) which is used in promotion committees. If it helps you in any way, please send me a private message and I will send you the form.
Dear Ljubomir, you have given the link to the same document I indicated in my previous post! I checked again, and my UNESCO link given on the previous page works fine.
Here are some other links which you may find useful:
Yes I did, dear Behrouz, since I used it in August this year in another thread regarding the meaning of the RG score. As this document discusses the RG score among other researcher metrics, I have attached it here.
To evaluate any researcher, we have to know his scientific background, the main purpose of his research, and the aim of his research plan, together with the outcomes of his work.
Unless we know and remain aware of his work, it will not be right practice to proceed with an evaluation of the researcher.
With such a wrong approach we become responsible for opening a path of criticism toward other researchers; whatever the standing of the researcher, this carries unhealthy criticism, and in such a case the evaluation may not be honest and impartial.
Prof. Roland Iosif Moraru has mentioned three major aspects to be considered during the evaluation of a researcher. We can definitely add a few more, or maybe choose one as the most relevant point. According to the elaboration included in the question, this approach is reasonably suitable, because as stakeholders Prof. Hazim Hashim Tahir listed agencies like universities, research centers, researchers, etc. I feel this does not depict the full scenario. Most universities and research centers are government-aided; that is, they are run using taxpayers' money. Therefore, the extent to which the research output is relevant to national development should also be a major consideration in such evaluation.
Many thanks for this excellent question. My RG colleagues have given valuable answers, but I would like to add an "unusual" one. At the top of the research hierarchy are the persons classified as full professors, who usually get "fat" salaries plus many privileges. For the sake of organizational, national, and international interests, it is timely and appropriate to evaluate these persons thoroughly and fairly. The mechanism for this evaluation is to invite, say, 3 independent, experienced foreign scientists to read the published research papers of a given professor and then to examine that professor in an oral "grill". If the professor is up to the challenge, fine; if not, the whole research record ought to become questionable. This way "true" professors will be sorted out from "fake" professors.
Had I been given the authority, I would have done that a long time ago, because I have seen, in some third-world countries, professors who did not deserve their titles at all.
Promotion committees evaluate researchers based on the number of articles, patents, and grants, the number of graduate students supervised, quality of teaching (student and faculty evaluations), etc.
The attached link is informative:
Whilst metrics may capture some partial dimensions of research ‘impact’, they cannot be used as any kind of proxy for measuring research ‘quality’. Not only is there no logical connection between citation counts and the quality of academic research, but the adoption of such a system could systematically discriminate against less established scholars and against work by women and ethnic minorities. Moreover, as we know, citation counts are highly vulnerable to gaming and manipulation. The overall effects of using citations as a substantive proxy for either ‘impact’ or ‘quality’ could be extremely deleterious to the standing and quality of UK academic research as a whole.
Overall, the academic community as a whole should resist the adoption of citation metrics as a means by which to make conclusions about either research impact or research quality. They are not logically connected to either issue, contain systematic biases against different researchers and are all too easily manipulated, particularly by corporate rankings providers. They should certainly not become institutionalized in national, international or institutional practices.
It is, of course, difficult and time-consuming to assess academic research by having experts read it and carefully evaluate it against complex and demanding criteria, ideally under conditions of anonymity. That is as it should be. That is the whole point about good academic work and this cannot be automated or captured by present, or even future, citation counts. Simply because the market produces products, and because some people use them, does not mean that these are the things that we actually want or need for the purposes we have in mind. If we really are committed to using research assessment practices to fund the best quality, most innovative and most publicly engaged work, then citation counts are not the way to do it. Rather, we will end up funding not just those whose work is genuinely transformative, original and field-defining (assuming these qualities earn them high citations), but those who are best at self-promotion and rankings manipulation, and who are privileged by existing structures of prejudice.
We can evaluate a researcher by examining his publications: the standard publications in journals, but also publications in networks like RG.
In recent years, work at the university has become more intense, and I feel that it is a rat race. There are some areas of science in which it is impossible to accelerate (three-year-long test cycles in crop production, long-term cycles aimed at breeding new varieties); these simply cannot be done faster, because the nature of plants is guided by its own rules. Indeed, some work can be carried out in greenhouses or in the laboratory, but the seasons remain. It is not enough just to announce an acceleration of work and to push scientists, especially young ones, toward acquiring degrees; reductions take some time. Of course, we remember when obtaining a doctoral degree took 8-10 years, and habilitation 12-19. But it is not just about the rush; it is above all about the resilience of the individual and an innovative approach that can contribute to the greater development of science itself. It must also be said that work at universities used to be fairly quiet and secure. Now that is changing. If you are innovative, have innovative ideas, and can work in interdisciplinary teams, you are needed; and this baggage of knowledge, combined with ideas, accelerates your career. We constantly need to be creative.
But we cannot forget about older scientists: they are also needed at the university and are no worse than the younger ones. Without this group of workers, which constitutes the core, it is hard to imagine universities. We cannot reduce the scientific achievements of these people to a single regulation, nor expect on that basis that they will suddenly work differently. That also takes time.
The evaluation of researchers cannot be the same in all countries and institutions. There are countries with established scientific research credentials and professionalism, and others not yet there. Moreover, within the same country some universities and laboratories are more prestigious and better funded than others, and are thus able to achieve interesting results and projects earlier. And there are younger or poorer countries that are still apprentices in scientific research. So not all researchers, even if they publish in the same journals, can be evaluated with the same metrics. Likewise, not all research topics can be accepted by high-standard journals, even if the researcher has spent many years of his or her professional life on them and contributed added value to knowledge, even a small one, for example through student supervision. Add to that the organization and running of research policies, which differ from one country to another and are independent of the researcher's will. So, as in any job, a researcher should be evaluated on his performance relative to the conditions of his workplace. However, since science is global, a global mechanism might have more credibility for differentiating between countries and their workplace policies. That might reduce corrupt practices in the education and research professions in some of them, and help increase the credibility of their education policies: when people know that they will face international exams or evaluations, the temptation toward corruption in scientific jobs will decrease.
How and when do we evaluate a researcher?
When: The Professorial Promotions Committee evaluates the qualifications of candidates recommended for appointment or promotion to associate or full professor. The committee may look into the quality of the papers, the quality of the journals, the number of citations, quality of teaching, administrative work, grants, etc.
How: In general, researchers can be evaluated using the following metrics:
In addition to various simple statistics (number of papers, number of citations, and others), Publish or Perish calculates the following citation metrics:
Hirsch's h-index
Proposed by J.E. Hirsch in his paper An index to quantify an individual's scientific research output, arXiv:physics/0508025 v5 29 Sep 2005. It aims to provide a robust single-number metric of an academic's impact, combining quality with quantity.
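As a concrete illustration, the h-index can be computed in a few lines. This is a minimal Python sketch (the function name and data format are my own, not code from Publish or Perish): given a list of per-paper citation counts, h is the largest number such that h papers have at least h citations each.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still supports an h of 'rank'
        else:
            break  # citations only decrease from here on
    return h
```

For citation counts [10, 8, 5, 4, 3] this gives h = 4: four papers have at least four citations each, but not five.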
Egghe's g-index
Proposed by Leo Egghe in his paper Theory and practice of the g-index, Scientometrics, Vol. 69, No 1 (2006), pp. 131-152. It aims to improve on the h-index by giving more weight to highly-cited articles.
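Following Egghe's definition, g is the largest rank such that the top g papers together have at least g squared citations. A minimal Python sketch (names and data format are my own):

```python
def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites  # cumulative citations of the top 'rank' papers
        if total >= rank * rank:
            g = rank
    return g
```

For [10, 8, 5, 4, 3] this gives g = 5 (30 total citations, at least 25), versus h = 4 for the same record, because the surplus citations of the highly cited papers count toward g.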
Zhang's e-index
The e-index as proposed by Chun-Ting Zhang in his paper The e-index, complementing the h-index for excess citations, PLoS ONE, Vol 5, Issue 5 (May 2009), e5429. The e-index is the square root of the surplus of citations in the h-set beyond h², i.e., beyond the theoretical minimum required to obtain an h-index of h. The aim of the e-index is to differentiate between scientists with similar h-indices but different citation patterns.
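Zhang's definition translates directly into code. A self-contained Python sketch (names and data format are my own) that recomputes h and then the excess citations in the h-core:

```python
import math

def e_index(citations):
    """Square root of the citations in the h-core beyond the h^2 minimum."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
    excess = sum(ranked[:h]) - h * h  # citations beyond the minimum for h
    return math.sqrt(excess)
```

For [10, 8, 5, 4, 3], h = 4 and the h-core holds 27 citations, so e = sqrt(27 - 16), about 3.32.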
Contemporary h-index
Proposed by Antonis Sidiropoulos, Dimitrios Katsaros, and Yannis Manolopoulos in their paper Generalized h-index for disclosing latent facts in citation networks, arXiv:cs.DL/0607066 v1 13 Jul 2006. It aims to improve on the h-index by giving more weight to recent articles, thus rewarding academics who maintain a steady level of activity.
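In the authors' formulation, each paper's citations are down-weighted by age before the h-style threshold is applied; gamma = 4 and delta = 1 are the parameter values suggested in their paper. A Python sketch under those assumptions (function name and data format are my own):

```python
def contemporary_h(papers, current_year, gamma=4, delta=1):
    """Contemporary h-index over age-weighted citation scores.

    papers: list of (citations, pub_year) tuples.
    """
    scores = sorted(
        (gamma * cites / (current_year - year + 1) ** delta
         for cites, year in papers),
        reverse=True,
    )
    h = 0
    for rank, score in enumerate(scores, start=1):
        if score >= rank:
            h = rank
    return h
```

With these parameters, a 2015 paper with 10 citations scores 4 * 10 / 10 = 4 in 2024, the same as a brand-new 2024 paper with a single citation.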
Age-weighted citation rate (AWCR) and AW-index
The AWCR measures the average number of citations to an entire body of work, adjusted for the age of each individual paper. It was inspired by Bihui Jin's note The AR-index: complementing the h-index, ISSI Newsletter, 2007, 3(1), p. 6. The Publish or Perish implementation differs from Jin's definition in that we sum over all papers instead of only the h-core papers.
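The calculation itself is a simple age-weighted sum. Here is a Python sketch of the all-papers variant described above (taking a paper's age as current_year - pub_year + 1, which is an assumption on my part; names and data format are my own):

```python
import math

def awcr(papers, current_year):
    """Age-weighted citation rate, summed over all papers.

    papers: list of (citations, pub_year) tuples.
    """
    return sum(cites / (current_year - year + 1) for cites, year in papers)

def aw_index(papers, current_year):
    """AW-index: square root of the AWCR."""
    return math.sqrt(awcr(papers, current_year))
```

A 2020 paper with 10 citations contributes 10 / 5 = 2 to the AWCR in 2024, while the same citations on a 2023 paper would contribute 5.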
Individual h-index (original)
The Individual h-index was proposed by Pablo D. Batista, Monica G. Campiteli, Osame Kinouchi, and Alexandre S. Martinez in their paper Is it possible to compare researchers with different scientific interests?, Scientometrics, Vol 68, No. 1 (2006), pp. 179-189. It divides the standard h-index by the average number of authors in the articles that contribute to the h-index, in order to reduce the effects of co-authorship.
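The correction is just a division by the mean author count of the h-core papers. A Python sketch (names and data format are my own):

```python
def individual_h(papers):
    """h-index divided by the mean author count of the h-core papers.

    papers: list of (citations, n_authors) tuples.
    """
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    h = 0
    for rank, (cites, _) in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
    if h == 0:
        return 0.0
    mean_authors = sum(authors for _, authors in ranked[:h]) / h
    return h / mean_authors
```

A researcher with h = 3 whose h-core papers average three authors each gets an individual h of 1.0.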
Multi-authored h-index
A further h-like index is due to Michael Schreiber and first described in his paper To share the fame in a fair way, hm modifies h for multi-authored manuscripts, New Journal of Physics, Vol 10 (2008), 040201-1-8. Schreiber's method uses fractional paper counts instead of reduced citation counts to account for shared authorship of papers, and then determines the multi-authored hm index based on the resulting effective rank of the papers using undiluted citation counts.
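Schreiber's fractional counting can be sketched as follows. This is my own reading of the definition, with my own names: each paper advances the effective rank by 1/authors, and hm is the largest effective rank still covered by that paper's undiluted citation count.

```python
def hm_index(papers):
    """Schreiber's hm: h-style threshold over fractional paper ranks.

    papers: list of (citations, n_authors) tuples.
    """
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    r_eff, hm = 0.0, 0.0
    for cites, authors in ranked:
        r_eff += 1.0 / authors  # fractional paper count
        if cites >= r_eff:      # undiluted citations cover the effective rank
            hm = r_eff
    return hm
```

Three papers with citations (10, 8, 5) and author counts (2, 2, 1) give hm = 2.0, whereas the plain h-index of the same record is 3.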
Every six months our institute evaluates the performance of each researcher. However, there is no hard and fast rule or format for it. We make a sheet listing all the work finished during this period; after that, our respective supervisor verifies it, and then we get final approval from the head of the research committee.
For metrics to be understood and trusted, clarity into how they work and are calculated is important. In the initial introduction of CiteScore we shared the methodology behind the calculation, today we take transparency a step further by sharing the underlying data. What does this mean? All Scopus subscribers are now able to view the source documents and citations for the CiteScore value...
This is a good tool for evaluating researchers under Scopus!
Who, how and when do we evaluate a researcher?
Because the evaluation committee should be highly qualified first; then the rest of the questions can be answered easily if you find the right persons.