I have seen a lot of criticism of every scheme used to evaluate a researcher. Whether it is the impact factor, h-index, g-index, or RG Score, the community has not agreed on any score so far. It is clear from all the discussions that none of the existing scores is perfect. The question, then, is how to evaluate a researcher at the time of promotion, awarding grants, hiring, etc. My question to the members of this forum is: can we suggest a new scheme or score which can be used to evaluate a researcher based on his/her scientific contribution?
As everyone on this blog knows, there is no single metric to evaluate a researcher. On top of that, it depends on what you are evaluating them for… a postdoc? Tenure? Promotion? Election to an honorific society? A grant? A prize? There are many metrics, including the number of publications, citations, journals used, people trained, prizes won, etc. During the course of my career I have observed that many of the things we scientists value, and need in order to work, be recognized, and be valued, have become extremely political and based on whom you know and how well you are funded. Those with good funding (hard to get these days), the right cheerleaders, or powerful backers definitely have an advantage.
So what to do?
What do I do to "evaluate"?
First of all, to be clear, in my mind the QUALITIES that make a good researcher are:
1. The ability to ask important questions that sometimes take you out of your technology-centric comfort zone. Don't get stuck doing things "because you can". Try to create a body of work that moves your field forward. Techniques are tools… nothing more… unless developing the technique is the subject of the work.
2. Dedication to conducting the research in an honest, carefully controlled manner. Make sure that methods are thoroughly described and detailed in writing, and that the data are not cherry-picked. Make sure that someone else can read the protocol (yes, keep detailed written protocols) and get the same results. Ask people to give you feedback on your papers and grants.
3. Realize that people do compete, some do cheat, and some do steal. This is reality. Accept these facts but remain as collegial and open as you can. Don't spend your career complaining; be aware and move forward.
4. Learn to write your paper up clearly and honestly, even if it takes 15 drafts. Papers are your "face" to the world of science. People need to understand what you did and to "read" not only your text but your tables and figures. Don't sweep things under the rug. Don't be afraid to speculate in the Discussion. High-impact journals are fine but not necessary. If the study is good, it will rise to the top. Don't judge people by the journals they publish in.
5. As a reviewer, evaluate what is in front of you, not the "could have done, would have done, needs to do". Is the study well conducted, with all the controls, clearly presented and discussed?
6. Do everything you can to instill points 1-5 above in your trainees. In addition, teach them how to deal with failure. Teach them how to READ papers, and how to WRITE protocols, experimental results, and papers. Stress the importance of citing and discussing the work of others (your work did not come out of the blue; it is built on the ideas and findings of others). Work with them on their presentations and rehearse them. Teach them how to answer questions. THEY are your progeny!
Given points 1-6 above, my preferred way to judge a person is to read their papers, then pick up the phone and make confidential calls to the mentors, collaborators, PIs, etc. with whom this person has worked. I ask about creativity, dedication, honesty, bench skills, people skills, stamina, weak points, strong points, etc. After the reading and 4-5 calls I know everything I need to know. I hear common themes about strengths, weaknesses, and how good the person is. Then it is up to me to decide. I agree that this takes time and thought, but in the end it is well worth it. All evaluations are subjective... so do your homework.
I feel any performance assessment strategy should take the following points into consideration:
1. A focus on outcomes rather than process.
2. A focus on good vs best practice, where good practice is preferred and best practice is defined by the highest level of practice identified in the benchmark.
3. Assessment of continuous improvement.
4. Benchmarks that measure functional effectiveness rather than countable outputs.
5. Adjustment for inequalities in institutional features, so that benchmarks can be expressed as proportions, times, and ratios.
Performance assessment should consider:
-impact of research outcomes
-relevance of the research work
-contribution of research findings to knowledge
A major aspect should not be seen only in the number of publications, citations, or grants. A good researcher should be able to explain his work to anyone and, as a consequence, spread the newly gained knowledge everywhere (e.g., teaching, public outreach, etc.).
The judgement parameters for applied and basic research must be different and specific.
It seems from the above responses that there is no way to evaluate a scientist, at least in terms of numbers.
Giving everything a numeric value would be the ideal way for external institutions (departments, universities, governments) to judge researchers, but I think that is not the way forward within a research community.
As a matter of fact, many factors (e.g. country of research, lab equipment, luck, personal situation) play a role in how successful somebody is or can be. But for me, although I am at an early stage of my research career, things like recognition from other researchers in your field, loyalty towards colleagues, the ability to work in a team, and public engagement cannot be easily assessed, yet are at least as important as impact factors, h-indices, and RG Scores.
In my opinion, there are two general types of evaluation of research work. 1) How researchers consider the overall contribution of colleagues in the same or similar research areas: this one is not so complicated; you simply know who has ideas and who is professional by the quality of their papers (not only those in SCI journals) and many other things. 2) How the decision makers, the people in ministries that financially support research, evaluate our work: this is problematic. They use so-called quantitative criteria: the number of papers in the top 10% of IF journals, the number of papers in journals in the top 50% of the list within the research area, and the number of papers below the 50% threshold. For each contribution they assign a score, e.g. 8, 5, and 3 points respectively, and then total up your contribution. And this is the same or similar for all research areas in the group of environmental sciences. This is simply not a correct system of evaluation, because it neglects many other aspects of research work: success in research projects, number of citations, the contribution of each researcher to working with young colleagues... So it should be a combination of several aspects of work, not only papers.
The impact factor measures a journal's impact based on citations of the papers it published over the last two years. Hence it is really a measure of a journal's reputation, not of a researcher's.
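To make the two-year definition concrete, here is a minimal sketch of the calculation (the function and variable names are my own; the real JCR computation has additional rules about which items count as "citable"):

```python
def two_year_impact_factor(citations_received, citable_items, year):
    """Sketch of a two-year impact factor for `year`.

    citations_received[y]: citations received in `year` by items the
    journal published in year y.
    citable_items[y]: number of citable items the journal published in y.
    """
    cites = citations_received[year - 1] + citations_received[year - 2]
    items = citable_items[year - 1] + citable_items[year - 2]
    return cites / items

# Hypothetical journal: 120 + 90 citations in 2014 to its 2013/2012
# papers, over 60 + 40 citable items published in those two years.
jif = two_year_impact_factor({2013: 120, 2012: 90},
                             {2013: 60, 2012: 40}, 2014)
# (120 + 90) / (60 + 40) = 2.1
```

Note that nothing in this quotient refers to any individual author, which is the point being made above: it characterizes the journal, not the researcher.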
The h-index gives an overall impact of a researcher's work, based on his or her papers published on or after 1 January 1996.
Citation numbers give the impact of a particular paper authored or coauthored by a researcher, based on citation data beginning in 1995.
Perhaps a combination of the h-index and citation numbers would better reflect the impact made by a particular researcher on his or her own field of study. There would be different average values for these measures depending on one's field of study.
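For readers unfamiliar with it, the h-index is straightforward to compute from a list of per-paper citation counts: it is the largest h such that the researcher has h papers with at least h citations each. A minimal sketch (the function name is my own):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the rank-th paper still has >= rank citations
            h = rank
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times: four papers have at least
# 4 citations, but not five papers with at least 5, so h = 4.
```

The combination suggested above would then be the pair (h_index(citations), sum(citations)), which separates a few highly cited papers from a broad body of moderately cited work.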
@Tan, I agree with your view that the h-index plus total citations provides quite a reasonable measure of scientific performance. The major criticism of h-index + impact factor is that it does not reflect your individual contribution: for example, one person works in a big team and makes a small contribution to a large number of big papers (a middle author on most papers), while another person has his own group and publishes as corresponding or first author. How do we compare these two types of researchers?
No. No. No.
All the points raised by the various researchers above are very pertinent, and there are more points/ways into which the career, progress, and achievements of a researcher can be divided and assessed. To me it looks as if this is more of a statistical calculation, on a pointwise scale, of all the factors a researcher has gone through in his career. A small schematic of what comes to my mind is proposed below, in terms of activities and the scores assigned to them:
(i) Co-curricular, Extension and Professional Development Related Activities
- Institutional co-curricular activities for students, such as field studies/educational tours, industry-implant training, and placement activity (5 points each); max 10
- Positions held/leadership roles played in organizations linked with extension work and the National Service Scheme (NSS), NCC, NSO, or any other similar activity (10 points each); max 10
- Student- and staff-related socio-cultural and sports programmes, campus publications (departmental level 2 points, institutional level 5 points); max 10
- Community work promoting values such as national integration, environment, democracy, socialism, human rights, peace, and scientific temper; flood or drought relief; small family norms, etc. (5 points); max 10
Maximum aggregate limit: 20
(ii) Contribution to Corporate Life and Management of the Institution
- Contribution to corporate life in universities/colleges through meetings, popular lectures, subject-related events, and articles in college magazines and university volumes (2 points each); max 10
- Institutional governance responsibilities such as Vice-Principal, Dean, Director, Warden, Bursar, School Chairperson, or IQAC Coordinator (10 points each); max 10
- Participation in committees concerned with any aspect of departmental or institutional management, such as the admission committee, campus development, or the library committee (5 points each); max 10
- Responsibility for, or participation in, committees for student welfare, counselling, and discipline (5 points each); max 10
- Organization of a conference/training as Chairman/Organizational Secretary/Treasurer: (a) international, 10 points; national/regional, 5 points; (b) as a member of the organizing committee, 1 point each; max 10
Maximum aggregate limit: 15
(iii) Professional Development Related Activities
- Membership in profession-related committees: (a) at national level, 3 points each; (b) at state level, 2 points each; max 10
- Participation in subject associations, conferences, and seminars without paper presentation (2 points each); max 10
- Participation in short-term training courses of less than one week's duration in educational technology, curriculum development, professional development, examination reforms, or institutional governance (5 points each); max 10
- Membership/participation in state/central bodies/committees on education, research, and national development (5 points each); max 10
- Publication of articles in newspapers, and so on
The scores assigned are empirical and can be adjusted as required.
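The arithmetic behind a scheme like this (per-activity scores, a per-item cap, and an aggregate limit per category) can be sketched as a simple clipped sum. This is only an illustration of the proposal above, with hypothetical numbers:

```python
def category_score(items, aggregate_limit):
    """Score one category of activities.

    items: list of (points_earned, per_item_max) pairs; each item is
    capped individually, then the category total is capped at the
    aggregate limit.
    """
    capped_total = sum(min(points, cap) for points, cap in items)
    return min(capped_total, aggregate_limit)

# Category (i): four activity types capped at 10 each, aggregate limit 20.
score = category_score([(15, 10), (10, 10), (7, 10), (5, 10)], 20)
# 10 + 10 + 7 + 5 = 32, clipped to the aggregate limit of 20.
```

The clipping is what makes such schemes debatable: beyond the caps, additional activity of the same kind contributes nothing, which is exactly the kind of distortion discussed elsewhere in this thread.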
I think no well-proven system exists to evaluate a researcher. In my opinion, one can combine all the existing methods of evaluation, like IF, h-index, g-index, etc.
I raised the above question because sometimes you have to evaluate junior colleagues. Despite your good intentions and honest evaluation, you still get criticized. Thus it is important to evolve criteria acceptable to the community, preferably with less subjectivity.
There is no parametric assessment for the evaluation of a researcher. Several qualities need to be taken into consideration. Quality is a relative term; it is not absolute. At present the research community considers the Nobel Prize one of the highest awards for scientific contribution, but not a certification of the best researcher. Can we evaluate beauty? Similarly, can we evaluate a researcher? The answer is yes/no. In computers we have 1 and 0, but nowadays fuzzy logic is coming into the picture. One can develop some methodology which can further evolve in the future. Science is rapidly changing; evaluation methods need to change continuously, and a scientist should be evaluated for the quality of the science carried out at the given time, not by a committee but by the community. For every scientific principle there are two or more schools of thought, so whenever we evaluate, criticism will always be there. My observation is that when a student gets an A grade, he/she says "I got an A grade"; if the student gets a B or any other grade, the remark is "the teacher gave me this grade".
The best researcher is the researcher who gives most benefit to humankind.
I agree with Setyawan Widyarto. The existing indexes only measure the impact of publications and work among scientists; they are just numbers, and do not consider the usefulness of the research or its specific benefits to people, the environment, etc.
Hi to all,
I do not agree that it is good to measure the usefulness of research by its direct benefit to people, the community, or ecosystem services. In that way you favour applied research and push down some basic research (e.g. taxonomy) which is already significantly neglected. This could be a very dangerous approach that would only encourage decision makers to give money solely to research that brings a direct benefit. It is good only in the short term; in general, it pushes down the complete system.
The benefit to humankind should not be reduced to either funding or money.
Hi, I think there are many methods to evaluate a researcher, because we cannot all have the same vision, but we can bring all the visions together with the aim of arriving at a good evaluation.
But basic research also provides a benefit: for example, it may eventually facilitate the description of populations and the differentiation between pathogenic and non-pathogenic organisms, or between invasive species and those which are not. Never ignore basic research, which provides the foundation of knowledge. What is difficult is evaluating which research is useful and which is not.
I agree that the indexes are not perfect. On the other hand, if the work of some researcher is really good, then it will draw responses, citations, etc., and that will increase the h-index, citation counts, and so on.
Of course, these can also be increased by very bad research, when others cite a particular work because it was extremely bad; and there are cases when journals print "reviews" of their own articles from the previous year and thereby increase their impact factor.
Research is the highest expression of the intelligence and creativity of humanity. It is very easy to assign a score to a process or a thing, but it is not fair to do this to a person. Scientometrics seems to transform researchers into goods on the stock market. The h-index, or any other index (there will be more and more...), should be only a part of the evaluation. The capacity for solving problems, project management efficiency, and creativity should be evaluation criteria as well.
Given our present, *super-transitional*, state of science, the current evaluations are deficient in one critical aspect, which, however, makes all the difference.
We should rank much higher than the current practice the more radical research, addressing more strategic scientific questions, *independent of the current support it gets*. This is because during an unprecedented transitional stage we are experiencing now, most of the incremental research, eventually, will play hardly any role in future science.
However, as has always been historically the case, the problem with such 'radical' evaluation is the lack of qualified people able to make such judgement at the time. Still, it appears that it is not that difficult to have a two-stream evaluation (and scientific journals and conferences to support them), setting in a separate stream *qualified* but much more radical research work. Without such separation, we are, at a great cost to the society, unnecessarily prolonging the present transitional period.
That is a very difficult question, Gajendra; there is no one single answer! You must have seen the sometimes heated discussions about the RG Score itself. Although RG has made improvements, they still have a long way to go.
Going back to your main question, the best way to evaluate a researcher is to evaluate his scientific contribution. How we do this is by raising questions like the one in this thread, comparing people's answers, and seeing whether some sort of scheme can be formulated. Remember, RG was only able to improve its score by listening to us discussing it!
@Gajendra
I completely agree with Issam that "that is a very difficult question, Gajendra; there is no one single answer!" My gut feeling is that you might have been on selection committees evaluating researchers for appointment or promotion, and you would agree with me that, putting aside publications, impact factors, etc., a good researcher would always be able to prove himself/herself. Who actually gets appointed or promoted is another matter.
I have to disagree with most of the previous comments. We evaluate everybody, every day. In fact, we evaluate students, colleagues, and even friends. For that we use the available criteria (wisdom). Frequently we do not even have quantitative indicators to evaluate the people around us, and yet we do it.
Thus, my point is that whenever we have quantitative indicators, we should use them. However, they should be used with care, since all of them have shortcomings. Indeed, the h-index is biased towards longer careers.
Whatever the metric used, the most productive researchers are always placed well in the evaluation in their respective fields.
In one respect the answer is simple: when somebody has to be promoted (or not). There are usually some well-known criteria to be satisfied; sometimes they form an important part of a country-wide bill, otherwise they are collected in university documents (say, a statute). One has to satisfy all of these requirements, or at least a significant part of them, to qualify. In practice, such regulations are applied either strictly or elastically, but never below the minimum standard defined in the mentioned documents.
My private opinion is that established bodies, like Scientific Councils, are best qualified to judge the quality of a researcher. Their members look at the candidate from various perspectives, and each one of them values something different in the list of achievements. Finally, they cast their votes and the majority decides. Of course, this procedure is by no means perfect, even at the Nobel Prize level. But do we have anything better today? (It is like democracy: it is awful, but we haven't invented anything better yet.)
"Evaluation" inescapably has to deal with numbers, and this is most likely the future of scientific evaluation. What we observe is in reality an increasing commitment to methods with roots in the discipline of social networks. The citation index, the h-index, being a hub in a publication network (not yet), and so on all belong here. Such methods have one important feature: they seem to be objective. At the same time, they take into account true scientific productivity and novelty, paying much less attention to "management skills" or success rates when fighting for grants.
Well, I have gone through the various answers given by RG users. My personal opinion about the best parameter to evaluate a researcher is the independent thinking of that researcher, because the Ph.D. is the transition from dependent student to independent investigator. Publications, theses, and citations are dependent (guided) products.
Research is done in pursuit of truth. At the recruitment stage itself, only those who have a passion for research-cum-teaching should be recruited. For the promotion of a researcher, in my opinion, he/she should be assessed on the number of research publications in refereed journals and the impact of those publications (i.e., citations) during the period of assessment, plus a presentation of his/her research proposals for the next five years. Again, research proposals and publications should be of immediate relevance; it could be in any discipline of science. For the promotion of a teacher who cannot invest time in research, I believe the best way (to avoid bias) is an annual, anonymous assessment by students, i.e., by confidential voting.
Nowadays it is very difficult to identify a good researcher, because several factors influence his/her authorship. So it is better to check the impact factor of the journals that published the research work, and the journals' editorial process.
All kinds of evaluation are more or less adequate for, call them, established/experienced researchers. The real problem is with young researchers, still before their first publication. It is obvious that not all Master's diplomas are equal, even in a very narrow discipline. Who will be hired: the smart one, the one ready to work hard, even overtime, or the son/daughter of a "big name"? No matter what the evaluation procedure is, some luck is always helpful. Contrary to pure business affairs, the dress code is less important for a prospective researcher; nevertheless, the overall impression is very often a deciding factor. Formal rules are only one side of the story. The opinion of your mentor (if there is any), letter(s) of recommendation, and similar things play their role as well. These will probably never be eliminated, no matter how subjective they might be.
A very nice, tricky question, Gajendra. In my opinion, for evaluation at induction it is the depth of knowledge of the subject and the aptitude to work hard, to the limits of the time needed, to pursue research honestly. In-service evaluation for juniors could additionally cover problem diagnosis, realistic target setting and accomplishment, research output including publications and presentation of that output, behaviour and coordinated teamwork, and established linkages with national stakeholders. For seniors, additionally: management capabilities, project hunting and execution, team building and HRD, sharing credit, facing criticism, and international linkages with stakeholders.
For me this actually has a pretty straightforward answer. Many scientists have done many great things and discovered everything we are working on right now. A good scientist should be the expert in his/her chosen niche in the field. What separates some from the rest of the pack is whether they are able to explain how their research will not only move their field forward, but can translate almost directly into the practice of medicine.
Lastly, what I consider a great scientist is someone who might be the top expert in your niche, or some other niche, who takes the time to talk to you if you are genuinely interested in what they study. I guess they are not afraid of passing knowledge on to the next generation, are not afraid of someone stealing their ideas, and, if you come up with something from a conversation, they offer a co-authorship.
So I guess: an ethical, non-selfish person and a great teacher, all rolled into one.
I can still remember my first time meeting one of the top antibody scientists in the world. I was, at the time, struggling with my staining, and he explained the entire process to me, from using antibodies for immunocytochemistry to how to make my own antibody, including the limitations.
I will end with this: a great scientist understands, recognizes, and acknowledges the limitations and interpretation of the results, while mastering the design of the experiment so that whatever result you get is a publishable result.
Thank you, Dr. Gandhi, for saying what I was trying to get across. The point being: evaluating your fellow scientists should be taken seriously, because we are all working toward the same goal.
A non-helpful review, such as one where no experiments are suggested, is just a waste of everyone's time and effort.
Dear William Lester, I fully agree with your view. The quality of a good scientist, or human being, is more important than research publications. A service-oriented concept of science is important for the growth of science.
@Nitin Gandhi, you have one position in your group and ten candidates; how will you select? Please do not compare researchers with GOD; they are all normal human beings. Most researchers are doing research for their bread and butter, not out of deep interest in science. Thus evaluation is important, as you get too many qualified candidates in research, since this field provides job opportunities. This is the reason that the institutes/countries who pay more may attract talent.
A superb question. As mentioned above, people talk about contribution, but can you tell me what the measure of someone's scientific contribution is? Does that mean you are talking about publications? I do not agree with that. Good-quality research does not always lead to a good-quality publication. There are many publications in small journals which came out long before their Nature version. Does that mean the authors had not done enough research before?
In my opinion, researchers cannot be judged by their publications or impact points. They can be well judged only by spending some time around them. For example, when buying a new cell phone we know the features and reviews (the publications, in the case of a researcher) but not the performance of our particular piece (performance on your project, in the case of a researcher).
So I feel there is no single measure for the quality of a researcher, but researchers can be measured on the basis of their thinking towards a specific project. A good researcher will always follow a positive approach and the least-error method, whereas an average researcher will follow any path leading to the aim.
Thanks,
Sudhanshu
@Nitin, I do not think a brain scan would help. It does not matter much why you are doing science; what matters is what you contribute to science. If a highly intelligent person is lazy or does not want to contribute, it is of no use. At the same time, a person who joins science for a job and works sincerely to contribute will be more useful. Evaluations should be based on contribution, not on level of intelligence, in the same way that in education we work hard to pass exams.
Rather than with a calculator (h-index, mean impact factor of the papers, sum of them, ...), I believe it makes a lot more sense to evaluate a scientist in a personal interview, asking incisively about what he/she has done, ideas about what he/she will do in the future, and the impact he/she thinks the work will have. This can show how smart and thorough the candidate is.
Dear Garcia-Sanz, I agree with your view. But if you keep everything subjective, it may have the following types of problems: i) if the person judging the researcher is not fair, it may be difficult to do justice; ii) if you do not follow any criteria, people will criticize your decision; iii) evaluators are not perfect; they may make bad decisions.
@DR RG: We will still have to evolve a scoring system to evaluate all contributions to science (deciding what parameters to cover) in a real sense, where each activity is given some marks/score. But can you define the level of contribution to science, and the scores/marks for each? "Working sincerely" may lead to contributions across a wide spectrum of disciplines/areas of science, with no time left to concentrate and produce visible, quality publications/products. Again, how can you assess the usefulness?
I feel that as long as factors like caste, religion, region, favouritism, and cronyism prevail in science, an unbiased assessment of a researcher's contributions will remain a distant dream.
If I must make a decision about a collaborator, I will look at the applicant's scientific papers, especially those in the field of the future activity. The applications the applicant has executed are also of interest. In most cases the decision will be right, because the leader of a team is interested in better team results.
But I wish to mention that for the highest scientific positions, the choice is, as a rule, not made on the basis of a proper evaluation of the applicants' scientific output. There, reviewers only form the groundwork for the decision of a scientific board whose members are not familiar with the concrete research area and decide on the basis of opinions and feelings they have heard, not on real quality. The scientific output of applicants is very diverse; they are not gods but different fruits, and the comparison is too difficult! Members of the deciding board are also influenced by their own friendships and enmities. In the end, the most competent people, the reviewers, are kept far from the final decision, and the decision makers are the less competent board members!
The best way to evaluate a researcher is to begin by evaluating the purpose of the question. The appropriate criteria will emerge out of that exercise.
If you are selecting a candidate to become a colleague, then I would put rapport on the top of my list.
A related question I have always found fascinating and never had the platform to ask. If you would not mind giving a quick response, or a long one, it would be much appreciated. I would like to have enough responses to do a short chalk-talk focus topic. Thank you in advance for your time. Your input is invaluable.
Link
https://www.researchgate.net/post/What_are_the_main_qualities_and_how_long_do_you_spend_reviewing_an_average_manuscript
I strongly think that the evaluation of researchers should be based on the reproducibility, application, and authenticity of their work, irrespective of IMPACT FACTORS. Sometimes we find data from high-impact journals that are not reproducible. So what is the use of publishing them?
Actually, in INDIA we expect at least 3 publications from a Ph.D. Why not one quality, reproducible publication? It is time to think seriously, as huge funds are at stake...
I thank Dr. Raghava for starting this discussion.
Perhaps the following quotation should be considered:
"There are two ways to do great mathematics. The first is to be smarter than everybody else. The second way is to be stupider than everybody else -- but persistent."
Raoul Bott (1923 - 2005)
I thank Dr Gajendra Pal Singh Raghava for starting this discussion, and in my opinion I completely agree with the answers of Dr Issam Sinjab: that is a very difficult question and there is no one single answer! The best way to evaluate a researcher is to evaluate his/her scientific contribution, such as all scientific publications (books, monographs, scientific papers, plenary lectures, meeting abstracts, etc.), impact factor, h-index, RG Score, and finally the citation count of scientific publications according to the Science Citation Index. Also, the capacity for solving problems, project management efficiency, and creativity should be evaluation criteria as well. A good scientist should be the expert in his/her chosen niche in the field. Finally, my point is that whatever the metric used, the most productive researchers are always placed well in the evaluation in their respective fields.
To evaluate a researcher, one need not look at things like the impact factor of the journals where he or she has published, nor even at the person's h-index; those are to be used for different purposes. Just pose some problems and see how they rise to the situation and try to find a solution. All the other paraphernalia can be considered later.
There are also a lot of different qualities of a researcher. If you are looking for a leader, who will build and direct a new or existing research group, that is one set of qualities that can be found in the researcher's CV and experience. If you want someone to be in a supporting role, who will follow directions and work effectively with a team, that is again a different set of qualities. This is one reason why building research impact scores is so hard, because the purpose of each score is different.
In my opinion, the best way to evaluate a researcher is to assess whether he/she is competent enough to work independently as well as in a team environment. Initially, either of the two is acceptable, but gradual improvement toward being fit for both is mandatory.
I agree with Georgi. In fact, no matter how many papers you have published or how high your RG score is, what matters most is the future you help create. As people trained in bioscience, we do research in botany, in biochemistry, in immunology... fields where we really want to help the world and humanity, meaning many of us want to make a contribution that changes the whole world for the better.
My opinion is: do not worry about evaluation by others; keep our minds on the things we are researching, and do our best to change the world.
Thank you.
Further, reading a researcher's body language is also one important way to judge whether he/she is worthwhile.
What is the final conclusion after so many responses? One should finally draft a set of criteria and implement it universally to avoid any bias. Otherwise this discussion is like beating empty husks.
I don't agree with Muralidhar Katti on a "final conclusion after so many responses". I want to draw your attention to a very important problem, the "Research-Practice Gap", at the link http://www.jnd.org/dn.mss/the_research-practice_gap_1.html
"There is an immense gap between research and practice. I'm tempted to paraphrase Kipling and say "Oh, research is research, and practice is practice, and never the twain shall meet," but I will resist. The gap between these two communities is real and frustrating ... "
Let's view our discussion from the point of view of this gap. I chose the four phrases from this discussion that are most relevant to the "Research-Practice Gap" problem, with a view to reducing this gap.
_______________________________________________________
Setyawan Widyarto
"The best researcher is the researcher who gives most benefit to humankind"
Pilar Goni
"... existing indexes only measured the impact of publications and work among scientists, are just numbers, but do not consider the usefulness of the research or the specific benefits to people or the environment, etc"
Devang Pandya
"No research is fruitful if it is not useful for the public in some or the other way... "
Pinakin Karpe
"I strongly think that evaluation of researchers should be based on reproducibilty, application and authenticity of his work irrespective of IMPACT FACTORS. Sometimes we find data from high impact journal but not reproducible. So what is use of publishing it ????"
_______________________________________________________
We have learned researchers' opinions about researchers. However, it would be interesting to know developers' opinions about researchers. Let's consider a concrete research area, e.g. "Computer Science", and assume that we pick the best research from this area at the moment. Now I propose for your consideration four phrases on what developers think about researchers, which I chose from the questions "What should researchers know about software development practice?" and "Why don't developers use the best research on software development?" at the links https://www.researchgate.net/post/What_should_researchers_know_about_software_development_practice
https://www.researchgate.net/post/Why_dont_developers_use_the_best_research_on_software_development
_______________________________________________________
Milan Tair
"Every researcher who wishes to contribute to practice through theory should be involved in development processes from the problem analysis, problem definition, first draft brainstorming sessions, design, development itself, testing, implementation etc."
Basit Shahzad
"All those who work in this domain should know the SDLC (Systems Development Life Cycle) well. Through understanding about the project management concepts and risk management is also worthy is available."
Robert Standefer
"In my experience, developers don't use the best research because there's a mismatch between the research and their day to day life. Developers with more autonomy may adopt practices and such from cutting-edge research, but for the most part, devs work for businesses, and businesses are more interested in ROI (Return On Investment) and such. There has to be a level of practicality in the research that will immediately and positively impact the developer's work."
John Sanders
"If you look at this discussion as a whole you can see why academic prescription tends to fails. Many different views some proposals but no evidence. Much research is minute in scope, some of it will creep into products but not dramatically - usually as an extra to an existing product ..."
_______________________________________________________
Now I want to summarize. Research is carried out in a "research coordinate space" that in most cases does not match real-life practice; therefore, even the best research can fail. To find a link between researchers and developers, the concept of "Transitional Development" was proposed during the discussion "Why don't developers use the best research on software development?". One can say that Transitional Development is advanced research (perhaps Research++, by analogy with C++) done as not-for-profit development. The main property of Transitional Development must be online acceptance on the Internet, so that anyone can use it to solve their tasks. Most practitioners have never heard of the IF, H-index, g-index, or RG score; researchers thought up these things for themselves, to evaluate each other. I propose to change the point of view for evaluating researchers, taking into account Transitional Development and not the research paper alone. I propose to include the research paper within Transitional Development; thus, Transitional Development becomes "research + online acceptance". Transitional Development is no longer pure research, but it is not yet commercial development. I propose to create a new formula for evaluating researchers in relation to the notion of Transitional Development. This formula should be more objective than the known IF, H-index, and other metrics.
There is a way to evaluate a researcher: read the papers he has written. It's a little like evaluating a writer: one should not judge his output by the number of books, the number of pages, or the publisher. Trying to evaluate writers by the number of books sold may be even worse: skilled writers would be pushed to write recipe books and biographies of sportsmen. Actually, quantitative evaluation of researchers is the main reason why so many "scientific" papers are just bullshit.
I think the best way to evaluate a researcher is a closer look at his work: not so much at the impact factor of his papers or of himself, but at the structure and content of his or her articles, and whether he or she is making step-by-step progress in the respective field. When it comes to factors that we can measure, I would prefer the RG Score, because it combines the impact of research publications with the discussion and networking qualities of a person.
The best way to evaluate a researcher is yet to be discovered. Metrics have their disadvantages and are easily gamed.
It's true that none of the methods devised to date for evaluating a researcher have been adequate. I agree with Graham that what the researcher has added, in terms of information, theories, etc., should be considered for evaluation.
Stimulating responses. A little diversion please. Kindly follow and write your suggestions about my question on the importance of impact factor. https://www.researchgate.net/post/What_is_the_importance_of_impact_factor_IF
Discussion power and strength in research planning and execution are the two major parameters for evaluating the potential of a researcher.
Are we talking about objective or subjective evaluation? Please be precise.
As everyone on this blog knows, there is no single metric to evaluate a researcher. On top of that, it depends upon what you are evaluating them for…a postdoc? tenure? promotion? election to an honorific society? a grant? a prize? There are many metrics including …number of pubs, citations, journals used, people trained, prizes won, etc. During the course of my career I have observed that many of things that we scientists value and need to have to work, to be recognized and to be valued have become extremely political and based on whom you know and how well you are funded. Those with good funding (hard these days) those with the right cheerleaders, or powerful backers definitely have an advantage.
So what to do?
What do I do to "evaluate"?
First of all, to be clear, in my mind, the QUALITIES that make a good researcher are:
1. The ability to ask important questions that sometimes take you out of your technology-centric comfort zone. Don't get stuck doing things "because you can". Try to create a body of work that moves your field forward. Techniques are tools…nothing more…unless developing the technique is the subject of the work.
2. One's dedication to conducting the research in an honest, carefully controlled manner. Make sure that methods are thoroughly described and detailed in writing, and that the data are not cherry-picked. Make sure that someone else can read the protocol (yes, keep detailed written protocols) and get the same results. Ask people to give you feedback on your papers and grants.
3. Realize that people do compete, some do cheat and some do steal. This is reality. Accept these as facts but remain as collegial and open as you can. Don't spend your career complaining… be aware and move forward.
4. Learn to write your paper up clearly and honestly, even if it takes 15 drafts. Papers are your "face" to the world of science. People need to understand what you did and to "read" not only your text but your tables and figures. Don't hide things under the rug. Don't be afraid to speculate in the Discussion. High impact journals are fine but not necessary. If the study is good, it will rise to the top. Don't judge people by the journals that they publish in.
5. As a reviewer, evaluate what is in front of you, not the "could have done, would have done, needs to do". Is the study well conducted, with all the controls, clearly presented and discussed?
6. Do everything you can to instill 1-5 above in your trainees. In addition, teach them how to deal with failure. Teach them how to READ papers, how to WRITE protocols, results of experiments, and papers. Stress the importance of citing and discussing the work of others. (Your work did not come out of the blue...it is built on the ideas and findings of others.) Work with them on their presentations and rehearse them. Teach them how to answer questions. THEY are your progeny!
Given 1-6 above, my preferred way to judge a person is to read their papers, then pick up the phone and make confidential calls to mentors, collaborators, PIs, etc. with whom this person has worked. I ask about creativity, dedication, honesty, bench skills, people skills, stamina, weak points and strong points, etc. After reading and 4-5 calls I know everything I need to know. I hear common themes about strengths, weaknesses and how good the person is. Then it is up to me to decide. I agree that this takes time and thought, but in the end it's well worth it. All evaluations are subjective...so do your work.
I have one more doubt... Many companies use an HR manager to recruit their staff. If a researcher is recruited by a company, what is the selection procedure? Does the HR manager know about life science or not? Is a PhD or an M.S. degree by itself enough to select a good researcher?
Points well taken!
As I noted, what we are evaluating a scientist for is important.
With regard to your other comment, unfortunately the world is not a perfect place...nor are we humans. The science that we do is part of that world, i.e..... it is imperfect. I did not discuss ethics and humane behavior in the context of evaluating researchers....although I do so when making confidential calls! But obviously, ethical conduct is key to all human activities. The good news is that courses on ethics are now being built into our educational system and we need this, given the history of inhumane and ethically-flawed "experiments", the ever-increasing cases of misconduct, sloppy papers and non-collegiality. It is often complicated BUT we are finally recognizing that humane procedures must be implemented in everything we do with animals, be they mice or primates. As you know compliance is also increasing...although this has its downside too (as I said....all systems are imperfect).
All we can do is identify problems and then try to solve them. It's a slow process, but I think we are improving....at least in the Western world.
Dear Gajendra, I think it is good to create a scheme to evaluate the researcher, starting from qualifications, scientific background, skills, experience, and the value of the research, and ending with how well he updates his knowledge. When this scheme is expressed as a score, it will encourage the researcher to do his best work, not just to chase a grade.
Contribution to the betterment of human lives is the best parameter of a good researcher. Applications and easy implementation of one's research work are also significant.
There is no way to 'evaluate' a researcher because individual value is not quantifiable. The value of the research may be measurable, such as in terms of people's lives saved or quality of life improved but scientists are people, not research engines.
Is it possible to say that 75 research papers indicate more 'value' than 5? Is volume of research more important than subject or quality? Impact factor is already greeted with much justifiable scepticism and would not be accepted by any empirical scientist as a measure of value in any other field of research. It is great fun getting a research score and being compared to our peers but we should not take it too seriously. All the scientific greats had skeletons in their closets, many had personalities that today would be considered bordering on the psychopathic.
Contribution to the betterment of human lives is indeed, as Jaya says, an indicator of the value of a researcher, but the betterment of human life has in the main come from advances in societal values and human rights rather than scientific endeavour.
In terms of chemistry, Fritz Haber was a great researcher, but he is remembered for his callous attitude to human life, including that of his own wife. Galileo plagiarised the telescope, and even saintly Gregor Mendel fiddled his statistics. Perhaps we need to be very careful when evaluating anybody!
In my opinion, relevant research is research that other researchers cite. As long as a paper is being cited, the research must still be relevant to the present.
Yes, the best way to evaluate a researcher is to evaluate his/her scientific contribution at the international level and how this work has helped or improved our lives and our globe.
I think these should be the minimum criteria, although there might be many more, varying from scientist to scientist. For me, his or her research must provide a base or backbone for further research, have direct application to several other lines of research or techniques, answer past problems, and, most importantly, produce reproducible results at minimum cost.
The first step is a better question - what would the evaluation be used for - to understand more about an individual or more about the institution that includes/might in the future include that individual? Considering the characteristics of a researcher, great merit can be found in originality but also in the more mundane process of validating and accurately trying to reproduce, and question, other researchers work. Similarly, there is merit in sustained output in a single research area, but the accumulation of new knowledge is also driven by the curious who make a significant 'starter' contribution in an area and then move on to new challenges elsewhere. This may not push up the conventional metrics or income generation associated with much of modern-day research, but is it valueless? What about the researcher as an educator or communicator, and as a generator of ideas to inspire others?
It is unhelpful to view 'researcher' as describing someone who publishes many papers & generates a lot of research income, gets a decent H index etc. Experience tells me that in many institutions there are researchers who meet these criteria but in terms of original, questioning thought and inspiration, could only be described as notably average.
Great care should be taken when trying to link research and measures of productivity. The inherent competences for research are very personal, part of the individual, and any attempt at genuine evaluation has to be made using criteria designed with the individual in mind. The commonly used evaluation metrics of today are designed to reflect back on to the organisations that include such researchers, and they say little about the value of the individual in real terms.
Several methods exist, but they are all subjective! The best method is self-assessment against the example of the leading researchers of the field!
Apart from all the methods listed above, instead of going by the total number of publications, one should give importance to first-author publications when judging a researcher. As we know, these days almost 100% of the contribution comes from the first author alone; the rest are just official colleagues and minor contributors. So if A has 50 papers but only 5 as first author, and B has 30 papers including 10 as first author, B is better.
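The comparison above can be written as a toy criterion: the fraction of a researcher's papers on which they appear as first author. This is purely illustrative of the argument being made, not an established metric, and the weighting choice (ignoring co-authored papers entirely in the ratio) is an assumption:

```python
def first_author_fraction(total_papers, first_author_papers):
    """Fraction of a researcher's papers on which they are first author.
    A toy criterion illustrating the argument above, not a standard metric."""
    if total_papers == 0:
        return 0.0
    return first_author_papers / total_papers

# Researcher A: 50 papers, 5 as first author; B: 30 papers, 10 as first author.
a = first_author_fraction(50, 5)    # 0.10
b = first_author_fraction(30, 10)   # ~0.33
print(b > a)  # → True: by this criterion, B ranks higher
```

Of course, as the replies below note, any such rule penalizes collaboration; the point of the sketch is only to make the trade-off explicit.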
@Anuj, in this situation we are discouraging collaboration; for any big and complex problem, we need a team. In that situation, everyone will demand to be first author. Evaluating researchers this way can sometimes kill team-based activity, which is a must nowadays for solving complex problems.
I agree with you that the role of collaboration is also very important. Since this thread is about judging a researcher, my only point was that being first author clearly indicates the major contribution of a person, compared with being a co-author. It also suggests that the main idea, new concept, or innovative method in that manuscript came from the first author. So if you want to compare two researchers and their innovative thinking, first-author publications can be a criterion. It doesn't mean that co-authors have done nothing, but it certainly gives an idea of the real scientific contribution of the researcher. I agree that you need a team to solve any complex problem, and everybody's contribution is important. The only thing is that all members of a team will not contribute equally to the work. So if you just want to compare two different fellows, the one doing less supporting work but more major work that came from his own mind is better than the one who mostly collaborates on supporting work instead of pursuing his own new ideas and inviting others.
Yes it's all about collaboration and infrastructure these days!
Generally (but not always) the first author is the bench person or trainee. The last author runs the lab, does a lot of the thinking and/or strategic planning, and writes most of the grants. The two are equally important in my view. In many cases the first author could not be sitting at the bench pipetting if there were no lab infrastructure, money, or key reagents to use.... or even the administrative backup and compliance. The last author could not do the bench work because of the need to coordinate the study, deal with fundraising, etc. So 100% for any ONE person never applies.
The other authors vary tremendously in their contributions. Some run key facilities. Others provide expertise in an area needed for the study. Some ship a key reagent. In some labs everyone who walked into the lab and provided moral support is listed!! In other labs, contributions are more strictly defined by the PI. In a few labs, PIs even take their names OFF the paper so that the first author gets more credit.... Important for tenure, study sections etc.
IN SUMMARY this is a very idiosyncratic and gray area. In my own case I prefer to be inclusive rather than exclusive. But that decision has varied over the course of my career. I now fully realize that it "takes a village"... It's not all about any one person.
Since there is no simple formula, it is important to be aware of the huge variation in outlooks on this issue. This is a situation where "one size never fits all"!!
@Ellen, I agree with your view. In addition, sometimes three or four people contribute significantly to a single problem, each according to their expertise. As a PI, it is difficult for me to decide who has contributed most; at the same time, you cannot have more than one first author (except by marking with * that these authors contributed equally, and the question remains who comes first in the sequence). Even when I wish to do justice to each contributor, it is sometimes difficult to decide. I agree that it is difficult to devise any formula to judge the contribution of a researcher.
I think it's perfectly reasonable to have 2 or even 3 "first authors", i.e., "these authors contributed equally to the study." When in doubt, that's what I do. You must list the names alphabetically, however. That way they can each claim their "first" authorship. It encourages teamwork, keeps up morale, and in 2014 peers understand this. However, that is MY style. As I said, it's a gray area.
I am also now seeing LAST co-authorships, when two large labs were essential to the study.
In this competitive world of highly technical and interdisciplinary research (not to mention complex/stressful funding issues), we need to be a little more flexible and generous with regard to how we address authorship.
At least that's my view!! Not all will agree. But that's OK... things will shake out over time.