This can be very easily misused by organised groups which practice self-citation as a group. Another problem is that some papers or books which become the standard citation in some area of research are often cited without being read. One cites them because one must. The last issue is not really a problem because those works have indisputable merits. But the first issue is really a problem. I believe that there is no formula for quality, and that every commission must more or less read the work presented by the candidates in order to decide which of them to hire. The nice world in which you compute a formula and then take a decision will never come.
In Pakistan, citations and the impact factor of the journal are counted each year when awarding a productivity allowance to researchers. If you have a good network of collaborators and scientific brotherhood, citations may be increased significantly.
It is not easy to take citations into account: (1) when would you count them, e.g. how many months or years after the publication of the work (NB: some works keep getting citations decades after they were published)? (2) Even if you establish a certain time period, would you also differentiate between "online first" types of articles (some journals publish initial versions of articles a year or more before they publish the final version)? (3) Which citations would you exclude: only the author's self-citations, or also citations by the author's closest colleagues and main co-authors (as these can also be given out of friendship, etc.)?
Citation counts are considered to be one of the best measures of the performance of researchers, though they also have defects like other measures. Self-citations can easily be removed. Citation counts reported by some websites, for example Google Scholar, include citations in non-peer-reviewed literature, but citation counts restricted to peer-reviewed literature, as given by Scopus, would be a better measure.
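As a minimal sketch of what removing self-citations could mean in practice (the function and the data below are only illustrative assumptions, not how Scopus or Google Scholar actually compute their counts):

```python
# A minimal sketch of excluding self-citations, assuming author lists
# are available for the cited paper and for each citing paper.
# The names and data here are invented for illustration.
def non_self_citations(cited_authors, citing_author_lists):
    """Count citations whose author list shares nobody with the cited paper."""
    cited = set(cited_authors)
    return sum(1 for authors in citing_author_lists if cited.isdisjoint(authors))

count = non_self_citations(
    cited_authors=["A. Author", "B. Coauthor"],
    citing_author_lists=[
        ["A. Author", "C. Colleague"],   # shares an author: self-citation, excluded
        ["D. Independent"],              # no shared author: counted
    ],
)
print(count)  # prints 1
```

One could extend the same filter to citations from frequent co-authors or close colleagues, but, as noted above, deciding where to draw that line is not obvious.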
Citations are not the only measure of the performance of an article; they are one of many ways to evaluate its weight. The other measures are (1) the social and industrial impact in the scientific community, (2) the visibility of the article in the same field, and (3) how many people are interested in working on it so that science and invention progress.
Comments in an issue of Nature in 2016 are relevant to the topic of measuring the quality of scientific work. Unfortunately, I did not retain the link, but I copied the comments and will include them here. There are also links in them referring to other discussions of the topic.
I put most of the following comments (in pieces) somewhere they really didn't belong. They do (in my view) have a lot of potential to increase research quality:
The need for finding real process is [or should be] central in psychology, to develop the quasi-science of 'psychology' into a science -- so there is something reasonable to try to replicate. How to find real process, AND real patterns for that matter, still seems to be too much of a challenge for today's psychology; if you cannot take care of these problems insomuch as they have to do with your particular study in psychology, then you should STOP and take care of these matters before proceeding*. Think: empiricism; think proximate cause; think nature-AND-nurture.
* I actually think this should be a RULE, and that psychologists and researchers should show they have done this before doing -- and certainly before publishing -- their research. (We really should not have to "read between the lines" when psychology writers and researchers leave this implicit or simply assume their view is clear to all or already agreed upon. They would be "hanging" their assumptions "out" for all to see with this, but THIS is exactly what we need -- even if over-and-over-and-over! Part of the key to this procedure, and its possible effectiveness, is the "over-and-over-and-over" part.)
[ (I thought of leaving this final remark out, but I can't help myself: it might be good if all asked themselves, "How am I like B.F. Skinner in form and outlook? Am I?") ]
Another idea I have is to make every psychologist be explicit about her/his general ("overall") theory of psychology (often referred to as "personality theory") [like neo-Freudian and neo-Piagetian, to cite some old, YET still current, ones]. I submit we ALL have one and it colors a lot of what we do (it contains many of the non-explicit assumptions). This would be a second way to get at researchers' assumptions; it is more involved, but should be done at some point: perhaps researchers could all have their statement of this (which may be rather extended) "on file" somewhere (like a web page).
I consider my two solutions, here (above), to be better than relying on any current "peer standard" of quality.
I think that one should distinguish between measurements of the performance of researchers in a capitalistic, competitive society and measurements of research quality. It is necessary to make this sharp distinction because, although it may be reasonable for some people to think that they are correlated, the two are not the same (and many examples of this can be given from the history of science).
To measure the performance of researchers there are good measures, citation counts as given by Scopus being one of the best. To measure the quality of research, the only possibility, I think, is reading the articles, so that the quality, substance, and importance of the insights being reported can be appreciated upon reflecting on them. In certain cases, this requires a long time because of our human limitations.
More than 30 years ago as a new, 40-something PhD student, I co-authored, "Significant Contributions to Strategic Management Literature" with Prof. Ben Oviatt. I presented it at the Academy of Management meeting in Anaheim, Calif., in 1985. We compiled citation counts from the Social Sciences Citation Index (SSCI) and then used cluster analysis to delineate academic respondents to a survey about the literature of strategic management. The resulting groups were differentiated from one another by their different perspectives on the strategy field as reflected in their choices about the significant literature in strategy at that time.
The current list of journals included in the SSCI is here: http://bit.ly/SSCIPubs. I believe that citation counts are an essential part of making an objective assessment of the relative importance of a given paper or book. However, those counts should be placed in a clearly stated context. The number of citations that makes a paper "widely read" in some fields would be far greater (or fewer, depending on the field) than it is in strategic management.
In most fields, there is a consensus about what the 'A', 'B', and 'C' journals are. For instance, I would deem a paper that has 50 citations in 'A' journals only to be much more important than one that has 250 in 'C' journals only. That is because there are far fewer 'A' journals in the typical discipline than there are 'C' journals.
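As a rough way to express that tier-based judgment in numbers (the tier weights below are invented for the example; they are not an accepted standard):

```python
# Purely hypothetical journal-tier weights; they only illustrate the
# judgment that 'A'-journal citations count for more than 'C'-journal ones.
TIER_WEIGHTS = {"A": 10.0, "B": 3.0, "C": 1.0}

def weighted_citations(citations_by_tier):
    """Sum citation counts after weighting each by its journal tier."""
    return sum(TIER_WEIGHTS[tier] * n for tier, n in citations_by_tier.items())

paper_in_a_journals = {"A": 50}    # 50 citations, all from 'A' journals
paper_in_c_journals = {"C": 250}   # 250 citations, all from 'C' journals

print(weighted_citations(paper_in_a_journals))  # 500.0
print(weighted_citations(paper_in_c_journals))  # 250.0
```

With weights like these, the 50 'A'-journal citations outrank the 250 'C'-journal ones; of course, the conclusion depends entirely on the weights one chooses.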
I agree with @Mihai that there is a possibility to generate more citations, and if it is rewarded, such organized groups will bias the evaluation. It is also important (as @Mushtaq says) to have a network of collaborators, since it can generate more citations compared with an author working alone (who also does more of the work).
It is also not obvious whether good research is always cited more frequently. For example, if a scientist completely solves a new (but little-known) problem, there might be fewer citations compared to the case when he/she contributes a small piece to a well-known problem that nobody can solve fully.
But I also agree that such a measure can help to distinguish between scientists who write little (the number of publications is another measure), those who write a lot but things that nobody reads (or cites), and those who write many good papers (that are also cited).
So if we want a measure of performance, it should be a vector with several components: number of publications, number of reads (difficult to measure, but on RG we do it), and number of citations. We can introduce many weights (the journal's index, the author's contribution; the weight for an article with many coauthors should be discounted by their number); a sketch of such a score is given below. And we should still understand that the evaluation of a researcher can be biased: for example, some powerful bosses include themselves in publications without substantial contributions to the creative process.
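Here is a minimal sketch of this vector-with-weights idea (the weight values, the journal index, and the coauthor discount are hypothetical choices for illustration, not an established formula):

```python
# A minimal sketch of the vector-of-measures idea: publications, reads and
# citations combined with weights, scaled by a journal index and discounted
# by the number of coauthors. All numbers and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Paper:
    citations: int
    reads: int
    journal_index: float   # e.g. some journal-level weight
    n_authors: int

def performance_score(papers, w_pub=1.0, w_reads=0.01, w_cites=0.5):
    """Combine publication count, reads and citations into one number."""
    score = 0.0
    for p in papers:
        per_author = 1.0 / max(p.n_authors, 1)          # coauthor discount
        score += per_author * p.journal_index * (
            w_pub + w_reads * p.reads + w_cites * p.citations
        )
    return score

papers = [
    Paper(citations=40, reads=1200, journal_index=2.5, n_authors=3),
    Paper(citations=5, reads=150, journal_index=0.8, n_authors=1),
]
print(round(performance_score(papers), 2))  # 31.5
```

The point of the sketch is not the particular number it prints, but that every choice in it (the weights, the discount, the journal index) is a value judgment, which is exactly where the bias can enter.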
Thanks for all your input. Some of you suggest weighted measures, which of course is sensible, but which also stresses the fact that research quality spans many dimensions. And, how should the weights be established?
Is it necessarily the case that widely read research holds higher quality than less widely read research? I am by no means certain of that. What about a paradigm-shifting paper in physics introducing ideas few people understand? Would such a paper be rewarded at all in any of the above-mentioned systems? Perhaps not.
What about papers which get almost no citations or reads for 50 years after they are written, but suddenly gain popularity because the authors turned out to be ahead of their time? The Nobel prize-winning paper of John Nash might be seen as an example. In a strict reward-based system like the one we face today, John Nash would have been sacked from Princeton many years before he got the prize. Is this the kind of system we want? I am really not sure at all. I believe that in order to capture the beautiful minds, we have to allow some silly guys (and dolls) space, room, and time. A too-strict reward/punishment system may very well kill academia as we used to love it. And, even more seriously, the really good ideas may no longer be created within our beloved "business".
I agree, Kjetil. And what about theoretical papers? They are often not even mentioned in the journals among the manuscripts to be welcomed. As Haig & Evers (2016, Ch. 4, p. 82) write: "most new theories are in a decidedly underdeveloped state, and the unfortunate result is that researchers unwittingly submit low-content theories to premature empirical testing." However, the latter may get many citations, while comprehensive theoretical accounts, if accepted for publication at all, may largely be ignored.
Haig, B. D., & Evers, C. W. (2016). Realist inquiry in social science. Sage. https://www.researchgate.net/publication/315643748_Realist_inquiry_in_social_science