Lately, more and more critical voices blame the rules of the scientific game. Even Nobel Prize winners doubt the usefulness of rankings and impact factors. How can scientific success be measured?
Counter-question: why do you want to measure it at all? I have a feeling scientists themselves pay so much attention to these "rankings", coming up with different calculation methods only to criticize them afterwards, saying "forget it, the IF does not mean anything anyway..." ;))
True scientific discoveries, whether trivial or breakthroughs, will be appropriately acknowledged anyway, without any additional ranking tables or high IFs; there are plenty of examples around. Maybe I am too naive, but I define "scientific success" as competence in generating ideas, coupled with the ability, both intellectual and financial, to carry them out.
Maria is right as far as true discoveries are concerned. However, most works are incremental in nature, and my experience teaches me that their impact really does depend on the journal in which they are published. I have often observed a correlation between citation counts and impact factor. Even now, this correlation still strikes me as a bit strange, as I know many excellent papers published in not-so-famous journals, and conversely very poor papers in well-reputed ones. However, it seems that these are exceptions and that reputed journals, i.e. those with a high IF, are the ones that lead to many citations. From this point of view, I feel that the IF is not meaningless. Additionally, even if the term "competition" is too strong, I feel that many scientists like comparing themselves to their peers and need some kind of metric to measure their own performance.
The ranking of universities is a totally different problem, as the way rankings are calculated depends on the country. My university appeared in the Shanghai ranking soon after it was formed by the merger of 4 former universities. Did we become better scientists? Certainly not; this is just a "mass effect", as a bigger university has a statistically greater chance of having brilliant researchers than a smaller one.
Maria, I personally do not want to measure it at all, but I think it matters in the internal scientific discourse. That was the reason for my question. I agree that researchers pay too much attention to these metrics, but I do not believe they have no influence. My experience is that researchers applying for certain positions are evaluated mainly on the basis of their rankings and impact points, even in the early stages of these processes. I am convinced that "true scientific discoveries" are not acknowledged automatically; rather, acknowledgement has a lot to do with paradigms, hegemonic discourses, and political networks within the scientific communities. I would not go back to the examples of Galileo Galilei or Newton or Freud, but similar examples still exist today. What do you think?
Well, Markus, hegemonic discourse is exactly what has no place in the scientific community ;)..
In my opinion, this metric system should exist, but it should not be given such a high degree of importance. The current situation has turned out like this: you apply for a grant and you are asked for your h-index, IF points, or whatever metric somehow characterizes you. These numbers emerge from your true markers of achievement, your publications, so in theory it is fair and should reflect your "scientific success" in numbers.
But then come the problems: 1) we know that sometimes only members of a particular network get the opportunity to publish in high-IF journals; 2) the reviewers of submitted manuscripts behave not like objective judges but rather like victims of internal political games; 3) the most highly cited papers are often methodological ones or reviews, which frequently have only a marginal connection to "scientific hypothesis generation and testing"... and so on. In the end, such metric numbers do reflect a certain status, but the system, as always, is not optimal ;)