Dear Nisreen! That is exactly what I would like to know from my colleagues. If you are asked to evaluate a paper, what criteria would you use to say "that is a top-quality paper"? We have had many numeric (quantitative) and indirect indices, like the ISI "Impact Factor". Is it possible to evaluate "quality" in science (and in parasitology in particular) directly?
In my experience there are two kinds of "quality": journal quality and value quality.
Journal quality is whether the study is conducted according to the rules of science and whether the topic is one the journal wishes to promote (often microbiology, because fast development in the field means citations, which means higher impact for the journal). This kind of quality is a question of learning the rules of planning and conducting research - and writing it up eloquently in English. This describes the bulk of publications.
The "value" quality is more subjective and is often hard to judge whether you are an editor, reviewer or author evaluating your own work. In other words "novelty". If you get a novel idea, do the experiment, and try to publish - the novelty (value) may not be obvious to your peers even if the study has been done well (journal quality).
I personally like Karl Popper's focus on boldness and falsifiability as a mark of quality (and I think this applies to parasitology too):
"[Great scientists] are men of bold ideas, but highly critical of their own ideas: they try to find whether their ideas are right by trying first to find whether they are not perhaps wrong. They work with bold conjectures and severe attempts at refuting their own conjectures."
If I see a bold hypothesis and honest self-criticism by the author, in addition to a well-done study, I consider it of high quality.
I would say that a "quality" study is characterized by a representative study group (e.g. 100 mice, not 2 or 5), good, novel, and repeatable methods, proper statistical analysis, and well-presented results. And, most important of all, a well-stated hypothesis. You may say that I have described a standard journal paper. However, we all know that many research groups work with a very small number of replicates and analyse their data with the wrong statistical methods. All of that results in poor science. As Brian Lassen wrote, it is very hard to discuss the quality of research, and in my opinion it is a very personal judgement.
A lot of people say "1000 mice are better than 100", but is this number really so exact? What should we do when the study does not allow us to reach this number? Is an article in parasitology analyzing only 50 hosts not valid? And one analyzing 10 hosts?
To try to resolve these doubts, I turned to this article:
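Just to make this concrete: the usual statistical answer to "how many hosts is enough?" is a power calculation, not a fixed magic number. Below is a minimal sketch in Python; the effect sizes, significance level, and power are illustrative assumptions of mine, not values taken from the article above or from any study in this thread.

import math
from scipy.stats import norm

def hosts_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample comparison of means
    (normal approximation), given a standardized effect size (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # quantile matching the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Smaller expected effects demand many more hosts per group:
for d in (0.2, 0.5, 0.8):
    print(f"effect size {d}: about {hosts_per_group(d)} hosts per group")

Under these assumptions, a large effect (d = 0.8) needs only about 25 hosts per group, while a small effect (d = 0.2) needs nearly 400. So whether 10 or 50 hosts is "valid" depends on the size of the effect one hopes to detect, not on any universal threshold.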
Thanks to those providing their insights on "quality". I hope we will have many more contributions. Perhaps we may start discussing some specific aspects. I liked Ummer's objectivity and his five criteria (the sixth was left blank; are there any other criteria, Ummer?). I would like to comment on the "regional" versus "global" problem addressed in a research report. There may be a "regional" problem that leads to investigation and to the production of innovative approaches even though it is not a global problem. In that case, quality would be an intrinsic character of the research. What do you think about it?
Brian Lassen and Yuri Tokarev have pointed out that defining the "quality" of a scientific paper is a hard task. I agree. So, for a hard or complex problem we will not have simple answers, that is for sure! That is a beginning. Relying on a single "index" like the Impact Factor is not an adequate solution for quality evaluation. At the same time, I think we should not rely only on what some journal or scientific society has adopted as criteria; we as scientists should be continuously seeking a possible consensus about these criteria and their application. That is because science is dynamic and things may change. Also, different fields of science may have slightly different requirements for their "quality" definitions. We have a research group at PUCRS University trying to establish a multi-criteria algorithm for the evaluation of "quality" in research, and the preliminary scheme will soon be tested by our funding agency in Brazil's southernmost state. We must try to offer an alternative to some very simplistic approaches, like isolated attention to the "impact factor" or even to the "citation index". I appreciate all your insights very much and hope to continue our discussions.
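To illustrate what a multi-criteria evaluation might look like in its simplest possible form, here is a sketch of a plain weighted-sum aggregation in Python. The criteria, weights, and 0-10 scales are purely my own illustrative assumptions; they are not the actual PUCRS scheme, which is not described in this thread.

# Illustrative multi-criteria quality score as a weighted sum.
# Criteria and weights are assumptions for discussion only.
CRITERIA_WEIGHTS = {
    "hypothesis_clarity": 0.25,    # well-stated, falsifiable hypothesis
    "methodological_rigor": 0.25,  # design, replicates, statistics
    "novelty": 0.20,               # the "value" quality discussed above
    "reproducibility": 0.15,       # methods described well enough to repeat
    "relevance": 0.15,             # regional or global importance of the problem
}

def quality_score(scores):
    """Aggregate per-criterion scores (each on a 0-10 scale)
    into a single 0-10 quality score using the weights above."""
    assert abs(sum(CRITERIA_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

# Example: a methodologically solid but not very novel paper.
example = {
    "hypothesis_clarity": 8,
    "methodological_rigor": 9,
    "novelty": 4,
    "reproducibility": 8,
    "relevance": 7,
}
print(f"quality score: {quality_score(example):.1f} / 10")  # -> 7.3 / 10

Of course, a single aggregated number can inherit the same weaknesses as the impact factor if the weights are chosen arbitrarily, which is exactly why the continuing consensus about criteria argued for above matters so much.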