I ask this because the volume of research is increasing and the conferences where that research is published are flourishing. I am curious to know whether the time taken for publication and the process of review makes the research better, or whether we instead hinder active engagement in debate on the research as it takes place.
My doctoral thesis examined how the quality of an academic article is assessed, so I did quite an investigation of the issues in peer review: I interviewed senior academics, analysed published articles, editorial boards, etc. That was a few years back now, but little has changed.
While we might naively hope that the review process is robust and objective, psychology and the decision sciences make it difficult to ignore the fact that rational, informed decision makers are of course influenced by matters other than the inherent quality of the content of a paper, even if they are not aware of this influence. However, I hasten to add that the degree of influence of extrinsic factors is not necessarily always high, and also that the peer review process can be seen as a natural and acceptable way of reducing information overload.
The field of history/philosophy of science is an interesting adjunct to this discussion. In particular, the work of Thomas Kuhn provides an interesting perspective on science (the book to read is The Structure of Scientific Revolutions).
Yes, because every aspect of scientific research is "flawed", since science is conducted by humans. The pertinent question is never whether this or that methodology is flawed, however, but what the viable alternative is. Without peer review, all research becomes editorial opinion, nay, even worse. As a reviewer, do I make mistakes? Sure. Lots of them. But that just means tomorrow I need to try harder. So my rule as a reviewer has become this: unless I have the time to do the review right, I turn it down. Period.
John
Thanks for this; this is an important debate and your honesty adds to it.
Passing the peer review process at a 'good' journal serves as a quality control mechanism that signals something is worth reading. This matters more and more as the amount of noise goes up with the sheer volume of papers online.
Undoubtedly, peer review is flawed, as John said, and I fully agree with him. But it also has at least two great virtues, if done well. It helps authors gain a broader perspective on their work and notice problems that might otherwise have gone unaddressed; scientific debate is always positive. In addition, in preparing the report, reviewers may question their own way of working as researchers and authors. And the benefit is not only for them, but for the entire scientific community.
The peer review process is necessary for all of these reasons, but it does have some warts. The biggest one that needs fixing is the editorial policy at certain journals of allowing single-blind instead of double-blind reviews. Knowing the authors whose work you are reviewing is bad science. Reviewers should tell editors they will only review if author information is removed from the paper. I know this is sometimes impossible (proceedings, books, etc.), but where double-blind reviews can be used, they ought to be used.
I agree with John on all counts. What I add here is the mention of another "wart": that the emotional age of a reviewer may not match his or her chronological age. (The same holds true for book reviewers.) Occasionally reviewers throw tantrums, or hurl insults, over a viewpoint, method, or theory that runs counter to theirs or that the author of the paper is passing over in favor of one that that author considers superior. Likewise, reviewers will sometimes strongly urge the author(s) to read several papers by the same scholar—and conventional wisdom holds that these are almost certainly papers that the reviewer has written and wishes to have cited.
Moreover, in highly complex papers, a reviewer may not fully understand the author's work and ask for changes that are either inappropriate or irrelevant to the study.
These kinds of flaws notwithstanding, most Ph.D. students, researchers, and professors with whom I have worked say that at least 90% of the time the reviewers' comments improve the paper. And, as John said so succinctly, "what is the viable alternative?" No process is perfect.
The review process is flawed in cases where the reviewer is not a specialist in the field yet gets a paper to review. He or she might look for flaws in the methodology, etc., but cannot point out the technicalities.
There has been warm debate on the efficiency and validity of the peer review system in the past few years at many international forums (especially the World Association of Medical Editors), but as yet we do not have any system better than peer review, so we have to go with it.
I agree with you all, and here is where an editor worthy of the title can have the biggest influence. I respect editors who have the cojones (figuratively) to tell authors to ignore reviewer temper tantrums. A good editor is never swayed by the ego of the reviewer, only by the evidence.
There was some really interesting empirical work on peer review in the late 70s/early 80s. A bunch of it was published as a special issue of The Behavioral and Brain Sciences, vol. 5, no. 2 (June 1982). A key piece was "Peer-review practices of psychological journals: The fate of published articles submitted again" by Peters and Ceci. Yes, you read that right: they resubmitted papers that had already been published, but with new author names and affiliations. The results were intriguing, and the special issue is jam-packed with interesting commentary on the methodology, the findings and the ethics. I am not sure if you can get this online; Cambridge University Press's search engine did not seem to be working when I checked.
Bill
Couldn't get it online either. See also (from 2006 and 2013, respectively):
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1420798/
http://www.nature.com/scitable/blog/scibytes/the_debate_about_peer_review
Richard Smith wrote a whole book about the troubles with medical journals, including the peer review process, with very nice examples and suggestions for repair. The book was mentioned in the paper of his that I distributed yesterday, but it is difficult to find because it is not with the publisher he mentioned!
http://www.amazon.com/gp/product/1853156736
Via another discussion thread here on RG I happened to stumble upon another drastic case of misuse, or rather abuse, of peer reviewing by the very editor of a journal (as reported in Nature/Elsevier). You wouldn't believe it were it not documented in minutiae across almost 100 pages of a legal verdict, after a case that ran from 2008 to 2012 in the UK. I'll post here a short one-page summary by the journalist (acting on ethical grounds; IMHO not a whistleblower) who won the case after 4 years. The rest you can find from there, if you are interested and have some free time left.
http://www.nature.com/news/i-was-sued-for-libel-under-an-unjust-law-1.10979
A very educational thread, after I have seen parallel threads hammering the same issue. I appreciate the sincerity of colleagues here. Natalie pointed out very realistic issues, and Paul has been keen to address the issue objectively.
I suffered with some reviewers who were out of line and unethical in their comments and behavior toward some papers I submitted a while ago, but what is right will remain right. In some cases the editor took a positive stance; in others they sided with their reviewers.
I took all these experiences as part of our education in going global and being an active learner. No one, at least on my agenda, can shake our determination to go forward and be part of an academic community that is interested in learning.
So, going back to the main question in this thread, I agree that there are cases where the process is indeed flawed, but thank God these are very few compared to the number of peer reviewed papers undergoing the review process.
I also agree with what John said. I think editors play the most important role in the whole process. And I also think it is important to decline a review invitation if there is not enough time to do it properly. But putting this aside, I think the biggest issue is the so-called "pay to publish" journals, which present themselves as "peer reviewed" even though you can suggest your own reviewers (!). I think this is something that should be dealt with immediately.
Dear Paul,
If you refute some principle that had been accepted as true, and your article is sent to a referee who happens to be a believer in that principle, your article will be rejected anyway, even though you may actually be correct.
It is not only flawed; it is broken or collapsing, but the majority would not admit it.
So I think, in general, we agree that the problem is not the existence of the peer review process, but whether this process is always (or almost always) done right. I cannot comment on other fields of science, but I think in History (my field) it generally works well and is beneficial to the outcome of the publications. Although sometimes I think the reviewers have been too kind (sorry John, sometimes there are no coj**** to be picky), and that a more critical appraisal of some works would have been better (or not publishing them at all).
This question is very crucial, especially now that journal publications are highly valued by institutions. The peer review process can be valued or devalued, and its authenticity lies solely with the editors of the accepted papers. Should the extrinsic value override the intrinsic value, the authenticity might be flawed, especially for beginners in research report writing and forum papers. The onus now lies on the editors and their boards to select peers with the prerequisite capacity and capability to examine and assess the quality and quantity of the research, whether for conference presentation and/or the journal publication process.

A leaf worth borrowing is what the Association for Institutional Research (AIR) does. All papers, whether for fora or for publication, MUST be properly reviewed from proposal to final paper before acceptance, and reviewers are ONLY those selected from the area of interest of the presenter (both for fora and for publication). These reviewers are not just a few but many, and all their suggestions and corrections following assessment are themselves reviewed repeatedly before final judgement. It is a blind review process. Additionally, the reviewers are those who have a profound interest in these papers, not for any gain but for knowledge.

Authenticity might be threatened when the research review process is not academically based, when the research interest is financial gain, and when "publish or perish" still reigns in our institutions. Just as Ronald Stewart has also submitted above, financial implications have stifled good research papers from being published; that is why we should encourage collaborations so that those with relevant research areas get their work published. Any group or organization whose utmost interest is financial gain and not quality threatens the real reason for conferences, research, research publications and fora in academia. When proper standards for the review process are laid down, authenticity might not be flawed.

My experience: I was so scared to venture into research because I did not have money for publication fees, and we hardly get grants even from our own institutions, and there is no need for research if your report is not disseminated. My worry for research in developing countries is that real and genuine researchers are being discouraged by the financial implications involved in authentic research.
The peer reviewing and publication process needs to be rapid while still taking quality into account; if this process is slow, then the many Open Access journals that publish in weeks will keep mushrooming.
Taking the quality of the publication into consideration, the peer reviewing process must be fast. If the process becomes slow, it will create a hindrance in the way of publishing articles, compared with open access journals.
5 reasons why peer review matters!
" I have come to understand that peer review is about striving towards the TRUTH – the very quest of scientific enquiry! Here are 5 reasons why I think peer review matters…
http://www.elsevier.com/reviewers-update/home/featured-article/5-reasons-why-peer-review-matters
The peer review process is good provided it is taken in the right spirit. The process must be:
Fast
Transparent, and
Objective.
It is imperative for young researchers to get their reviews on time.
Peer review makes the quality of articles better; it is not time wasting. However, the team of reviewers must be experienced and quite knowledgeable about the issue under review. Peer reviewed articles are always the best quality works in the field of research. But the time taken for review should be reduced by specifying the areas of concentration, e.g. problem statement, purpose, objectives, methodology, analysis, and by giving a guideline on what is expected. A lengthy review process most of the time discourages new researchers from submitting their articles, but the process is quality assurance, so that only publishable and relevant work adding to knowledge is made available to readers.
This is good practice by Elsevier! Elsevier trials publishing peer review reports as articles!
For participating journals, reviews of accepted articles will appear in an article format on ScienceDirect, with a separate DOI.
https://www.elsevier.com/reviewers-update/story/peer-review/elsevier-pilot-trials-publishing-peer-review-reports-as-articles
Because peer review is the main mechanism by which the quality of research is judged, and because the number of scientific articles published each year continues to grow, peer review ought to be considered a duty and a responsibility. The reviewer has to say from the start whether s/he is up to the challenge in the allotted time or not.
I have done several reviews in the past, but no one knew that except those who asked me to do the work. As I opted for anonymity, I also wanted the authors' names to be erased from the articles given to me. This allowed me to give an unbiased opinion about the language and the scientific value of the work. Sometimes I exchanged ideas with authors without either of us knowing the other. I do not believe in doing review work in a hasty manner, and my habit is to read an article three times before giving my comments. A scientific approach to reviewing enhances quality research, and this approach may end up with acceptance or rejection on merit and nothing else.
Hey All
I have had an article published titled "Gatekeepers of the academic world: a recipe for good peer review". I request the august scientists, researchers and academicians to kindly have a look and to continue the interesting topic started by Dr Paul Davis.
Reviewer concerns about the transparency of the peer review process! This is an example of a COPE case.
"Our journal uses an internally transparent process where throughout the editor or peer review process, authors, editors and reviewers are all aware of the identities of who is involved. Reviewers are also told—when initially solicited to do a peer review—that they will be named on the final article manuscript as a reviewer. Prior to publication, the pre-print version of a text is sent to reviewers for their approval to be named (or not) as a reviewer on the article. We do not currently publish the content of the peer reviews.
We recently had concerns raised by one reviewer who disagreed with the content of the manuscript and its suitability for publication; the second reviewer was enthusiastic about the manuscript, and the editors decided to publish the text. The first reviewer accused the editors of behaving in a non-transparent manner and even of being unethical, because: (1) we did not publish the content of the critical peer review and (2) we did not have a disclaimer on the text stating that reviewers were not responsible for the content of the published manuscript (which we had assumed was obvious).
We have thus begun the process of adding the following disclaimer to all our peer reviewed texts (and backdating to all those previously published): “Reviewer evaluations are given serious consideration by the editors and authors in the preparation of manuscripts for publication. Nonetheless, being named as a reviewer does not necessarily denote approval of a manuscript by the reviewer; the editors of the journal take full responsibility for final acceptance and publication of an article”.
Question(s) for the COPE Forum
• What are the benefits of going to fully transparent review, with publication of the content of peer reviews?
• We are aware of the risks (eg, reviewers feeling inhibited from making critical comments for fear of reprisal). Do the benefits outweigh the risks?"
Do read the advice from COPE!
http://publicationethics.org/case/reviewer-concerns-about-transparency-peer-review-process
Is peer review just a crapshoot?
How do reviewer recommendations influence editor decisions?
An important concern in the scientific publication process is how well reviewers evaluate the quality of papers and how their recommendations influence editors’ decisions to accept or reject papers. Are there some comprehensible patterns in the review process or is it just a crapshoot — a random process with reviewers and editors making arbitrary decisions?...
https://www.elsevier.com/connect/is-peer-review-just-a-crapshoot
Whether the review process at a scientific journal works or not depends very much on the editor and his or her editorial board. At a good journal, the review process is probably better than at a bad journal (whatever that means exactly...). The reason is that a high-quality editor is probably at work there, and such an editor (a) knows a lot of people in the subject field of the journal, (b) is experienced enough to judge who should be a member of the editorial board, and (c) follows a defined review process that is fair. If many people think that a certain journal/editor/editorial board fails these requirements, then such a journal will lose its rating as a good journal rather fast (or never get such a rating in the first place). Of course we are not living in an ideal world, but it helps to have some ideals, and one of mine is related to the topic of this question: the current review process is maybe not optimal, but there is probably no better alternative. From a general point of view, in my opinion, a review system is one of the cornerstones of what makes science successful and reliable. Or, putting it differently: one should not believe that every reviewer is just keen on getting rid of academic competitors...
Scientists are very fortunate to have a peer review process that is as rigorous and as vigorous as it is given that it is run by human beings and the material that is being reviewed is also submitted by human beings.
As human beings are fallible by definition and construction, how could the human process of peer review not be?
BTW: I am not saying or implying that the peer (= human) review process can or should be mechanized, e.g. by AI. We are still far away from knowing how to construct such superintelligent, infallible machines.
Publish and don’t be damned
Some science journals that claim to peer review papers do not do so
One estimate puts the number of papers in questionable journals at 400,000...
https://www.economist.com/science-and-technology/2018/06/23/some-science-journals-that-claim-to-peer-review-papers-do-not-do-so
Peer review, if it is rigorous, is a good mechanism to enhance the quality of articles.
The peer review process is good enough if perfectly implemented. However, every review process/system has some lacunae.
No system is perfect, as all are man-made. As time proceeds, flaws appear automatically. Peer review is good enough if it is done honestly, but it should be faster, as it sometimes takes more than one year.
Good point, Pathak. Relying on honest reviewers who want to spread good practices supports the integrity of the reviewing process.
Many thanks, Dr Hussain, for your response on the above-mentioned topic.
Hello there, I agree with the idea that the peer review system is flawed, yet this process is absolutely necessary in science (I consider it flawed for various reasons I will not detail here). As a possible solution, I suggest that the peer review system be opened to any expert and non-expert in the field, in a completely anonymous manner where both reviewer and author identities are hidden during the review, or multiple-review, process. Reviews and comments could be up-voted or down-voted by the community, as on many web platforms. I expect the reviews from experts to be more pertinent, but I think non-experts can raise concerns about several aspects, such as the clarity of the explanations, details of the methods, etc. How to determine the duration of the first review round is an issue; perhaps after a given number of votes the round could be closed by the authors, if they so desire. Then comes the second round, and so on. Once the author(s) judge that enough advice from peers has been obtained, and that the quality of the work has improved significantly, the author(s) alone release the paper. Note that the editor is completely absent from this effort. I also suggest we incentivise the review process by rewarding the "best" answers; how to do so should be decided by a democratic vote within the community. Finally, to make the above voting processes work, it is of course necessary to ensure one vote (one ID) per researcher. Something like that.
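Not part of the original proposal, just a minimal sketch in Python of the kind of anonymous, vote-driven review record described above; the class names, fields and the vote threshold are all invented for illustration, and the real rules (who votes, when a round closes, how "best" reviews are rewarded) would have to be decided by the community.

```python
# Toy model of the community-vote review round sketched in the comment above.
# All names, fields and thresholds are hypothetical illustrations.
from dataclasses import dataclass, field


@dataclass
class Review:
    reviewer_id: str            # pseudonymous ID: one ID, one vote per researcher
    comment: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        return self.upvotes - self.downvotes


@dataclass
class ReviewRound:
    reviews: list = field(default_factory=list)
    votes_cast: int = 0
    min_votes_to_close: int = 20    # assumed threshold; authors/community set the real one

    def add_review(self, review: Review) -> None:
        self.reviews.append(review)

    def vote(self, review: Review, up: bool) -> None:
        if up:
            review.upvotes += 1
        else:
            review.downvotes += 1
        self.votes_cast += 1

    def authors_may_close(self) -> bool:
        # Authors may end the round only after enough community votes have come in.
        return self.votes_cast >= self.min_votes_to_close

    def best_reviews(self, n: int = 3) -> list:
        # Candidates for the "best answer" reward mentioned above.
        return sorted(self.reviews, key=lambda r: r.score, reverse=True)[:n]
```

The point of the sketch is only that reviewer and author identities never appear in the record (only pseudonymous IDs), and that the round, not an editor, gates when the authors may move on.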
The suggestions and blueprints are there! Many years ago, Elsevier, among others, initiated a contest among researchers to submit proposals to renew the review process. Why does it take so long for academics and publishers to put these into practice? We are constantly losing time ... and credibility.
Any human-created system, including the peer review process, is flawed. Do we as reviewers submit our reviews promptly? It is psychology: when a stipulated time period is provided, we consider that duration ample to complete our task. But I guess one must be judicious when bearing the responsibility of being a reviewer. Quality reviewers, chosen on the basis of past experience, should be assigned this responsibility; furthermore, some monetary reward along with a limited duration for the review should be offered to make the process faster. Probably this may work. Thank you.
My experience as an author, as well as an editor of a journal, is fairly positive about the peer review process, as it gives a chance to get considered feedback from independent experts, which usually adds a lot of value. However, at times, if the reviewers are not well acquainted with the area, we may get comments that are not relevant. Still, overall I recommend the process. The only concern is getting dedicated reviewers who are willing to go in-depth and give feedback quickly. A lot also depends on the editorial team and how it handles the peer review process.
When reviewing goes wrong: the ugly side of peer review
Illustrating some of the most common ways that things can go wrong during peer review – and what to do if this happens...
https://www.elsevier.com/connect/editors-update/when-reviewing-goes-wrong-the-ugly-side-of-peer-review
Please check out the following discussions on similar aspects:
What is the peer review process and why is it important? https://www.researchgate.net/post/What_is_the_peer_review_process_and_why_is_it_important
What is your opinion of the academic peer review system, and what are your suggestions for improving it? https://www.researchgate.net/post/What_is_your_opinion_of_the_academic_peer_review_system_and_what_are_your_suggestions_for_improving_it
For different aspects of the peer review process, I recommend the following readings. Many fine resources are available.
https://publicationethics.org/peerreview
https://publicationethics.org/resources/e-learning
Peer review per se is not a bad idea and can help improve a manuscript in terms of intelligibility by having one or two outside views.
However, the implementation and practice contain flaws, some of which are human and some of which are structural problems.
Reviewers may not be interested in improving a manuscript but rather in diminishing their own effort through rejection. Given the limited amount of time available to the editor of a journal with a high number of weekly submissions, the editor usually follows the reviewer's or reviewers' suggestions. Although the lack of compensation for reviewers can be interpreted as a "neutrality guarantee", it is dead wrong, because a reviewer has no incentive to improve a manuscript but rather to keep the process short and, perhaps wrongfully, dismiss a good article.
Speaking about comments as well: peer review serves to ensure the quality of the article under review, but how is the quality of the peer review process itself ensured? Peer review is nothing uniquely academic; it exists, in disguise though, in other institutions (companies, schools, etc.) as well. Having worked in other sectors too, I can say the peer review process is flawed. Here is a comment from a "good venue" in mathematics, SIAP, where the one referee (they didn't bother to look for a second one) said "I [didn't] read past section 2 [add: out of 5]" and later "I will consider [the manuscript] again if it is rewritten [...]". Reading the introduction and the background material but not getting to the results is a very incomplete way to assess a manuscript. The editorial practice of that venue, to end the anecdote, is virtually the same as what has been called a "predatory journal". Arguably this is a small sample; however, such examples stick: with (hopefully almost all, or at least the majority of) referees doing a decent job, a non-negligible fraction does not.
Institutionally, publish-or-perish policies make publications and the accompanying peer review process a business like everything else. As long as there is limited space in journals and long time lags between submission and publication or acceptance, people will try to loophole the quality control. Mandatory publications for the completion of PhD-level education just aggravate the excess of article manuscripts circulating in the intestines of publishing houses. Suppose the scenario that we have person A who serves as a referee for journals JoA, JoB and JoC in an overlapping area. The journals are good venues and everyone, including A, wants to publish there. The number of submissions is thus high, and take as an example that A has 3 manuscripts to assess at a time. A good peer review takes time and effort, which is correspondingly absent from A's own research. At the level of a rational homo oeconomicus, A is inclined, first, to dismiss manuscripts easily in order to have fewer competitors in the market and, second, to let the papers sit for a while until the time elapses by which reports on the papers are expected to be sent to the journals. Given that a large proportion of publications is due to the work of PhD students, who are discouraged after paper rejection and have a substantial probability of leaving, A may simply modify some ideas and use them in their own work. With suitable camouflage, and given A's anonymity, no one can find out anyway. Ethically, this can be justified as rescuing some ideas for the sake of the advancement of human knowledge (and to increase one's own chances of promotion).
In terms of paper content, I admit that I am a big fan of somewhat older papers dating back to times when there was no peer review. There is a famous article in aerodynamics by L. Crocco (deriving Crocco's equation; the link to the article can actually be found on Wikipedia). The paper is short, and even for the late 30s the math is well known. Today, this paper would not have survived peer review. In other fields, such as, I guess, philosophy, the necessity for authors to please referees turns research into a negotiation about content, scope and writing style. A flippant solution is to send articles to a journal and ask the referees to fill in the gaps so that they please themselves, which is more likely if they have performed the work themselves. The resulting papers are mostly boring, then.
Apart from superficial quality controls, the scope of peer review is limited. Light mistakes or even unlucky formulations are easily spotted, but what about serious cases of academic fraud? The Schön case, named after Jan Hendrik Schön, is a good example. With publications every month, even in good journals such as Science and Nature, the guy simply faked data to support his hypotheses. Consequences were imposed years later, but the question is: why did reviewers not become skeptical at the very beginning, after the N-th article? This is not an accusation but simply an instance of the natural boundaries of peer review: superficial mistakes may be found, but deep academic misconduct is not. In terms of mistakes, a somewhat different policy should be employed as well. Mistakes accompany research and, for sure, it is good to spot them, but there is no need to sanction these little things. Even more, people who do similar work may run into the same mistake. If it is known that there is a minor mistake, and where it is, research benefits more than by continuously making the same errors over and over.
Economically, the duration and arduous implementation of peer review often leads to PhD studies being extended to utterly unreasonable lengths. The research training can be completed in 2 to 3 years (meaning 4 to 5 years of grad school). However, 4 to 8 years are common (meaning 6 to 10 years of grad school). This is systematically used to further exploit cheap workforce. For postdocs and non-tenured people, the consequence is to hand the pressure to produce publications for funding and/or promotion down to "subordinates". I will not go deeper into the topic of abuse and exploitation in academia.
I suggest, first, making the peer review process more human: at the other end, from the reviewer's perspective, a person has invested a lot of time in writing a manuscript. Apart from rare cases such as the Schön case, the vast majority of people do not commit misconduct. There is nothing wrong with encouraging people to continue the work and opting for acceptance rather than denial. Second, the peer review itself should lead to improvement, not dismantling: be constructive. Third, be fast; time is proportional to money, even in academia. Fourth, I think good peer review should be compensated academically and/or financially. After all, reviewers' service is requested by journals and authors alike, so a symbolic appreciation should be shown if the collaboration between authors and reviewers leads to publication success.
"Although the lack of compensation for reviewers can be interpreted as a "neutrality guarantee", it is dead wrong because a reviewer has no incentives to improve a manuscript but rather keep the process short and, perhaps wrongfully, dismiss a good article."
Isn't it curious that a researcher applies for the job of a reviewer (knowing what it means and what it is good for), but then, after he/she gets the job, doesn't want to fulfil it correctly and as expected of him/her? Isn't this highly irrational, selfish and myopic behavior, of the kind that is said to be absent among scientists?
In theory, I agree with you Paul. In practice, I think the behavior you have listed is very human. Scientists are by no means less prone to human fallacies and shortcomings than other professional groups.
They are indeed, David. And not only when reviewing! Examples abound: (1) the irrational struggle between proponents and opponents of the Bayesian approach in statistics (cf. "The Theory That Would Not Die"); (2) the phenomenon of "China's Publication Bazaar" (Science, 29 Nov 2013, Vol. 342, Issue 6162, pp. 1035-1039), and many others. Time for another (updated) book on the psychology and sociology of scientists instead of the extremely simplified philosophy of science (exaggeration on purpose: I love philosophies of science ...).
I think, despite all the moaning, the more important question is: what are the shortcomings of academic peer review, and how can peer review be improved (it does have positive sides)? Criticism comes easy; improving something is somewhat more challenging, yet also more rewarding.
Despite peer review’s acceptance within the research community, concerns have been raised about its overall effectiveness. Criticisms directed at the peer review process include bias toward certain authors, inability to detect major flaws, unnecessary delays in publication, and inability to uncover corruption/scientific misconduct. These concerns have weakened the scientific community’s faith in the review process.
The ups and downs of peer review
https://pdfs.semanticscholar.org/1a4b/37be17241dd90a94d4474f142eb80e4a09fa.pdf
Yes, the possibility is high. But what is key to addressing the problem is having in place a strong peer review policy that puts quality control and efficiency at the top of the agenda. I say so because the quality of peer review adds credibility to the articles. I have also noticed that some articles are poorly peer reviewed. What is also frustrating is that some peer review processes take too long.
It would be interesting to know whether the length of time taken to complete a peer review is correlated with the "quality" of the review (how to judge that? One attempt: Article Development of the Review Quality Instrument (RQI) for Asses...). Also, is that length of time predictive of the outcome of the review (in terms of acceptance/revision/rejection)?
Work may exist that addresses these questions empirically. In health research, the following comes close: https://www.bmj.com/content/318/7175/23.short
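Not drawn from the studies cited above — just a minimal sketch, on entirely made-up numbers, of how the two empirical questions could be tested: a rank correlation between days-to-review and an RQI-style quality score, and a comparison of review times across editorial outcomes.

```python
# Hypothetical illustration only: the numbers below are fabricated, not real review data.
import numpy as np
from scipy import stats

# Days taken to return each review, and an RQI-style quality score (1-5) for the same review.
days_to_review = np.array([7, 14, 21, 30, 45, 60, 90, 120])
quality_score = np.array([3.1, 3.4, 3.9, 3.6, 3.8, 3.2, 2.9, 3.0])

# Question 1: is time to complete the review correlated with review quality?
rho, p_rho = stats.spearmanr(days_to_review, quality_score)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")

# Question 2: does review time differ by editorial outcome?
times_by_outcome = {
    "accept": [10, 20, 25],
    "revise": [30, 40, 55],
    "reject": [15, 70, 90],
}
h_stat, p_kw = stats.kruskal(*times_by_outcome.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.3f}")
```

With real data one would of course also need to control for paper length, field and reviewer experience; the snippet only shows the shape of the analysis.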
These questions, however, fail to address the underlying issue in this discussion: whether peer review is too seriously broken or too antiquated to serve its intended purpose. Some are beginning to address the question "what alternatives are there?" If participants here are not already familiar with Ralph's (2016) essay on the topic, I think it provides an evocative starting point; see Article Practical Suggestions for Improving Scholarly Peer Review Qu...
Dear Paul Davis, this article seems to be one that belongs in this discussion. I have asked the authors for a full-text copy. The abstract follows.
This paper investigates the impact of referee behaviour on the quality and efficiency of peer review. We focused on the importance of reciprocity motives in ensuring cooperation between all involved parties. We modelled peer review as a process based on knowledge asymmetries and subject to evaluation bias. We built various simulation scenarios in which we tested different interaction conditions and author and referee behaviour. We found that reciprocity cannot always have per se a positive effect on the quality of peer review, as it may tend to increase evaluation bias. It can have a positive effect only when reciprocity motives are inspired by disinterested standards of fairness ...
Article Opening the Black-Box of Peer Review: An Agent-Based Model o...
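The code below is not the authors' model — only a heavily simplified toy sketch, with made-up parameters, of the general idea in the abstract: a referee sees a noisy signal of a paper's true quality, and a crude "reciprocity" term shifts the recommendation, which can distort rather than improve the match between decisions and merit.

```python
# Toy agent-based sketch of biased peer review; all parameters are invented.
import random

def referee_recommendation(true_quality, bias_sd=0.2, reciprocity=0.0):
    """The referee observes quality with noise (knowledge asymmetry / evaluation bias)
    and shifts the judgement by a reciprocity term (goodwill or payback)."""
    perceived = true_quality + random.gauss(0.0, bias_sd) + reciprocity
    return perceived >= 0.5          # recommend acceptance above a fixed threshold

def simulate(n_papers=10_000, reciprocity=0.0, seed=1):
    random.seed(seed)
    correct = 0
    for _ in range(n_papers):
        true_quality = random.random()            # latent quality in [0, 1]
        decision = referee_recommendation(true_quality, reciprocity=reciprocity)
        deserved = true_quality >= 0.5            # what an unbiased referee would decide
        correct += (decision == deserved)
    return correct / n_papers

if __name__ == "__main__":
    for r in (0.0, 0.1, 0.3):                     # increasing reciprocity pressure
        print(f"reciprocity={r:.1f}  agreement with merit = {simulate(reciprocity=r):.3f}")
```

In this toy version a larger reciprocity term simply inflates acceptances regardless of merit, which loosely echoes the abstract's point that reciprocity by itself can add bias rather than remove it.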
The peer review system has flaws. But it’s still a barrier to bad science
The peer review system has received a fair amount of negative press in recent years. It has been criticized largely because it is not particularly transparent and depends on a small number of peer reviews, an approach that can lend itself to cronyism. In addition it depends on trust: trust that reviewers will be fair and are willing to put sufficient time into a critical review.
https://theconversation.com/the-peer-review-system-has-flaws-but-its-still-a-barrier-to-bad-science-84223
Dear Friends and Colleagues of RG
The reviewing of scientific articles by specialist scientists in a given field of expertise, conducted as part of the editorial process, is indispensable for maintaining a high standard of scientific publications and for the development of scientific journals. However, for scientists who work in narrow, specific specialties, fields and scientific disciplines, the process can be problematic and troublesome.
Best wishes