Dubious science and the failure to identify it can lead to serious consequences. I stumbled across this interesting article, "Who's Afraid of Peer Review?", which investigated several publishers by submitting fictitious manuscripts. It is alarming that many were accepted. How do we stop profiteering and abuse of the peer review process?
http://www.sciencemag.org/content/342/6154/60/suppl/DC1
The problem of peer review is three-fold. Foremost, it is a gate-keeper, not allowing opposing views or competitors. Secondly, it is a platform for those who need credits (I am a reviewer for several journals). Thirdly, there are the trolls: those who have nothing to say and are opposed to anyone who has something to say. There are many honest reviewers who attempt and accomplish an honest review. Their efforts are important, but their influence is greatly diluted by the others.
Gate-keeper reviews are the worst, as they prevent new ideas and examination of the existing consensus. Platform reviews are meaningless, and their presence prevents legitimate reviews. Trolls not only obstruct legitimate reviews, they add abusive behavior to the process.
Peer review can be improved by open reviews. Anonymous peer review is the basis and support of all bad behavior. Open review is the answer. A paper should stand for open review for a set period of time. Anyone should be allowed to comment, but only if fully identified. An editor arbitrates between author and commenter. Publication must include the comments and their resolution.
The peer review process is a necessary mechanism associated with the publication of a manuscript in a journal. Its main responsibility is to ensure the quality of the manuscript submitted. However, the quality of the peer review process depends on the editor of the journal and the qualifications of the reviewers.
The editor should ensure the selection of appropriately qualified experts in the field of the manuscript and oversee the quality of the work they do. The editor should not accept the following types of comments from a reviewer:
1- The paragraph is wrong and should be changed by the author.
2- I disagree with the opinion of the author.
3- The idea behind this sentence is incorrect.
And similar ones. The reason is very simple: if the reviewer has a different opinion about any specific paragraph written by the author, then he/she should indicate where the problem is, state his/her opinion, and suggest possible text that the author can use when revising the paragraph.
On the other hand, the reviewer must do his/her work responsibly and in the best possible manner. If he/she has no time to do the work with the necessary quality, then he/she should decline the invitation from the editor.
The acceptance or rejection of a manuscript is a great responsibility of the journal's editor and of the reviewers selected by the editor to do the review work.
@Jorge Morales Pedraza and Joseph L Alvarez: your ideas are so concise and deep that I wish you could come together for a publication on the peer review process. There are so many childish trolls in the system that leave you reeling in pain and confusion; open review is the answer, and concise comments/opinions with possible exits that the author can use for revisions are the way out of this quagmire. Thank you so much, Jorge Morales Pedraza and Joseph L Alvarez!
We are all involved in doing research, writing papers, and the reviewing process.
We have to be honest --- and advise others to be honest in this process as well.
What is reflected in the article published in Science is a serious problem with open-access publications, and to some extent it can be addressed by some of the suggestions given above. But I would like people to address the serious problem which has been doing irreparable harm to the development of new ideas and true understanding of the subject. This problem is related to anonymous refereeing, which has created a gate-keeping system.
I have been a victim of this system. A paper rejected by one journal (which said it was just a simple classroom exercise) was accepted by another (both journals were peer reviewed). Still, I believe in peer review, but it should add a third-eye perspective to the work of an author. It is like a third-party inspection.
Dear All, peer review is a very important step to ensure that only well-written papers with high novelty are published. However, the peer review process is not yet transparent. I believe that the main aim of the review is to help the authors improve the manuscript. From this point of view, transparency is very important! I think that one of the best systems has been developed by Frontiers: http://www.frontiersin.org/about/reviewsystem.
This system is really transparent and, I think, in the future it should be used by all other journals as well.
Here are 4 threads on related subjects that you may be interested in examining:
https://www.researchgate.net/post/How_do_we_solve_the_problem_with_the_unethical_selection_of_authors_of_a_paper
https://www.researchgate.net/post/Do_you_think_that_the_contribution_of_the_first_author_of_a_scientific_paper_should_be_evaluated_more_than_coauthors_contributions#view=55a0fb57614325d0d98b456d
https://www.researchgate.net/post/What_is_a_reliable_way_for_researchers_to_achieve_a_great_number_of_citations_over_time#view=55a28fef5cd9e31c1c8b45cb
https://www.researchgate.net/post/How_can_academic_research_dishonesty_be_prevented_and_what_punishment_do_you_suggest_for_the_perpetrators
I agree there is a problem, but criminal prosecution for abuse? Impossible, because (a) it would be impossible to define abuse and (b) it would deter peer reviewers from doing their work in the future.
My first professor said to me regarding peer reviews:
'Where there are people, human weaknesses occur.'
Publications are the most important gateway to the world; through them you get attention, recognition and money for research.
In any case, you are dependent on the quality of your own publication and qualifications, as well as on the goodwill and help, but also the jealousy and envy, of the editors and reviewers.
Due to the almost unassailable power and arbitrariness of the reviewers, each of us is forced to rely on our own skill and luck.
By the way, I also belong to those with human weaknesses!
I agree wholeheartedly with one of the previous respondents who argued that all reviews should be open (i.e. not anonymised). This would, I think, make for more constructive and thoughtful reviews. I also think that editors ought to offer very clear guidance as to what constitutes an acceptable review and should reject reviews that are purely negative or partisan (partisan, that is, in terms of methodology and/or theoretical orientation). The peer review process plays an important part in the career development of early and mid-career academics in particular. A negative review - or a review which simply parades the reviewer's prejudices - can do untold harm to the self-confidence of inexperienced academics (and, indeed, to more senior academics). As scholars we should take extremely seriously the need to support one another and help one another develop as scholars and writers. A system of open reviews coupled with strong leadership from editors could help shift the academic culture to one that is more collegial and less competitive.
The idea of criminal prosecution in case of abuse of the peer-review process is interesting, in particular when, in order to prevent a critical paper or an unorthodox new view from being published, fabricated negative conclusions are passed off as the genuine findings of a serious investigation of the scientific quality of a work.
A peer-review report is a document that in science serves as the evidence of the scientific quality of the work under consideration. Under Dutch law, putting fabrications on such a document would be "forgery" (valsheid in geschrifte, art. 225 WvS), just like putting fabrications on e.g. an invoice.
In the Netherlands, my PhD project on possible foundations of physics had at one time been cancelled by the use of such fabricated peer review reports (it is easy to prove that the negative conclusions are fabricated). If I had the means, I would sue them all (those who compiled the false peer review reports, among which is a Nobel laureate, and those who knowingly used these reports as if they were genuine).
I think the first step is to start seeing this as scientific misconduct; see my paper in Sci Eng Ethics (available from my ResearchGate account): https://www.researchgate.net/publication/235659086_Scientific_Misconduct_Three_Forms_that_Directly_Harm_Others_as_the_Modus_Operandi_of_Mill%27s_Tyranny_of_the_Prevailing_Opinion
It will be hard to determine which review is right and which is wrong. However, the method suggested by Joseph L Alvarez is sound: open review will increase the accountability of the reviewers. There are also cases in which editors return genuine research papers without sending them to reviewers at all.
Dishonesty on the part of position holders not only obstructs the good work of authors, it also prevents genuine work from reaching the audience that needs it.
Dear Prabhat Ranjan,
about determining which review is right and which is wrong: I think the question should be which review is within the framework of a scientific discussion, and which is not.
In my paper in Sci Eng Ethics I have put the criterion of demarcation as follows: when not even an attempt has been made to adhere to principles of good scientific practice, then a review is not within the framework of a scientific discussion - e.g. lashing out at the author instead of addressing the argument in the paper (and I have seen such peer review reports; I even had to wait ten months for one).
This criterion leaves enough room for honest mistakes in a peer review report (that is still within the framework of a scientific discussion).
Dear All,
Joseph's categorisation of the subclasses of harmful reviewers is appropriate. I also agree with him that the gate-keepers are the most dangerous; unfortunately they represent almost the majority of reviewers and defend an unnecessary status quo and their own interests. I have had some unwelcome experience with them. Another damaging type of reviewer is the platform builders, who form the second-largest subgroup. They are mostly indifferent to the value and idea of a manuscript and thus are not capable of an evaluation. As for the trolls, I have not met them.
Open review is a fine idea, but given the complexity of the - often - business interests present in the publication process, and the scientific prejudice, there is only a tiny likelihood of its implementation.
There must be some problems with peer review of journal articles, but as an editor myself, I have not found a better alternative. It is important for the editor to find reviewers who are working closely on the topic of the submission, and the editor should not simply ask reviewers to recommend publication or rejection but also to give reasons for their recommendations. From my experience, almost all reviewers are serious, and I appreciate their work greatly, particularly since they do it entirely on a voluntary basis. (I don't think reviewing an article for a journal counts much, if at all, in their institutions' assessment of their academic performance.) Of course, some reviewers are stricter than others; this is inevitable. Fortunately, a paper rejected by one journal can be resubmitted to another journal if the author truly believes the editor's decision, based on the reviewers' recommendations, is not fair.
I agree with Yong Huang; however, I would add that the credibility of authors, reviewers and editors increases with their honest and genuine work. Maintaining or upgrading one's credibility (whether as an author, individual, journal or market brand) takes time.
This is quite serious. Behaviors favored by the current academic reward system (numerous publications no matter how trivial or repetitive, abuse of those under supervision, torpedoing manuscripts or proposals under review to keep competition down, alliances by the powerful to fund each other with a far lower bar for the usual metrics) continue to widen the gap between those with power and those without (newer researchers and faculty), and also tend to select for the nastiest to succeed. Most importantly, it is not only ego at stake; careers and professional quality of life are really the issue here.
http://www.the-scientist.com/?articles.view/articleNo/32287/title/All-s-Not-Fair-in-Science-and-Publishing/
Peer review is an unpaid and thankless job. My sentiment is quite the opposite of the question posted...
Honesty in the peer review process is a presumption of academic honesty, and we must act according to our professional principles. Criminal prosecution of abuse of the peer review process is a MUST!
Dear friends, the COPE Ethical Guidelines for Peer Reviewers are a fine resource for this issue.
A case of a compromised peer review system in published papers follows in the second link.
The third link covers practical cases of abuse of the peer review process!
http://publicationethics.org/search/site/peer%20review
http://publicationethics.org/case/compromised-peer-review-system-published-papers
http://publicationethics.org/search/site/abuse%20of%20the%20peer%20review%20process
Ljubomir
I doubt the possibility of criminal prosecution, and whether it would fix the system in its present form. Nevertheless, I do have in mind an appropriate punishment for a troll I encountered.
Here is another perspective: as a reviewer/editor, I have made substantial recommendations and changes, added resources, and even rewritten the introduction and body of a manuscript, and did not receive any formal credit - not even in the form of an editing acknowledgement, let alone authorship! Very few people acknowledge anonymous reviewer comments, let alone pay homage to specific contributions. Personally, I would love an open review process. It would certainly distinguish the naysayers and squashers of new perspectives. But how would authors properly acknowledge contributions and/or give writing credits if 25 people chimed in with the same or similar recommendations? Just to the first person? What if the fifteenth person is the actual expert on the topic? So you see, there are some problems with open review as well. But I think that contributions should be acknowledged in some shape or form on all papers. This would help to change the spirit of writing and review and move it closer to the prospects of open review.
Fine response, dear @Beth. Let me bring to your attention, as well as to the attention of the followers of this thread, the following article, "Elsevier editors share their top reviewing tips - part 3". Links to parts 1 and 2 are available in the article. It is a very useful resource!
"Although the structure of your peer review report may depend on the journal for which you’re reviewing, to a large extent it will follow a standard structure, just like research articles do..."
http://www.elsevier.com/reviewers-update/home/featured-article/elsevier-editors-share-their-top-reviewing-tips-part-3
Dear Martina,
What you indicated is strange. How is it possible that fake or fictitious manuscripts are accepted while those with true and real data are not? It would be good to know the scientific fields, subjects and persons involved in these falsification processes. Can you mention some examples?
Dear Ljubomir,
You mentioned honesty as a determining virtue in reviewing. Interestingly, there is a thread on the characteristics of a perfect scientist. Most of the participants have not considered - because they have not mentioned - that honesty may be important in forming the perfect scientist. This means that there must be a secret(?) contradiction here. If the perfect scientist does not need to be honest, cheating reviewers can be perfect scientists. Can't they?
One thing is true: there are many unhealthy practices in peer review processes, and I have had some similar experiences. Once, a reviewer delayed a good article by a member of our research group and published a couple of papers based on it in some fast-publishing journals. Similar unethical practices have happened on other occasions as well. This situation has to change. The peer review system must be transparent and open. I am not sure whether the defaulters can be prosecuted, but some strict measures should be taken to avoid such unhappy instances.
This idea of an open review process is new to me, and I find it very good. How can responsible behavior be fostered by anonymity? Anonymity in general promotes irresponsible behavior. Internet forums where people are anonymous behind avatars are notoriously unserious. Since what I say here on RG is public, and since I will be judged on it, even if I feel frustrated about an idea I will avoid making comments that are not appropriate. I may not succeed all the time, but the fact that I have to bear responsibility for what I say is a very strong incentive to do so.
A reviewer has such power that it should be a minimum requirement that his reputation be on the line. Our reputations should always be on the line. Someone whose work does not manage to get published does not even have the opportunity to build a reputation and a career. At the very least, those judging him/her should put their own reputations on the line while doing so, provide reasons for their judgement, and accept that they will themselves be judged later on.
Serious, open and responsible reviewing, which involves time and effort and sometimes creativity in the suggestions made, should be acknowledged in the publication itself, and reviewers should have a list of reviews in which their reviewing contributions are recognized as their own publications. In a certain way, when the reviewer's interactions significantly change a paper, the reviewer becomes a contributor to that research. If that were officially acknowledged, reviewers would be encouraged to be creative in their reviews, since it would become an opportunity to be recognized as contributors.
@Ljubomir--good to know. I am currently working on a book for Elsevier.
Dear Louis,
Open review has its history even among the threads at RG. You can review any (?) publication here, but - unfortunately - the reviewers may be anonymous.
Dear Beth,
A normal author always acknowledges helpful comments from reviewers. The question may be: what is a helpful comment?
András,
When I became a member of RG, open publication reviewing did not exist. I suggested it in one of the threads, and six months later it was made available. Here is a good example of non-acknowledgement. Maybe they got the idea themselves. Maybe they got it from someone else. Or maybe they got it from me. RG should acknowledge suggestions by RG members that it adopts. RG is not very open in general and cultivates an anonymous culture. For example, they do their best to act without revealing who in the management decided something and for what reasons. They have an opaque, anonymous scoring system that nobody has cracked so far. I have zero scientific reputation and no publications, and my RG score is 129! That is fine with me; I do not use this score to evaluate myself or other RG members. The score is public, but the scoring method is private! They unilaterally delete members without any justification. And the worst is the famous anonymous down-voting. They are basically a typical commercial organisation, and anonymity is perfect for such a power structure.
Dear Hunk Smid,
Nothing is perfect, said the fox when it realised that there were neither hunters nor chickens on the planet of the little prince. The peer review system and the RG evaluation may involve various vested interests which are not transparent to the public.
Dear Louis,
You can be glad to have had a part in the establishment of such a development as open review. By the way, it is not often used, and the anonymity option has been an own goal.
As for the RG scoring system, it has already been discussed in many threads. This system is not only opaque but also decreases scientific performance, as it forces members into permanent activity and participation in boring threads in order to get higher scores, which often amount to a dubious virtual performance.
Dear Martina,
The "Irren ist menschlich" (to err is human) scenario for accepting weak or unsuitable manuscripts is a bit exaggerated.
Dear Joseph L Alvarez, would you like to share this punishment for trolls? Or would you rather inbox me?
John Anyam
The punishment shall fit the crime. Identified trolls should be the lead item on Google News. They should be required to begin all correspondence with I am a troll.
Dear Andras: I will provide some of my commentary from various documents that I have peer-reviewed in the last couple of years. You can determine if these are relevant comments.
"Overall, this is a well-written and researched paper. But you lose readership in your abstract because you did not take the same time and consideration to represent your work in this very important brief to potential readers."
"Your explanatory contribution of X variable is small (6%) and many statisticians would attribute this to noise in the data. Therefore, you do not represent strong evidence for your conclusions."
"Your decision to use SPSS software on a sample size less than 200 is not sound. Some versions of this statistical software (which you did not identify) will not even run if there is an insignificant amount of sample size. This makes me question your results due to choice of research design and lack of appropriate calculation for sample size. You should reconsider your research design and software choice so that you can provide statistical results in accordance to your N population."
"You have designated some of your variables as moderators but you have failed to statistically verify them using Baron and Kenny or other acceptable methods for your particular research that distinguish variables that are moderators or mediators."
"You did a very nice job on a novel topic in management. However, you have identified 4 constructs in your study and only 3 research questions. You have doubled up on one of the questions which means that this question and any hypotheses associated with the research question cannot be used towards results/conclusions. You cannot measure more than once indicator in relationship to an outcome in an RQ or a hypothesis statement."
"You failed to identify the literature review information in your research."
"You have not provided any theoretical framework for your conclusions. Therefore, there is no way to compare results because you do not establish constructs against expected outcomes."
"Identification of predictor variables (unless it is a literature review) is only valid if you associate the predictors with outcomes. If you do not adhere to the causal path of X-->Y, then you cannot draw conclusions that X is causal to Y if it is not studied in relationship to an outcome."
"You have drawn conclusions from results of statistical analysis but did not identify your research methods, statistical software, or present evidence from the analysis in the expected output for your research design. Therefore, I cannot evaluate your findings and must reject this paper."
"Your theory section reads more like a research design case study. Separate these items in your document by adding a research design section and background/lit review to distinguish this information. Find a good example of a case study and emulate the writing style. Ask your college or local research librarian to help you, if necessary."
"Condense your material-synthesize. The length of your paper is tedious and can easily be cut in half if you adhere to this process. This way, readers can focus on quality information."
"Find a trusted editor--the excessive grammar and punctuation mistakes in this paper are distracting and take away from any validity of the material."
"Focus your information and present it in a consolidated manner. You do not have to 'teach' the supporting information. You simply have to present it in a cohesive manner that supports your position and final results."
"The content of this paper is valuable but not appropriate to the concepts that are consistent with this journal. Please consider XYZ journal or something like it and resubmit. I think that you will have better luck there."
"You have not referenced your information in text. This is problematic and sufficient reason to reject this paper."
"Your introduction should introduce your work and your intent to provide new information in the document, not the history of XYZ (pg 1, line 1-45. Further, some of your paragraphs in background should actually be in your introduction (Pg. 2, lines 16-54."
"You have utilized the same phrase in many sentences. Use synonyms or rewrite language to make these sections read cleaner."
"Your tables are not formatted properly and you have headings with one word broken up over 2-3 lines. Review the author's guidelines and the XYZ style for tables to make corrections. You may want to create the table using a horizontal versus a vertical page setup and then fit the items accordingly when you transfer them to your document."
"The methods section is inappropriate. Though the author(s) have correctly cited independent and dependent variables, they do not define these factors in an authoritative format. The info is scattered throughout the paper and much of the relevant theory info is buried in extraneous text."
"Remove extraneous information, there is quite a lot of it. One example on pg.4 PDF lines 10-43 can easily be reduced to this..." (And then I synthesized the information for them in 6 lines as an example)."
"Two of the Research Hypotheses have more than one factor. You should only test one factor at a time. For example--- Your Hypothesis #1. should be split into 1a and 1b."
I could go on but I think that this should give you a good idea of the type of information that I share with authors. I have slightly altered responses to eliminate actual project identifiers.
Beth Ann
In this discussion, much of the focus has been on abuse of the peer review process to prevent publication of a new idea or a criticism. But we should not forget that there can be good reasons to reject a paper; there is even such a thing as a "good rejection" - that is, when your paper improves because you incorporate the comments of the referee, and it gets accepted by the next journal to which you submit it.
@Marcoen: I was trying to express the concept of what you term a 'good rejection' in an effort to show that there is value in the peer review process when blind reviewers read the paper for content and take the time to educate writers. When I was first learning how to present information in an academic fashion, getting someone to tell me what was wrong and/or how to correct it was like pulling teeth! I recall being so frustrated when they made a negative comment ("this is not right") and then did not take a minute to indicate what was not right and how to correct it. Earlier in this thread someone said that it is hard to get reputable reviewers. I think this is sad, because there are many experienced professors who could contribute and positively impact other researchers by continuing to teach and share their trade in this special way. (By the way, sometimes you do get recognized for your work as a volunteer reviewer. Emerald Publishing recently did this by naming me an Outstanding Reviewer in 2015.)
My congratulations, dear @Beth, on the well-deserved Emerald Publishing award of Outstanding Reviewer in 2015. It is fine motivation to keep on doing better and better.
The other recognized publishing houses also acknowledge the work of their volunteers in the review process!
I have reviewed around 7 papers for different journals in the last two years, and I tried my best to be fair and constructive. Nevertheless, my own papers did not always receive the same treatment. Reviewing is a subjective process and, as such, open to mistakes. I think that a reviewer should aspire to maximum objectivity in his judgement, and he should see to it that his judgement is based on knowledge.
Dear Beth,
Many thanks for the details of your reviews. This is a fine example of transparency. By the way, I had no intention of criticising your reviewing activity.
Dear Beth Ann Fiedler
Thank you so much for your post. I have learnt a lot from it; I supervise undergraduate projects and am at times at a loss for what to say to be really helpful. If you, or anyone out there, could provide examples of helpful review comments, as you have done in your post, in the form of an RG publication (say, "How to Make Helpful Review Comments"), some of us (baby researchers) would really appreciate it. Thanks again!
My best advice is to learn how to find resources systematically, not ad hoc, which is the most popular method and the most incorrect one! Spend time with resource librarians at your college or local library. There are some decent, accessible resources out there. Here are some of them:
1) Purdue University https://owl.english.purdue.edu/owl/resource/658/01/
2) For those for whom English is a second language http://www.grammar.com/
3) Rice University http://www.ruf.rice.edu/~bioslabs/tools/report/reportform.html
4) Tips on Researching http://netforbeginners.about.com/od/navigatingthenet/tp/How-to-Properly-Research-Online.htm?utm_term=Academic%20Research%20Guidelines&utm_content=p1-main-1-title&utm_medium=sem-rel&utm_source=msn&utm_campaign=adid-c4ed7036-9123-40e7-88f3-2648036d2afb-0-ab_msb_ocode-28813&ad=semD&an=msn_s&am=broad&q=Academic%20Research%20Guidelines&dqi=&o=28813&l=sem&qsrc=6&askid=c4ed7036-9123-40e7-88f3-2648036d2afb-0-ab_msb
5) Learn how to write an outline
http://grammar.about.com/od/mo/g/Outline-term.htm?utm_term=Academic%20Research%20Guidelines&utm_content=p1-main-4-title&utm_medium=sem-rel&utm_source=msn&utm_campaign=adid-c4ed7036-9123-40e7-88f3-2648036d2afb-0-ab_msb_ocode-28813&ad=semD&an=msn_s&am=broad&q=Academic%20Research%20Guidelines&dqi=&o=28813&l=sem&qsrc=6&askid=c4ed7036-9123-40e7-88f3-2648036d2afb-0-ab_msb
1. It is likely that an objective assessment using AI is the answer. The logic of mathematics has already been automated to prove theorems; if we add NLP and a structured input text template format, we could do away with human reviews (a rough sketch of such a first-level check is given below).
2. Rejected manuscripts could be published in a separate journal. Monitoring the number of references they accumulate could be used to move them back into regular journals.
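To illustrate the first point, here is a minimal sketch in Python of what an automated first-level screen might look like, assuming a plain-text manuscript; the section names, the reference-counting pattern, the threshold and the file name "manuscript.txt" are my own illustrative assumptions, not an existing tool, and such a script could only complement, never replace, human judgement.

import re

# Hypothetical first-level screening of a plain-text manuscript.
# Section names and the minimum reference count are illustrative assumptions.
REQUIRED_SECTIONS = ["abstract", "introduction", "methods", "results", "references"]
MIN_REFERENCES = 10  # arbitrary illustrative threshold

def first_level_check(text):
    lower = text.lower()
    # sections whose headings never appear anywhere in the text
    missing = [s for s in REQUIRED_SECTIONS if s not in lower]
    # count entries that look like "[12] ..." or "12. ..." in a reference list
    n_refs = len(re.findall(r"^\s*(?:\[\d+\]|\d+\.)\s+\S", text, re.MULTILINE))
    return {"missing_sections": missing,
            "reference_count": n_refs,
            "too_few_references": n_refs < MIN_REFERENCES}

with open("manuscript.txt") as f:  # hypothetical input file
    print(first_level_check(f.read()))

An editor could run something of this kind before involving human reviewers, so that purely formal problems are reported automatically and reviewer time is kept for the content.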
Cheers
In my view, we must abolish the outdated peer review system and replace it with healthy open-access reviewing of scientific papers, such as Faculty 1000, for uninterrupted scientific progress and for global health. The reasons behind my assertion are provided below.
I have worked all my life in the field of active ion transport mediated by the P2-ATPase system (H-pump, Na-pump and Ca-pump) across the animal cell plasma membrane, which plays an essential role in cellular homeostasis. In the early 1980s, while I was working at the SUNY Upstate Medical Center, Syracuse, New York, I noticed some serious limitations in the then-current working model, the 50-year-old and widely trusted Post-Albers hypothesis, and promptly published those unique observations in Biochem J. 233, 231-238, 1986. Being unaware of the basic flaws in the single-topology Post-Albers model, the scientists in the field were force-fitting their data in their own ways. Based on our careful investigations, we came up with a revised dual-topology model, with the two (100 kDa) alpha-subunits of the ATPase in mirror-image orientation, which works satisfactorily and explains all the hard-to-explain observations reported in the literature thus far. We published these findings promptly in a series of publications in reputed journals such as Biochem J., Biochemistry (ACS), JBC, BBRC and FEBS Letters. However, the established workers in the ion-transport field ignored our newly proposed model and continued using the old single-topology Post-Albers scheme, on which they had built their own careers by supporting each other. As a result, in spite of these vital contributions offering new directions, I could not get my NIH grant renewed to continue my research. I also submitted several fresh proposals on my exciting new findings but narrowly missed funding on each occasion. I had to quit the ion-transport field in 1992, but continued to watch the fortunes of the decaying but vital P2-ATPase field.
The P2-ATPase field came to a virtual halt in 2000 for lack of new ideas, and I decided to reintroduce my decades-old published and unpublished ideas to rejuvenate this field, which is critical for human health and welfare. During the next decade I wrote a number of articles showing the unique nature of our dual-topology model; the critical roles of the endogenous activator proteins and micromolar calcium as regulators of the P2-ATPase system; the novel allosteric nature of the dual-topology ATPase molecule during its interactions with various cations and ion channeling; the Ca-ATPase as the provisional pumping mechanism of the P2-ATPase system for pumping out excess Ca for homeostasis; and the in-and-out transport of Na, K, H and Ca through the ion channels within the P2-ATPase complex. Many of these papers are now posted on my RG page.
The detailed dual-topology P2-ATPase model was published in a book on ion-transporting ATPases in 2012 (see my publication list on RG). Alongside this, I presented many critical data at the APS 2012 (Ion Channel) and ASBMB 2014 (Transport ATPase) meetings in Asilomar, CA. I am sorry to say that not a single one of the experts in the transport-ATPase field came to my presentations. I also tried to publish the papers with the new insights mentioned above in various journals such as Biochemistry, JBC, Biochem J., Gastroenterology and Nature. My submissions were rejected outright by the editorial boards without even being sent to outside experts in the field. It became clear to me that it is impossible to get past the sitting "EXPERTS" on a journal's editorial board.
Meanwhile, I sent some of the papers to the "experts" in the P2-ATPase field, but failed to receive feedback from any of them. Prior to this, I had communicated in 2004 with Prof. John Forte (UCB) and Prof. Amir Askari (Ohio State), who by then recognized the limitations in the P2-ATPase field and helped me in many ways with my book article. While treasuring their comments, I thanked them profusely for standing out from the crowd.
Then, fortunately, in August 2013 I came across the newly founded Faculty 1000 open-access journal. My first article, "The parietal cell gastric H,K-ATPase also functions as the Na,K-ATPase and Ca-ATPase in altered states" [v2; F1000Research 2013, 2:165 (doi: 10.12688/f1000research.2-165.v2)], promptly came out with favorable comments from two reviewers, was viewed and downloaded by numerous scientists, and has very recently been cited by four different young investigators in the field.
Soon afterwards I submitted a second paper, Ray, T. (2013b) "Tissue-specific regulation of the Na,K-ATPase by the cytosolic NaAF: some thoughts on brain function" [v1; F1000Research 2013, 2:241 (doi: 10.12688/f1000research.2-241.v1), http://f1000r.es/23v]. However, this one is still awaiting peer review.
Subsequently I joined ResearchGate and published the rest of my ideas on RG. In the Q&A forum of RG I have been discussing various aspects of the P2-ATPase system in a systematic manner with many enthusiastic young members interested in this vital field. Thanks to RG, the P2-ATPase field is in the process of being rejuvenated. Recently, I presented on RG my new hypothesis on neurodegeneration and Alzheimer's disease, based on regulation of the Na,K-ATPase by its cytosolic endogenous 170 kDa activator protein (NaAF) and on Ca homeostasis in area-specific brain cells. My hypothesis has received due attention after the long-dominant amyloid theory was discarded very recently.
In conclusion, I am convinced that the open peer review system is the best way to carry out scientific research in the 21st century, keeping scientific inquiry free from personal prejudice.
@Jecinta Ndiombueze Anowu
1. Yes, it would have its limitations, I suppose. IMHO it would eventually lead to better overall acceptability, and it is quite feasible given so many new developments, e.g. IBM's Watson. Moreover, it does not exclude a human interface at appropriate levels. As a start, format errors, dimensions and references (perhaps additional ones) could all be checked automatically at the first level.
2. It has possibilities in the legal and healthcare fields as well. It might even lead to identifying new areas of research, given that we are in the age of driverless cars à la Google.
3. It might make a paper more readable by helping new researchers access the relevant background details they need. Some of the experiences mentioned above seem to support this idea.
Cheers
Yes, I agree with all the points you raised. Open scientific discussion will create better understanding and elucidation of events for all concerned. While responding to the points raised by the referees, I recalled some relevant data collected long ago that became very important for the F1000 paper. So the possibilities during the question-and-answer periods could be enormous. And the papers will be more interesting to readers as a result.
Cheers!
We can rebuild and maintain trust in the Peer Review process by honesty and integrity.
Yes, Dr. Kundu, we do. I am very hopeful about that due to the technological advances of recent years. The old peer review system may remain for critical review of older publications, but new ideas should be tested in an open forum for real advancement in any field.
Cheers!
Trust is at the heart of the peer review system, which operates on the assumption that authors, reviewers and all others involved are genuine and act in a transparent manner.
http://blogs.biomedcentral.com/bmcblog/2015/03/26/manipulation-peer-review/
In my opinion, the old peer-review process and the new open-review option both have their advantages and disadvantages; therefore they should be allowed to co-exist. The old system of double-blind peer review is good and has its benefits, except that it is subject to abuse by some reviewers. So I advocate integrity and honesty by all scientists, whether authors or reviewers. If not, we will later begin to see the problems and disadvantages of the new model of open review as well. I would have proposed criminal prosecution of person(s) involved in outright scientific fraud, but refrained from that because of the many issues involved.
Peer review and publication are time-consuming, frequently involving more than a year between submission and publication. The process is also highly competitive. For example, the highly-regarded journal Science accepts less than 8% of the articles it receives, and The New England Journal of Medicine publishes just 6% of its submissions.
http://undsci.berkeley.edu/article/howscienceworks_16
Here is the latest update from Elsevier on the peer review process, sharing the experience of Melissa Burke!
Peer review: how exactly do I do that?
Researcher Melissa Burke reflects on her experiences at a recent Sense About Science workshop on the topic!
Different models of peer review are covered, and recognition for peer reviewers is also treated in the article. Good experience shared!
http://www.elsevier.com/reviewers-update/story/peer-review/peer-review-how-exactly-do-i-do-that
Peer review is commonly accepted as an essential part of scientific publication. But the ways peer review is put into practice vary across journals and disciplines. What is the best method of peer review? Is it truly a value-adding process? What are the ethical concerns? And how can new technology be used to improve traditional models?
This Nature web debate consists of 22 articles of analyses and perspectives from leading scientists, publishers and other stakeholders to address these questions.
http://www.nature.com/nature/peerreview/debate/
To respond to Joseph's points: I think we have probably all had horrible experiences with reviewers. Just to exact some petty revenge by giving some examples: "My first impression of [article title] was that it was yet another unnecessary navel-gazing exercise that contributes nothing..." and "I don't have any good recommendations for how to improve the paper because I don't see a major contribution." While comments like this might make the reviewer feel good, they hardly help me as the author by giving me actionable advice; in fact, they temporarily demoralize me. Perhaps that is the point of such comments.
Part of the problem with the review system is that no credit is given for reviewing in promotion and tenure. It is usually lumped in with "service" and given the smallest amount of credit. Things will not improve until there is emphasis placed on reviewing. We have articles that provide guidance on reviewing but until people actually place some importance on doing it, you will not receive good reviews.
This lack of credit means that reviewing falls to the bottom of the priority pile. So you get reviews late, and frankly they are not very good. As an editor you have to deal with this. You don't want to antagonize the few reviewers you have by pressing for better reviews, because they will simply stop reviewing. So we are stuck with bad reviews. Joseph's ideas about open review will founder because of this problem. I was editor of a journal that attempted open reviews as Joseph describes, but we could not get reviews that way, as nobody was on the hook to do it. We had to revert to blind reviews. Also, journal listing services such as Cabell's require blind reviews for listing, which is another pressure to continue the current review process.
Blind reviews are meant to provide objectivity by forcing the reviewer to deal with the paper rather than the author's reputation. However, this also provides cover for mean-spirited comments such as those described above. Open reviewing introduces this problem back into the mix.
A potential suggestion is to recognize that the current reviewing system is a relic of the printed-journal era. It is often difficult to know what a reviewer is talking about from the printed word alone; I have often wished that I could talk to the reviewer to get their opinion in plain terms. We could move to a system where the reviewers and the authors hold a conference call: the reviewers, having read the paper, have it presented to them again by the authors, then give their comments on how to improve the paper, followed by responses from the authors. After several rounds of this, the authors would have a good idea of how to improve the paper, the "cheap shot" review comments would be eliminated, and a very substantive, helpful review would result.
Professional peer review focuses on the performance of professionals, with a view to improving quality, upholding standards, or providing certification. In academia, peer review is common in decisions related to faculty advancement and tenure.
http://riaus.org.au/articles/a-brush-without-peer-review/
Real peer review involves replication. But does it really happen? There is too much haste, too many interests, too much competition.
Science has lost its naivety in these troubled times.
@Michael
I agree there is a problem with reviewer credit and with finding competent reviewers. Worse is finding competent editors. Editors have their names listed with the journal. Being an editor does not ensure competency; it does ensure that someone wants their name listed. Promotion and tenure are the problem with credit. Those who step forward are the self-promoters. Those who must publish to be promoted will publish and publish and publish. It is not surprising that there are unnecessary navel-gazing exercises that contribute nothing. Unfortunately, there is a proliferation of journals which ensures that all contributions are small, incremental contributions, if contributions at all. This exacerbates the problems of peer review, but does not excuse them.
You say you have tried open review. How open? I propose that the publication be openly posted on a website as a 'publication for review.' Anyone may comment, with full, verifiable identification. All comments become part of the record. All resolutions become part of the record. Any improper or irrelevant comments may be deleted, but are kept in a file for reference. A document moves from review to published upon agreement between author and editor. Note: a document may be updated after publication and given a new issue date.
Many more details could be discussed, but some form of open review is necessary to cure the current problems of peer review and the enormous number of empty publications (a rough sketch of such a review record follows below).
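For what it is worth, here is a minimal Python sketch of the kind of record such an open-review workflow implies; the field names, the two states and the identity check are my own assumptions about how the description above could be structured, purely as an illustration, not a specification.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Illustrative data model: every comment carries a verifiable identity,
# improper comments are archived rather than erased, and the document
# moves from "for review" to "published" only by author-editor agreement.

@dataclass
class Comment:
    commenter: str                     # full, verifiable identification (a plain string here)
    text: str
    resolution: Optional[str] = None   # author/editor response, kept with the record
    archived: bool = False             # improper/irrelevant comments are archived, not deleted

@dataclass
class Document:
    title: str
    author: str
    status: str = "for review"         # "for review" -> "published"
    issue_date: Optional[date] = None
    comments: List[Comment] = field(default_factory=list)

    def add_comment(self, commenter, text):
        if not commenter.strip():
            raise ValueError("comments require full identification")
        self.comments.append(Comment(commenter, text))

    def publish(self, author_agrees, editor_agrees):
        # publication requires agreement between author and editor;
        # all comments and their resolutions remain part of the record
        if author_agrees and editor_agrees:
            self.status = "published"
            self.issue_date = date.today()

The point of the sketch is only that nothing is ever lost: the comments, their resolutions and the publication decision all live in one permanent, open record.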
Whatever that means, I have been asked three times so far to peer review articles for the Annals. Twice I agreed; once I declined, as I simply did not have the time to do a proper job. Elsewhere on RG I proposed giving IF-like credits to peer reviewers: the editor would assign you 0 (for a non-contributing review) up to, say, 3 credits for an eye-opening peer review job. You could then link this credit multiplier to the IF, or whatever metric you want, and count it (a small sketch of such a tally is given below). I, for one, would have been happy to get feedback from an editor of a journal like the Annals. So far, I just have to assume my work was appreciated, as I got re-invited...
Make the peer review count - that is just what I would stand up for. Let's see which 'peer' still lets his student write some blah blah once he sees himself credited with a zero.
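As an illustration only, the tally could be counted as in the following minimal Python sketch, assuming the 0-3 scale above and a simple weighting by the journal's impact factor; the weighting scheme and the function name are my own assumptions, not an established metric.

# Illustrative reviewer-credit tally: editor-assigned credit (0-3) per review,
# weighted by the journal's impact factor (IF). The weighting is an assumption.
def reviewer_credit(reviews):
    """reviews: list of (impact_factor, credit) pairs, with credit in 0..3."""
    total = 0.0
    for impact_factor, credit in reviews:
        if not 0 <= credit <= 3:
            raise ValueError("credit must be between 0 and 3")
        total += impact_factor * credit
    return total

# example: three reviews for journals with IFs 2.5, 4.0 and 1.2
print(reviewer_credit([(2.5, 2), (4.0, 3), (1.2, 0)]))  # 17.0

Any other weighting (or none at all) would work just as well; the essential point is simply that the editor's assessment of the review is recorded and countable.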
Reviewers need to be objective and must acknowledge creativity and refrain from imposing their own views.
I'd be against criminalising any abuse of the system unless it fell within existing legislation (e.g. defamation, hate speech, etc.). I'd be in favour of signed reviews (i.e. non-anonymised reviews), since I find the whole business of anonymity to be against the spirit of collegiality. (If one can only 'speak one's mind' under cover of anonymity, then in my view it is better to hold one's peace.) Anonymity also withholds from the person whose work is being reviewed information which may be important in interpreting the reviewer's comments (e.g. the status, gender, institutional affiliation, etc. of the reviewer). I'd also like to see editors requiring reviewers to offer constructive criticism and rejecting any reviews that they deem to be destructive, patronising, etc. I should add that in my experience the standard of peer reviewing is generally very high - marginally better, I think, for academic journals than for book proposals.
Elsevier Publishing Campus: Working together to train authors and reviewers!
Its two major goals are described in this good article about the issue.
http://editorsupdate.elsevier.com/issue-49-november-2015/elsevier-publishing-campus-working-together-to-train-authors-and-reviewers/
Reviewing scientific papers is not paid work; reviewers volunteer their time to do this job. Publishers choose the reviewers from among the experts in the various fields, so in that sense we have to count on the reviewer's integrity, and we certainly should not prosecute a reviewer for what we think is abuse of the peer review process. Nevertheless, a more transparent reviewing process is always recommended.
We keep hearing that reviewing papers is not a paid job, as if researchers were not paid, and as if reviewing papers were not part of their job as researchers.
The newest article from Elsevier about reviewing!
How to review manuscripts? Your ultimate checklist when reviewing a paper.
https://www.elsevier.com/reviewers-update/home/featured-article/how-to-review-manuscripts