No, not at all. AI will never totally replace peer reviewers, but AI tools may help in the peer review process. We already have AI applications in practice in publishing.
Recent advances in artificial intelligence (AI) create the potential for (semi-)automated peer review systems, in which potentially low-quality or controversial studies could be flagged and reviewer-document matching could be performed automatically. However, such approaches raise ethical concerns, particularly around bias and the extent to which AI systems may replicate it. Our main goal in this study is to discuss the potential, pitfalls, and uncertainties of using AI to approximate or assist human decisions in the quality assurance and peer-review process associated with research outputs...
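To make the reviewer-document matching idea concrete, here is a minimal sketch using TF-IDF cosine similarity between a manuscript and reviewers' publication histories. The approach, names, and data below are illustrative assumptions; the study quoted above does not specify its method.

```python
# Illustrative sketch of automated reviewer-document matching via
# TF-IDF cosine similarity. All names and data here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical reviewer profiles: concatenated abstracts of each
# reviewer's own publications.
reviewer_profiles = {
    "reviewer_a": "graph neural networks for molecular property prediction",
    "reviewer_b": "randomized controlled trials in clinical epidemiology",
}

def rank_reviewers(manuscript: str, profiles: dict) -> list:
    """Rank candidate reviewers by textual similarity to the manuscript."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on the manuscript plus all profiles so they share one vocabulary.
    matrix = vectorizer.fit_transform([manuscript, *profiles.values()])
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    return sorted(zip(profiles, scores), key=lambda p: p[1], reverse=True)

print(rank_reviewers("message passing on molecular graphs", reviewer_profiles))
```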
Nevertheless, the machine-learning system was often able to successfully predict the peer review outcome: we found a strong correlation between such superficial features and the outcome of the review process as a whole.
We have seen how tools could be developed on the basis of such systems and used to make the quality control and peer review process more efficient. We have also seen how such tools could be used to gain insight into the reviewing process: our results suggest that such tools can create measurable benefits for scientometric studies because of their explainability...
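As a concrete illustration of predicting review outcomes from superficial features, and of the explainability the authors mention, here is a minimal sketch. The feature set, model choice, and toy corpus are assumptions made for this example; they are not reproduced from the quoted study.

```python
# Illustrative sketch: predict peer-review outcomes from superficial
# manuscript features, then inspect coefficients for explainability.
# Features, model, and toy data are invented for this example.
import re
from sklearn.linear_model import LogisticRegression

def superficial_features(text: str) -> list:
    words = text.split()
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    return [
        len(words),                            # overall length
        len(words) / max(len(sentences), 1),   # average sentence length
        text.count("["),                       # crude in-text citation count
    ]

# Toy stand-ins for a journal's historical submissions and decisions
# (1 = accepted, 0 = rejected); real labels would come from editorial records.
manuscripts = [
    "We propose a method [1] [2]. Results improve over baselines [3].",
    "stuff happens and then more stuff happens and it is all quite long",
]
outcomes = [1, 0]

clf = LogisticRegression(max_iter=1000).fit(
    [superficial_features(t) for t in manuscripts], outcomes
)
# Explainability: which superficial feature pushes predictions which way.
print(dict(zip(["length", "avg_sentence_len", "citations"], clf.coef_[0])))
```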
The Future of Peer Review Is Human-Machine Hybrid-augmented Intelligence!
The integration of AI tools into the peer review process can be beneficial in assisting with certain tasks, such as language editing and conflict-of-interest detection. However, the use of AI tools must be continually evaluated and responsibly implemented to ensure that they are not perpetuating biases or impacting the quality and reliability of scholarly literature. The expertise and judgment of human reviewers will always be essential in ensuring the rigor and dependability of the peer review process, and AI should continue to be viewed as a complementary tool rather than a replacement...
The dangers of using large language models for peer review
The real risk here is that the LLM produced a review report that looks properly balanced but contains no specific critical content about the manuscript or the described study. Because it summarises the paper and methodology remarkably well, it could easily be mistaken for an actual review report by those who have not fully read the manuscript. Even worse, the specific but unrelated comments could be perceived as grounds for rejection. Therefore, it is important that all participants in the peer-review process remain vigilant about the use of LLMs. Editors should make sure that comments in review reports truly relate to the manuscript in question; authors should be ready to challenge reviewer comments that seem unrelated; and, above all, reviewers should refrain from using LLM tools...
ChatGPT may provide answers to specific questions but lacks the ability to evaluate the overall quality and relevance of research...
This is why human editors, who can offer a fresh perspective and constructive criticism, remain important. Human editors can provide invaluable feedback by identifying weaknesses in the argument, suggesting alternative approaches, and challenging assumptions that may be limiting the researcher’s thinking. This type of feedback is essential for pushing researchers to think beyond their own biases and assumptions, and to consider alternative perspectives that can lead to new insights and discoveries.
Moreover, human editors are able to identify issues that may not be immediately apparent to the researcher. This includes inconsistencies in the data or analysis, as well as gaps in the literature review. By bringing these issues to the researcher’s attention, human editors can help improve the quality and impact of the research...
With the rise in use of artificial intelligence within research and publication practices, there are interesting conversations to be had in the educational technology publishing space about how AI can be embraced without undermining the spirit and quality of academic publishing expectations. We have made it clear in our guidelines for the use of AI for AJET that AI is not to be used to generate peer reviews. However, more work is needed to better understand how AI might be used in ethical and useful ways to improve the editorial process. Whether that extends to assistance with peer review in the future is something we will continue to consider as the impact of AI on academic publishing becomes more apparent...
The debate on whether generative AI should be permissible for peer review has raged for most of 2023, and in recent months key funders have announced their stance. Foremost among them is the National Institutes of Health (NIH), the largest funder of medical research in the world. In June 2023, the NIH banned the use of generative AI during peer review, citing confidentiality and security as primary concerns; a Security, Confidentiality and Nondisclosure Agreement stipulating that AI tools are prohibited was then sent to all NIH peer reviewers. The Australian Research Council followed quickly afterwards with a similar ban. Other funding bodies, such as the United States’ National Science Foundation and the European Research Council, currently have working groups developing position statements regarding generative AI use for peer review...
Publishers, however, are placed in a unique position. Some journals have proposed adopting generative AI tools to augment the current peer review process and to automate some tasks currently completed by editors or reviewers, which could meaningfully shorten the time required to complete a thorough peer review. Currently, few publishers have posted public position statements regarding the use of generative AI during peer review; an exception is Elsevier, which has stated that book and commissioned content reviewers are not permitted to use generative AI due to confidentiality concerns. The future of generative AI integration into journals’ manuscript evaluation workflows remains unclear...
There's also the possibility that AI becomes so good that it actually can do peer review. Of course, nobody believes that right now, but we also didn't believe that OpenAI would be at the stage it is today. ChatGPT is passing college exams.
The challenge, though, is that AI algorithms can inherit biases from the data they're trained on. That could lead to even more bias, such as biased reviewer recommendations. We have to make sure we're working to eliminate that and reduce unintended bias.
There are also ethical considerations around privacy, data security, and transparency. Authors and reviewers need to be aware of how their data is being used and who has access to it.
And there are some things AI tools are still not capable of doing — evaluation that you need human judgment for. AI algorithms can't yet determine what's novel or groundbreaking. They’ve been trained on existing research, and it's new discoveries we're looking for...
Artificial intelligence and machine learning software are developed to catch common errors or shortcomings, allowing peer reviewers to focus on more conceptual criticism, such as the paper’s novelty, rigor, and potential impact (a sketch of one such rule-based check appears after the list below). This strategy is more widely seen in humanities and social sciences research.
Pros: Makes more efficient use of peer reviewers’ time; improves standardization of review; can automate processes like copyediting or formatting
Cons: Requires extensive upfront cost and development time as well as ongoing maintenance; prone to unintentional bias; ethically dubious; requires human oversight...
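To make this concrete, here is a minimal sketch of the kind of rule-based screening described above. The checks and thresholds are invented for illustration and are far cruder than what a production tool would use.

```python
# Illustrative rule-based manuscript screening: flag mechanical
# shortcomings so human reviewers can focus on novelty, rigor, and
# impact. The rules and thresholds below are invented examples.
import re

CHECKS = {
    "missing ethics statement": lambda t: "ethics" not in t.lower(),
    "missing data availability statement": lambda t: "data availability" not in t.lower(),
    "unresolved placeholders (TODO/XXX)": lambda t: bool(re.search(r"\bTODO\b|\bXXX\b", t)),
    "very long sentences (>60 words)": lambda t: any(
        len(s.split()) > 60 for s in re.split(r"[.!?]", t)
    ),
}

def screen(manuscript_text: str) -> list:
    """Return human-readable flags for an editor's triage queue."""
    return [name for name, failed in CHECKS.items() if failed(manuscript_text)]

print(screen("We present results. TODO: add the statement on data sharing."))
# -> flags the missing ethics and data-availability statements and the TODO
```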
A Critical Examination of the Ethics of AI-Mediated Peer Review
Recent advancements in artificial intelligence (AI) systems, including large language models like ChatGPT, offer promise and peril for scholarly peer review. On the one hand, AI can enhance efficiency by addressing issues like long publication delays. On the other hand, it brings ethical and social concerns that could compromise the integrity of the peer review process and outcomes. However, human peer review systems are also fraught with related problems, such as biases, abuses, and a lack of transparency, which already diminish credibility. While there is increasing attention to the use of AI in peer review, discussions revolve mainly around plagiarism and authorship in academic journal publishing, ignoring the broader epistemic, social, cultural, and societal contexts in which peer review is positioned. The legitimacy of AI-driven peer review hinges on alignment with the scientific ethos, encompassing moral and epistemic norms that define appropriate conduct in the scholarly community. In this regard, there is a "norm-counternorm continuum," where the acceptability of AI in peer review is shaped by institutional logics, ethical practices, and internal regulatory mechanisms. The discussion here emphasizes the need to critically assess the legitimacy of AI-driven peer review, addressing the benefits and downsides relative to the broader epistemic, social, ethical, and regulatory factors that sculpt its implementation and impact...
Amid bans and restrictions on their use, artificial intelligence tools are creating interest among those who see a solution to systemic peer-review woes...
Debate over the use of artificial intelligence, already touching everything from admissions to grading, has reached peer reviewing, as academics balance technological uncertainty and ethical concerns with potential solutions for persistent peer-review problems...
What are the implications of using AI for peer review when it becomes harder and harder to get high-quality feedback...
The use of AI could be seen as a simple Band-Aid over a wider issue in the peer-review world...
"It remains important to distinguish AI – which involves the software ‘learning’ from its processes to create capabilities that equal or exceed a human’s – from automation. Automation allows a machine to carry out predetermined tasks with specified outcomes and is already used routinely in peer review to do things like identifying whether all ethical declarations are present, checking that citations are correctly presented, and screening for poor language use. These automated tools can have many benefits, not least in saving editors time and by improving scholarly language in a whole range of cases where linguistic access could be a barrier. This study gives some more examples of the benefits of AI. However, we are now at the stage where artificial intelligence can carry out tasks which require creativity and judgement, such as recommending acceptance or rejection, creating reviewer reports, and identifying cases of image manipulation, duplication, and plagiarism. This is where the ethical issues really come to the fore..."