In the age of generative AI, editors and reviewers closely evaluate the introduction and literature review for transparent disclosure of AI assistance. Authors are required to state clearly how these tools were used and to ensure active human oversight. Reviewers meticulously verify the accuracy of citations, since AI is known to hallucinate, producing fabricated or misleading references. Equally important is evidence of critical, original insight, as AI-generated text often lacks depth and reflective nuance. Ultimately, human judgement and scholarly integrity must drive the evaluation, with AI remaining a support, not a substitute, for rigorous academic writing.
In this era of generative AI, editors and reviewers must closely evaluate the introduction and literature review for transparent disclosure of any AI assistance in your study. And no matter how new a phenomenon or idea sounds, someone, somewhere, somehow must have said something about it.
Dogo (2002:57), as cited in Babatunde (2016), states: "somehow, somewhere, somebody must have said or done something that relates to your work". Scanning academic journals, books, and peer-reviewed studies for the keywords in your study is therefore necessary to understand its focus.
Ref:
Babatunde, A. I. (2016). Assessing the dependency of newspapers on the News Agency of Nigeria (Chap. 3). In Taking stock: Nigerian media and national challenges (ACSPN Book Series 1). https://www.acspn.com.ng/wp-content/uploads/2020/04/Chapter-3-Assessing-the-Dependency-of-Newspapers...Abdulfatai-Babatunde.pdf
The proliferation of generative AI necessitates a rigorous reevaluation of editorial and peer review protocols, particularly within the Introduction and Literature Review sections of academic manuscripts. In an era where algorithmic text generation can emulate scholarly language with increasing sophistication, reviewers must be equipped to discern genuine intellectual engagement from algorithmically produced mimicry (Van Dis et al., 2023). Upholding the principles of academic integrity and originality demands heightened scrutiny and refined evaluative criteria.
The Literature Review, in particular, warrants intensified critical attention. Rather than serving as a catalog of sources, it must function as a coherent and interpretive synthesis that situates the proposed research within a well-defined scholarly context. Reviewers must distinguish between superficial aggregation—a common feature of AI-generated output—and the deliberate construction of a narrative that engages with theoretical tensions, methodological debates, and empirical findings (Gilson et al., 2023). This synthesis, characterized by a distinctive authorial voice and a depth of critical analysis, remains a hallmark of human scholarship and a central criterion for evaluating authentic academic contribution.
References:
Gilson, A., Safranek, C. W., Huang, T., Socrates, V., Chi, L., Taylor, R. A., & Chartash, D. (2023). How does ChatGPT perform on the United States Medical Licensing Examination (USMLE)? The implications of large language models for medical education and knowledge assessment. JMIR Medical Education, 9, e45312. https://doi.org/10.2196/45312
van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: Five priorities for research. Nature, 614(7947), 224–226. https://doi.org/10.1038/d41586-023-00288-7
I really liked this response, Ashley Covert. On a whim, I ran it through a free AI checker, which told me it was 100% AI-generated, so I ran it through 7 or 8 more, which all gave me the same result. I am not here to make an accusation (AI checkers are not particularly reliable), but it does get to the crux of the debate: how can reviewers determine what is AI-generated and what is not? Either the response is, ironically, generated by AI, or it is not and the AI checkers merely think that it is. So how are reviewers supposed to work it out?
Shannon Mason The proliferation of AI-generated content presents a significant challenge to academic integrity, revealing the profound limitations of automated detection systems and compelling reviewers to adopt more sophisticated evaluative techniques. The unreliability of these tools is starkly illustrated by the inconsistent analysis of a single document, which yielded an 83% similarity score from one platform due to administrative metadata, yet only a 12% score from Blackboard's SafeAssign. This discrepancy underscores a critical flaw: algorithmic systems cannot differentiate substantive academic discourse from formulaic boilerplate, compromising their validity and necessitating a more rigorous, human-centric model of evaluation (Mishra, 2024; Perkins et al., 2024; Smith, 2023; Yeo, 2023).
For reviewers, a necessary technique involves a multi-faceted, forensic assessment moving beyond simplistic metrics. This human-centric approach is composed of four key investigative practices: a critical analysis of the authorial voice to detect the generic quality of AI text versus the nuanced signature of human writing; diligent verification of factual and source integrity to identify AI "hallucinations" or fabricated citations; a longitudinal stylistic comparison to note any abrupt, inexplicable improvements in an author's established work; and a process-oriented interrogation to determine if the author can genuinely articulate and defend their intellectual journey and methodological choices. An author’s inability to engage in such a dialectical examination often betrays a superficial or nonexistent role in the text's creation.
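One small, mechanical part of the source-integrity check described above can be partially automated: pre-screening a reference list for entries that carry no DOI, or a syntactically malformed one, before attempting to resolve anything. The Python sketch below is illustrative only (the function name and the DOI pattern are my assumptions, not part of any cited tool), and a syntactically valid DOI can still belong to a fabricated reference, so human resolution and comparison of the metadata remains essential:

```python
import re

# Pattern for modern DOIs: "10.", a 4-9 digit registrant code,
# a slash, then a suffix of commonly allowed characters.
DOI_PATTERN = re.compile(r'^10\.\d{4,9}/[-._;()/:A-Za-z0-9]+$')

def screen_dois(references):
    """Return the entries whose extracted DOI is syntactically invalid,
    or that contain no DOI at all. Passing this screen is necessary,
    not sufficient: each DOI must still be resolved and its metadata
    checked against the citation by a human reviewer."""
    flagged = []
    for ref in references:
        match = re.search(r'10\.\d{4,9}/\S+', ref)
        doi = match.group(0).rstrip('.') if match else None
        if doi is None or not DOI_PATTERN.match(doi):
            flagged.append(ref)
    return flagged

refs = [
    "van Dis et al. (2023). Nature. https://doi.org/10.1038/d41586-023-00288-7",
    "Smith, R. (2023). Academic integrity in the age of AI. EDUCAUSE Review.",
]
print(screen_dois(refs))  # flags only the entry without a DOI
```

This only catches the crudest fabrications; its value is in reducing the list a reviewer must then verify by hand against the actual publisher records.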
This evaluative technique repositions automated tools not as arbiters of authenticity but as supplementary aids, emphasizing that the final authority in academic assessment must remain with human discernment. An over-reliance on flawed systems without rigorous human oversight directly threatens pedagogical and scholarly standards (Bittle & El-Gayar, 2025). Ultimately, only the reviewer’s critical judgment can provide the fair, precise, and contextually aware evaluation that authentic intellectual work demands, ensuring that the integrity of the academic review process is upheld in the age of AI.
Ref:
Bittle, K., & El-Gayar, O. (2025). Generative AI and academic integrity in higher education: A systematic review and research agenda. Information, 16(4), 296. https://doi.org/10.3390/info16040296
Mishra, S. (2024). Enhancing plagiarism detection: The role of artificial intelligence in upholding academic integrity. University of Nebraska Repository. https://digitalcommons.unl.edu/libphilprac/7809/
Perkins, M., Roe, J., Vu, B. H., Postma, D., Hickerson, D., McGaughran, J., & Khuat, H. Q. (2024). GenAI detection tools, adversarial techniques and implications for inclusivity in higher education. arXiv. https://doi.org/10.48550/arxiv.2403.19148
Smith, R. (2023). Academic integrity in the age of AI. EDUCAUSE Review. https://er.educause.edu/articles/sponsored/2023/11/academic-integrity-in-the-age-of-ai
Yeo, M. A. (2023). Academic integrity in the age of artificial intelligence (AI) authoring apps. TESOL Journal, 14(3). https://doi.org/10.1002/tesj.716
Regarding the important issue of differentiating one's own words, spoken or written, from the words, phrases, sentences, and ideas appropriated, cited, or borrowed from other speakers and writers: the use of quotation marks is a standard across all research fields. In the social sciences and humanities, APA style is frequently employed. The Internet contains many sources of information helpful to researchers facing the situations raised in this RG discussion thread (see above). See, for example, the following LINK, which is excerpted below:
Paraphrasing: When summarizing or expressing ideas from a source in your own words, an in-text citation including the author's last name and the year of publication is required. Example: (Thompson, 2014)
Direct quotes: When quoting directly from a source (under 40 words), enclose the quote in double quotation marks and include the author's last name, publication year, and page number in the in-text citation. Example: "Quoted text" (Author, Year, p. Page Number)
Block quotes: For direct quotes of 40 words or more, format them as block quotes (indented, no quotation marks) and include the citation (author, year, page number) in parentheses after the quote.
Reference list: All cited sources are listed in a separate reference list at the end of the paper.
Use of AI-generated material in psychology (APA style)
Generally permitted but requires transparency: AI-generated material is generally permitted, but transparent disclosure is essential.
Citing AI: When using AI-generated content (including text, images, or data), cite the AI tool used, typically listing the developer as the author and providing details about the interaction. Example: (OpenAI, 2023) when quoting from ChatGPT
Disclosure Statement: Include a disclosure statement in your paper outlining the AI tools used and their specific role in generating content or assisting with research.
Vetting for Accuracy: Verify the accuracy and reliability of AI-generated content as AI tools may not always be factually correct and could even create fake citations.
AI as an author: AI tools cannot be listed as authors on publications because they do not meet authorship criteria.
Important considerations
Instructor guidelines: Always check with your instructor for specific policies regarding AI use in assignments, as policies may vary.
Academic integrity: Understand and adhere to the ethical guidelines surrounding the use of AI in academic writing, focusing on transparency, responsible use, and avoiding plagiarism.
AI detection software: Be aware that AI detection software may be used to screen assignments for AI-generated content. Keeping drafts can be helpful to demonstrate your own writing process.