Given the remarkable advancements AI has brought to academic writing, is it time for researchers to focus more on research (methodology) and let AI handle the writing?
Yes, I believe so. Not every researcher has the skill to write a clear, well-structured, and readable scientific article. One may be an excellent scientist, with solid methodological expertise and a valuable message to share with the world, yet lack the communication skills to convey it effectively. In such cases, AI can help fill that gap: not by replacing the research, but by giving form and clarity to ideas that might otherwise remain obscure or poorly expressed.
Moreover, even on the methodological side, AI is beginning to play an increasingly central role. Just a few weeks ago, a paper was published listing Claude 4, an AI model, as its first author (https://arxiv.org/html/2506.09250v1). The paper rigorously and convincingly rebutted a study conducted by Apple researchers, using sound engineering and scientific methods. This is a clear indication that AI is not just a tool for writing, but is progressively becoming an agent of analysis, critique, and even discovery.
Perhaps in the future we will see more AI systems listed as co-authors — or even lead authors — on scientific publications. The real question, then, might be: will our ego be able to accept this shift?
Thank you for your detailed response, Gianluca Mondillo. The example of Claude 4 as a lead author is especially thought-provoking. Your closing question captures a deeper challenge we face as researchers and educators in adapting to these transformative changes. Thank you again for sharing such a compelling perspective.
While AI tools have become incredibly good at enhancing academic writing, fully entrusting the writing process to them misses a crucial point: writing is not separate from research—it’s part of how we think.
AI doesn’t generate original insight. It rephrases, enhances, or expands on what we give it. If we input clear arguments or structured prompts, it will return coherent text. But the quality of the output depends entirely on what we feed it.
Now consider a case where we’re researching something novel—say, a topic on which no online content exists. You’ve done site visits, collected local data, maybe even recorded observations. How is AI going to write that up meaningfully unless you give it those raw inputs? Are we going to upload images, transcripts, and GPS logs into a prompt? Even then, it will merely assemble language—it won’t interpret, critique, or synthesize the way a researcher does.
Even when AI writes a “deep” article, the moment you correct its logic or math, it adapts: “You're right, let me fix that.” That’s not reasoning—it’s a probabilistic correction loop.
So yes, we can and should use AI to support the writing process. It can save time, help polish phrasing, and even catch blind spots. But it cannot replace the act of writing as a thinking process. Relying entirely on AI risks turning research communication into something that sounds good but lacks substance.