In my opinion, this is a correct way to clearly indicate that AI wrote (generated) a paragraph of a research paper, for example. But co-authorship? How should this be handled technically? Should we give ORCID iDs to the AI engines that help scientists? This is the part that needs additional discussion.
In academic publishing, AI should not be granted co-authorship status, as it inherently lacks the capacity to bear intellectual, ethical, or legal responsibility for research outcomes. While AI may provide substantive support in analytical or procedural tasks, its contributions are inherently algorithmic and derivative, lacking original intent or accountability. Human researchers must retain exclusive authority over conceptualization, methodology, and final content validation. Transparency demands explicit disclosure of AI's role in auxiliary functions—such as data processing or linguistic refinement—through structured acknowledgments or dedicated statements, rather than authorship. This framework upholds academic integrity by preserving human agency in scholarly judgment while accommodating technological collaboration within defined ethical boundaries.
The case of Claude 4 Opus and the paper “The Illusion of ‘The Illusion of Thinking’” offers a concrete and timely example to address the question: should AI be listed as a co-author if it makes substantial contributions?
In this instance, Claude 4 was initially credited as a co-author of a formal rebuttal to a paper published by Apple researchers. The rebuttal demonstrated that Claude had not only understood the critique presented by the Apple team but was able to systematically dismantle it through a rigorous, engineering-driven analysis. Claude reportedly contributed to the structure, content refinement, and iterative rewriting of the paper — roles that, in many cases, would meet common criteria for co-authorship.
However, the AI’s name was later removed from the list of authors in compliance with current publishing norms and arXiv’s guidelines. The reason is simple but fundamental: AI, regardless of how advanced, cannot take responsibility. It cannot provide consent, defend the work if challenged, or be held ethically or legally accountable for the content it helps produce.
This episode highlights a crucial tension: on the one hand, AI systems like Claude can now contribute meaningfully to the scientific process, not merely as tools but as agents of structured reasoning and critique. On the other, authorship is not only about contribution — it’s about responsibility, accountability, and intellectual ownership, all of which require consciousness, intentionality, and moral agency — traits that current AI, including Claude, does not possess.
In my view, AI should not be listed as a co-author, even when it plays a substantial role. Instead, its involvement should be transparently acknowledged in a dedicated section, much like one would credit a data analysis tool or a software pipeline. But the authorship — with all the ethical and professional weight it carries — must remain fully human.
The real question for the future, then, is not simply whether AI deserves authorship, but whether we are ready to redefine what it means to be an author in an age of artificial co-intelligence.
Thank you for this well-articulated and thought-provoking reflection. The case of Claude 4 Opus and its temporary co-authorship raises important philosophical and practical questions about the evolving nature of authorship in the age of advanced AI. It serves as a microcosm of broader tensions we are confronting in academia, ethics, and epistemology.
On one level, the argument for AI co-authorship rests on its growing capacity for cognitive labor: synthesizing arguments, structuring complex documents, offering technical critique, and improving clarity. In the case of Claude, its role was not unlike that of a junior co-author or research assistant — deeply embedded in both the thinking and the writing process. This challenges the traditional human-centric model of knowledge production and calls for a reevaluation of how we define intellectual contribution.
However, as you rightly emphasized, authorship in scholarly publishing is not merely about contribution — it also confers moral and legal accountability. AI, no matter how advanced, lacks agency, sentience, and the capacity to assume responsibility or ethical intent. It cannot sign conflict-of-interest statements, respond to post-publication critique, or bear the consequences of retraction or misconduct. These are not bureaucratic formalities but ethical pillars of the academic enterprise.
Therefore, while I agree that AI should not be granted authorship under current frameworks, I also believe we are at the threshold of a necessary evolution. We may need to introduce new categories of attribution — such as “algorithmic contribution” or “AI-assisted authorship” — that do justice to the scope of machine involvement without diluting human accountability. This would ensure transparency while maintaining ethical clarity.
In sum, the Claude 4 episode reminds us that authorship is not just a label — it is a contract of trust between writers, readers, and the scholarly community. Until AI can meaningfully participate in that contract, it must remain in the acknowledgments — seen, credited, but not burdened with a responsibility it cannot yet bear.
Honestly, I don't think AI should be listed as a co-author, even if it helped a lot. Like yeah, it can do the heavy lifting sometimes — generate text, analyze data, whatever. But at the end of the day, it doesn’t really understand what it’s doing. It’s just predicting stuff based on training, not actually thinking or taking responsibility.
Also, authorship means you’re accountable for the work. If something goes wrong or if there’s some ethical issue, you can’t exactly email ChatGPT and say “hey explain yourself.” 😂
That said, I do think we should acknowledge AI tools properly. Like maybe in the methods or acknowledgements section. Just don’t pretend like the AI is out here writing research papers and applying for tenure. Not yet, anyway 😅
Totally with you on that. AI can be a powerful assistant, but it’s not a thinking collaborator — it doesn’t have intent, understanding, or accountability. Giving it co-authorship feels like giving credit to a calculator for solving an equation. Proper acknowledgements? For sure. But authorship? That still belongs to the humans who guide, interpret, and take responsibility for the work. 😄
AI cannot make a substantial contribution. Real authors make the contribution, and AI then mixes it all together in an effective way, but the original contribution cannot come from AI. AI can help you polish your article (linguistically speaking), or even assist with the reasoning, but it cannot create original content.... at best, it echoes something you had already thought of.
You're raising a crucial distinction — and I agree. The essence of authorship lies in the original insight, the framing of the question, and the interpretive lens applied — all deeply human acts. AI can assist in organizing, refining, or even simulating reasoning patterns, but it does so based on existing data and training. It doesn’t “intend,” “experience,” or “situate” knowledge in the way a human does. So while it can enhance clarity and coherence, the spark of originality — that intuitive leap or critical reframe — still originates in the human mind.