Why Do Scientific Journals Reject the Use of Artificial Intelligence in Research Writing?

In an era of rapid innovation and technological advancement, artificial intelligence (AI) is transforming nearly every field. Yet, many scientific journals remain hesitant—if not outright resistant—to its use in research writing and preparation. This resistance raises a legitimate question: Is it reasonable to reject a tool designed by humans to assist them, simply because it reduces the traditional burdens of academic work?

AI: A Blessing or a Threat?

It is perfectly logical for AI to play a significant role in facilitating researchers’ tasks. Tools like Anara.ai, ChatGPT, and Scite.ai now allow researchers to summarize articles, analyze data, generate ideas, and even draft early versions of research papers. These technologies do not replace the researcher—they enhance their capabilities, saving time and energy that can be redirected toward deep analysis and critical thinking.

However, scientific journals are concerned that heavy reliance on AI might compromise the “authenticity” of the research. Was the paper truly written by the researcher? Were the findings genuinely interpreted through human intellect? These concerns are not unfounded, especially given the increasing difficulty of distinguishing human-generated content from that created by algorithms.

Is Effort Still the Measure of Quality?

There seems to be an unspoken belief that valuable research must come from significant personal effort and intellectual labor, as if the quality of research depended solely on how hard the researcher worked. This mindset is outdated in a world where technology can streamline and simplify many tasks. Would journals have rejected the use of computers in the 1980s on the grounds that they made writing easier? Why, then, do we accept the use of advanced statistical software but reject AI tools that assist with language and idea generation?

Disclosure Is Required… But the Guidelines Are Vague

Recently, scientific journals have begun requiring researchers to disclose their use of AI tools. However, this demand often comes without clear guidelines or consistent standards. As a result, many researchers feel confused and even anxious: If I disclose my use of AI in a certain part of my research, will my paper be rejected? If I don’t, will I face ethical questions later?

The lack of specific guidance on key questions:

  • What kinds of AI use are acceptable?
  • When is AI considered a legitimate tool versus an inappropriate contributor?
  • What exactly needs to be disclosed?

…makes disclosure feel like a risk rather than a sign of transparency and professionalism.

Embracing AI Doesn’t Mean Abandoning Human Intellect

Using AI does not mean surrendering our thinking to machines; rather, it is an extension of human capabilities. AI is a gift from the human mind to itself, so why deny ourselves its benefits? Scientific journals would do better to create clear and thoughtful policies on AI use than to ban it altogether. Just as researchers are expected to cite their sources, they can also be asked to clarify how AI contributed to their work.

In Conclusion

The objection is not to AI itself, but to the fear that it might replace the researcher instead of supporting them. In truth, AI will never be a substitute for human intelligence; it will always be a powerful complement. It is time for scientific journals to adopt this perspective and move forward with the times, instead of resisting them.
