Many journals have implemented a soft policy against the use of generative AI, such as ChatGPT, and now require authors to disclose any use of AI in assisting with manuscript writing. This policy goes beyond tools like Grammarly's generative AI for syntax improvement and extends to cases where AI is used to perform and write up formal analysis.

Experienced journal editors can typically identify AI-generated content, as it often lacks coherence with the rest of the manuscript. In my view, this emerging issue needs to be addressed promptly and decisively.

I believe that AI-"assisted" research writing should be considered unethical and prohibited in the scientific community, with measures put in place to prevent its use entirely. Allowing this trend to persist could have detrimental effects on scientific research in the long term. In my opinion, it strips away the human creativity, intuition, personality, and fun in writing scientific research papers.

However, I'm open to hearing different perspectives on this matter.
