The use of AI in research writing is growing, and at the same time, studies are raising ethical concerns about that use. It is therefore important to draw attention to this issue.
Here are some tips on how to use ChatGPT or AI tools ethically in scientific writing:
Use AI tools to enhance your writing, not replace it: AI tools can support and improve scientific writing, but they should not replace human input entirely. Ensure that you have a thorough understanding of the topic and the research before turning to AI tools to incorporate relevant information.
Cite your sources accurately: AI tools can generate plausible-looking but inaccurate or fabricated citations, so check that every reference it produces corresponds to a real, credible source to avoid introducing misinformation into the research literature.
Avoid plagiarism: Using an AI tool does not exempt you from plagiarism. Ensure that the content generated is not a direct copy from any existing work. Carefully review the content to ensure proper attribution of ideas.
Respect the ethical guidelines: Follow ethical guidelines set by scientific communities and the journal publishing the research paper. Avoid using AI tools to distort results to advance specific agendas.
Keep AI use transparent: Be transparent about the use of AI in your writing. Disclose the use of AI tools in your work and maintain complete control over your content.
AI tools can be a useful resource in scientific writing, but it's important to use them ethically. Always prioritize human involvement and ensure that AI tools support, rather than replace, human input. Use them transparently, citing sources accurately, avoiding plagiarism, and respecting ethical guidelines.
AI should be used to improve, support, and enhance the writing, not to write in place of the researchers. Moreover, authors should take care to avoid:
1/ Plagiarism.
2/ Violating ethical rules and guidelines.
In addition, the writing and the information it contains should be reviewed, because AI generates output by matching patterns in the data that users and developers have previously fed into it, and its results can change from one run to the next.
I would agree with the idea that we should use ChatGPT to enhance our writing, not replace it.
To stay compliant with the requirements of journals and publishers, we often have to include a short declaration of this (stating the role of AI and that it was used only to assist with readability).
This is an example:
Declaration of generative AI in scientific writing
The below guidance only refers to the writing process, and not to the use of AI tools to analyse and draw insights from data as part of the research process.
Where authors use generative artificial intelligence (AI) and AI-assisted technologies in the writing process, authors should only use these technologies to improve readability and language. Applying the technology should be done with human oversight and control, and authors should carefully review and edit the result, as AI can generate authoritative-sounding output that can be incorrect, incomplete or biased. AI and AI-assisted technologies should not be listed as an author or co-author, or be cited as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans, as outlined in Elsevier’s AI policy for authors.
Authors should disclose in their manuscript the use of AI and AI-assisted technologies in the writing process by following the instructions below. A statement will appear in the published work. Please note that authors are ultimately responsible and accountable for the contents of the work.
Disclosure instructions
Authors must disclose the use of generative AI and AI-assisted technologies in the writing process by adding a statement at the end of their manuscript in the core manuscript file, before the References list. The statement should be placed in a new section entitled ‘Declaration of Generative AI and AI-assisted technologies in the writing process’.
Statement: During the preparation of this work the author(s) used [NAME TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.
This declaration does not apply to the use of basic tools for checking grammar, spelling, references etc. If there is nothing to disclose, there is no need to add a statement.
This is extracted from the instructions of Archives of Cardiovascular Diseases; the full set of instructions is worth reading for context.
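To make the disclosure concrete, here is a minimal sketch of how such a declaration section could be added to a manuscript, assuming the manuscript is prepared in LaTeX (the document class and file layout here are illustrative, not part of any journal's official template). The section title and statement wording follow the template quoted above; the tool name and reason are placeholders for the authors to fill in.

% Minimal sketch, not an official template: a generative-AI declaration
% section in a LaTeX manuscript, placed before the reference list as the
% quoted instructions require. [NAME TOOL / SERVICE] and [REASON] are
% placeholders to be completed by the authors.
\documentclass{article}

\begin{document}

% ... main body of the manuscript ...

\section*{Declaration of Generative AI and AI-assisted technologies in the writing process}
During the preparation of this work the author(s) used [NAME TOOL / SERVICE]
in order to [REASON]. After using this tool/service, the author(s) reviewed
and edited the content as needed and take(s) full responsibility for the
content of the publication.

% The References list follows immediately after this declaration, e.g.:
% \bibliographystyle{...}
% \bibliography{...}

\end{document}

The key point is placement and wording: the declaration sits in its own section at the end of the core manuscript file, just before the references, using the journal's prescribed statement.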
Using ChatGPT or other AI tools ethically in scientific writing requires careful consideration and planning. First, it's essential to understand the limitations of AI and recognize that these tools are not substitutes for human intelligence and creativity. While AI can assist with tasks such as data analysis, summary generation, and grammar correction, it cannot replace the originality and insight that comes from human authors. Therefore, it's important to use AI tools as a means to augment and support human writing, rather than relying solely on them for output.
Second, ethical use of AI in scientific writing necessitates transparency regarding the roles of both humans and machines in the writing process. Authors should acknowledge the contributions of AI tools and clearly indicate what parts of the text were generated by humans and what parts were generated by machines. This helps maintain the integrity of scientific publishing and ensures accountability for the contents of the paper. Moreover, transparent acknowledgment of AI usage can foster trust in the research community and promote responsible development of AI technology.
Lastly, ethical utilization of AI in scientific writing demands vigilant attention to avoid perpetuating biases present in the training data. AI models like ChatGPT learn from vast amounts of text data, which may contain implicit biases and stereotypes. These biases can be reflected in the outputs generated by AI tools, potentially leading to unfair or discriminatory outcomes. To mitigate this risk, developers and users of AI tools must proactively monitor and address bias issues, striving for fairness and equity in the dissemination of scientific knowledge. This includes actively seeking diverse perspectives, engaging in peer review, and continually refining AI algorithms to better detect and eliminate biases. By adhering to these principles, scientists can harness the power of AI while upholding the highest standards of ethical conduct in scientific writing.