I think AI programs can provide suggestions and help with certain aspects of the writing process, but they cannot (yet) fully comprehend the intricacies of a specific scientific field. Therefore you really HAVE to double-check any facts a language model provides.
In terms of ethics, the situation is highly questionable at the moment. Some journals have already banned listing ChatGPT or similar models as authors, yet you would still have to disclose their use if they made a significant contribution to the work.
Therefore I would discourage anyone from using such a model for a scientific paper, other than for giving broad, initial inspiration, e.g. in an introduction.
Here are some interesting articles on the subject:
I am really happy you asked this question. I also think we could use it to organize our ideas and express them in a way that people can understand better. If the technology is available, it should be used.
I think this is a matter of each author's ethics, but I personally do not see anything wrong with using an intellectual assistant. People create tools precisely to help and improve their work, so why should ChatGPT not be such a tool?
From the moment ChatGPT was released in November, researchers began experimenting with how they could use it to their benefit to help write systematic reviews, complete literature searches, summarize articles, and discuss experimental findings. I was therefore surprised to see that when addressing the use of GPT, a number of major publishers ignored the far-reaching implications and plethora of use cases, instead zeroing in on one particularly obscure issue, namely, ‘ChatGPT as Author’...
Authors should report the use of artificial intelligence, language models, machine learning, or similar technologies to create content or assist with writing or editing of manuscripts in the Acknowledgement section or the Methods section if this is part of formal research design or methods.
This should include a description of the content that was created or edited and the name of the language model or tool, version and extension numbers, and manufacturer...
ChatGPT in Academic Writing and Publishing: A Comprehensive Guide
Scientific writing is a demanding task that requires clarity, precision, and rigour. It also involves a large amount of research, analysis, and synthesis of information from various sources, which makes it time-consuming and susceptible to errors. Advanced artificial intelligence (AI) models, such as ChatGPT, can simplify academic writing and publishing. ChatGPT has many applications in academic and scientific writing and publishing, such as hypothesis generation, literature review, safety recommendations, troubleshooting, tips, paraphrasing and summarising, editing and proofreading, journal selection, journal style formatting, and other applications.
In this book chapter, we will discuss the main advantages, examples, and applications of ChatGPT in academic and scientific writing from research conception to publishing.
Generative AI has been around for nearly a decade, as long-standing worries about deepfake videos can attest. Now, though, the AI models have become so large and have digested such vast swaths of the internet that people have become unsure of what AI means for the future of knowledge work, the nature of creativity and the origins and truthfulness of content on the internet...
University of Tennessee computer scientist Lynne Parker wrote that while there are significant benefits to generative AI, like making creativity and knowledge work more accessible, the new tools also have downsides. Specifically, they could lead to an erosion of skills like writing, and they raise issues of intellectual property protections given that the models are trained on human creations...
Artificial intelligence will fatally undermine the integrity of scholarly publishing...
Faced with Generative AI, each publisher has a choice to make. You can either invest heavily in ensuring that the work presented in your journals is real research that actually happened, or you can carry on as normal in the hope that the majority of work you publish is still real.
But here’s a warning: journals that don’t want to certify their research as real will steadily become repositories of fabricated junk, fatally undermined by AI. Will that be all of us? Or just most of us? That’s up to you...
AI is a Terrifying Purveyor of Bullshit. Next Up: Fake Science
"I used to think AI was a hyped-up distraction. I thought it would do a clumsy job of things, and be annoying, but mostly harmless. I’ve changed my mind.
What initiated my change of mind was playing around with some AI tools. After trying out ChatGPT and Google’s AI tool, I’ve now come to the conclusion that these things are dangerous. We are living in a time when we’re bombarded with an abundance of misinformation and disinformation, and it looks like AI is about to make the problem exponentially worse by polluting our information environment with garbage. It will become increasingly difficult to determine what is true..."
One year ago — and two weeks before OpenAI released ChatGPT — Meta released a research demo called Galactica. An open source “large language model for science” that was trained on data including 48 million scientific papers, Meta touted Galactica’s ability to “summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.”
A look back at Galactica, an LLM that was trained on scientific papers but was taken down by Meta after only three days, recalls the response of Meta chief scientist Yann LeCun to unexpected criticism...

“Last time we made an LLM available to everyone (Galactica, designed to help scientists write scientific papers), people threw vitriol at our face and told us this was going to destroy the fabric of society.”
"Although one should never use material from artificial intelligence (AI) text generators, such as ChatGPT, directly, they can be used to refine and fine-tune sentences and to study sentence structuring. This is similar to the point on learning successful writing patterns. Studying AI-generated texts can help to expand your writing palette. For example, I struggled to find a suitable phrase for something that’s done ‘in one go’ — I noticed that ChatGPT often uses the phrase ‘single pass’ in such cases and I’ve since adopted that in my writing..."
Is AI ready to mass-produce lay summaries of research articles?
"A surge in tools that generate text is allowing research papers to be summarized for a broad audience, and in any language. But scientists caution that major challenges remain...
As is the case for many other nascent generative-AI technologies, humans are still working out the messaging that might be needed to ensure users are given adequate context. But if AI lay-summary tools can successfully mitigate these and other challenges, they might become a staple of scientific publishing..."