Some scientists now frequently use generative artificial-intelligence (AI) systems, such as ChatGPT, to help them write and edit manuscripts, check their code and brainstorm ideas. But the excitement about the use of such tools is tempered with apprehension, because of their propensity to make factual errors, reproduce biases in training data and provide fuel for fakery. They also rely on humans to tag reams of violent, abusive and other horrific content so that it can be filtered out, and require a huge amount of energy to train. Researchers are grappling with these issues, in part by urging more regulation and transparency...
In my opinion, GPT is a form of artificial intelligence that may erode a researcher's own potential. It should be used only to get a hint, not to do the complete work. A researcher should avoid taking shortcuts while carrying out their work.
While there are significant benefits to opening the world of creativity and knowledge work to everyone, these new AI tools also have downsides. First, they could accelerate the loss of human skills that will remain important in the coming years, especially writing skills. Educational institutions need to craft and enforce policies on allowable uses of large language models to ensure fair play and desirable learning outcomes...
Although ChatGPT isn't able to provide original ideas (at least, ideas that would make sense), I find it useful for easing the acquisition of knowledge about a scientific field or subfield that isn't at the core of my research but of which I want a "good enough" understanding.
For instance, I recently used it to gain a clearer view of deep-learning networks applied to NLP and their key differences from classical ML approaches.
Of course, I checked its statements, which were neither false nor vague. Given the volume of courses, papers, tutorials and so on about this topic, I found ChatGPT helpful as a kind of tutoring system.
In my previous answer, I forgot to mention a very problematic issue in the use of ChatGPT for scientific work. I've experienced this issue with ChatGPT, but I have a more recent illustration of the problem with the conversational agent Claude (from Anthropic PBC, roughly on par with the latest OpenAI models).
I asked Claude about existing methods for the structural analysis of multi-party conversation. It provided a quite credible answer, enumerating various approaches that weren't very specific but seemed appropriate.
Then I asked it for scientific references for the approaches it had described, and things started to get ugly. It generated plausible paper titles (ones that matched the topic well), each together with a URL. Unfortunately, none of the URLs led to the intended paper (either a 404 or a different paper entirely).
I thought the URLs might simply have changed, so I asked it to provide complete references in ACM style (why not). The reference format was correct, the titles were consistent (e.g. "A taxonomy of multi-party conversation structure"), and even the conference names, when provided, were real conferences in the field. The only issue, of course, was that none of the articles existed.
I attached a PDF of the conversation in its original viewing format. If you have time to take a look, you'll see why the potential for deception is considerable. Here it obviously wasn't deliberate, but it isn't very reassuring.
GPT-3 and the future of publishing & academia webinar
This powerful technology forces us to consider some fundamental questions. What opportunities does this next generation of AI give to researchers and publishers, as well as to bad actors (e.g. paper mills)? And what does it mean for academic work in general? In this webinar, we will hear perspectives from three sides: technology, publishing, and academia...
ChatGPT in Academic Writing and Publishing: A Comprehensive Guide
Scientific writing is a demanding task that requires clarity, precision, and rigour, and it involves a large amount of research, analysis, and synthesis of information from various sources. It is also time-consuming and susceptible to error. Advanced artificial-intelligence (AI) models, such as ChatGPT, can simplify academic writing and publishing. ChatGPT has many applications in academic and scientific writing and publishing, such as hypothesis generation, literature review, safety recommendations, troubleshooting tips, paraphrasing and summarising, editing and proofreading, journal selection, journal-style formatting, and other applications.
In this book chapter, we will discuss the main advantages, examples, and applications of ChatGPT in academic and scientific writing from research conception to publishing.
Despite the current hype surrounding tools like ChatGPT and GPT-4, further development of artificial intelligence capabilities is clearly required before one can embrace their use as reasonably trustworthy research tools...