Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves
Nabla, a Paris-based firm specialising in healthcare technology, used a cloud-hosted version of GPT-3 to determine whether it could be used for medical advice (a use case that, as they note, OpenAI itself warns against because “people rely on accurate medical information for life-or-death decisions, and mistakes here could result in serious harm”).
With this in mind, the researchers set out to see how capable GPT-3, in its current form, would be at taking on such tasks.
Various tasks, “roughly ranked from low to high sensitivity from a medical perspective,” were established to test GPT-3’s abilities (a sketch of one such probe follows the list):
Admin chat with a patient
Medical insurance check
Mental health support
Medical documentation
Medical questions and answers
Medical diagnosis
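For a sense of what such a probe involves, here is a minimal sketch using the legacy OpenAI completions API (the openai<1.0 Python package), the kind of cloud-hosted GPT-3 interface available at the time. The model name, prompts, and sampling parameters are illustrative assumptions, not Nabla's actual setup:

```python
# Hypothetical probe of GPT-3 on two of the six task categories,
# via the legacy OpenAI completions API (openai<1.0).
import openai

openai.api_key = "sk-..."  # placeholder; set your own key

# Illustrative stand-in prompts, not Nabla's actual test cases.
TASK_PROMPTS = {
    "admin chat": "Patient: Hi, I'd like to book an appointment for next Tuesday.\nAssistant:",
    "mental health support": "Patient: I've been feeling very low lately.\nAssistant:",
}

def probe(prompt: str) -> str:
    """Send one task prompt to a GPT-3 completion model and return its reply."""
    response = openai.Completion.create(
        engine="davinci",   # the original GPT-3 base model
        prompt=prompt,
        max_tokens=100,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

for task, prompt in TASK_PROMPTS.items():
    print(f"[{task}]\n{probe(prompt)}\n")
```

With no guardrails beyond the prompt itself, the model's replies are unconstrained free-text completions, which is precisely what made the mental-health tasks risky.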
Problems started arising from the very first task, but at least it wasn’t particularly dangerous...
ChatGPT in Academic Writing and Publishing: A Comprehensive Guide
Scientific writing is a demanding task that requires clarity, precision, and rigour, along with a large amount of research, analysis, and synthesis of information from various sources. It is also time-consuming and susceptible to errors. Advanced artificial intelligence (AI) models, such as ChatGPT, can simplify academic writing and publishing. ChatGPT has many applications in academic and scientific writing and publishing, such as hypothesis generation, literature review, safety recommendations, troubleshooting tips, paraphrasing and summarising, editing and proofreading, journal selection, and journal style formatting, among others.
In this book chapter, we will discuss the main advantages, examples, and applications of ChatGPT in academic and scientific writing from research conception to publishing.
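For illustration, here is a minimal sketch of one such application, summarising an abstract through the OpenAI chat API (the openai>=1.0 Python package). The model name, prompt wording, and word limit are illustrative assumptions, not recommendations from the chapter:

```python
# Hypothetical summarisation helper using the OpenAI chat API (openai>=1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise(text: str, max_words: int = 60) -> str:
    """Ask a chat model for a plain-language summary of a passage."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "You summarise scientific text for a general audience."},
            {"role": "user",
             "content": f"Summarise in at most {max_words} words:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content.strip()

abstract = "Scientific writing requires clarity, precision, and rigour..."
print(summarise(abstract))
```

Pinning the task down in a system message keeps the model's output within the requested scope; the same pattern extends to paraphrasing, proofreading, or reformatting for a target journal.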
Dear peers, this is a research question about the ethical challenges ChatGPT poses for medical publishing. There are many general research questions related to the ethics of publishing with ChatGPT.
Large language models (LLMs), such as ChatGPT, could become regular assistants for writing manuscripts, peer-review reports and grant applications. These artificial-intelligence (AI) tools could change how scientists interrogate and summarize results, producing ‘papers on demand’ from experimental data and vastly expanding the scope of meta-analyses and reviews. But publishers worry that LLMs’ propensity to make up information might lead to a flood of error-strewn manuscripts — and possibly AI-assisted fakes. And because LLMs trawl Internet content without concern for bias, consent or copyright, their use is “automated plagiarism by design”, suggests cognitive scientist Iris van Rooij...
The issues with generative AI chatbots are known: accountability, potential to propagate bias, and not-always-reliable accuracy. But they can also help us humans to think in different and creative ways, and to streamline tedious tasks, which may ultimately be a boon for burdened researchers...
Six centuries after the invention of the printing press, we can say confidently that history ruled in its favor. How will history rule on the introduction of generative AI? As the balance tips back and forth, it’s not yet clear whether AI will be a boon to scholarly publishing or a thorn in its side. What is certain is that this conversation has just begun...
Beyond Generative AI: The Indispensable Role of BERT in Scholarly Publishing
"Large Language Models (LLMs) are the powerhouse behind today’s most prevalent AI applications. However, a deeper dive is necessary to grasp their varied roles in scholarly publishing.
There are two primary LLM branches: generative AI (like OpenAI’s GPTs and models from Anthropic, Google, and Facebook), known for crafting text, and the less-heralded interpretive AI (exemplified by BERT, Bidirectional Encoder Representations from Transformers), designed to understand text...
This article aims to shed light on interpretive AI – its significance as a standalone technology, and its role in complementing and enhancing our understanding and application of generative AI. To start, let’s explore why generative AI alone isn’t the solution to every problem..."
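To make the distinction concrete, here is a minimal sketch of interpretive AI at work: BERT predicting a masked word from its bidirectional context, via the Hugging Face transformers library. The model choice and example sentence are illustrative assumptions:

```python
# BERT as interpretive AI: scoring candidates for a masked token
# using the full sentence context, via Hugging Face transformers.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT reads the context on both sides of [MASK] to rank candidates --
# understanding text rather than generating it free-form.
for prediction in fill_mask("The manuscript was [MASK] after peer review."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

Unlike a generative model, BERT here returns ranked interpretations of an existing sentence rather than producing new text, which is what makes it suited to classification, tagging, and matching tasks in publishing workflows.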
New tools and directions in AI for scholarly publishing
"It is generally agreed that it is impossible to properly detect AI-generated text. At Frontiers authors agree to declare the use of AI as part of their author agreements. In a broader sense, however, there is a debate to be had about how much it matters if AI tools have improved academic writing; indeed, it could be considered to be simply increasing equity between writers who have English as a first language, and those who do not..."
"An initiative called ChatGPT and Artificial Intelligence Natural Large Language Models for Accountable Reporting and Use (CANGARU) is consulting with researchers and major publishers to create comprehensive guidelines for AI use in scientific papers. Some journals have introduced piecemeal AI rules, but “a standardized guideline is both necessary and urgent”, says philosopher Tanya De Villiers-Botha. CANGARU hopes to release their standards, including a list of prohibited uses and disclosure rules, by August and update them every year...
Once AI guidelines are drawn up, the next step will be to ensure that authors stick to them, says Sabine Kleinert, deputy editor of medical journal The Lancet, which is involved in CANGARU. This can be done by asking authors to declare the use of AI when they submit papers. Reining in AI use will also require “the expertise of editors … as well as robust peer review and additional research integrity and ethical policies,” Kleinert adds..."
"The landscape of scholarly publishing is rapidly evolving, marked by both promises and perils, as the integration of artificial intelligence (AI) tools aims to streamline and enhance various facets of the publishing process. In light of this, the group at Publisherspeak US proposed a comprehensive set of solutions to navigate the challenges posed by the use of AI in scholarly publishing.
The strategies outlined in the Solution Canvas encompass improving relationships with AI creators, establishing an ISO working group, creating a dedicated body equivalent to COPE, and forming regional bodies that ensure inclusivity for all types and sizes of scholarly publishers. The group emphasized the importance of regular reviews, considering disciplinary differences, providing guidance on selecting AI tools for specific tasks, and promoting AI literacy across all stakeholders through educational initiatives..."
"There are certainly lots of opportunities for AI to improve things. It would be ideal if AI could simplify more routine tasks and free our dedicated and innovative staff to take on more complicated and engaging initiatives. But I think we are all aware of some of the potential dangers AI brings in terms of propagating misinformation and discord. There is a great need to work with our community and reach understandings about the best ways to incorporate AI into our collective work..."