Of course, as the great quote goes, with great power comes great responsibility. Here are some tips on how to navigate the use of AI tools:
Treat the AI like a gifted intern—never the PI. You review, verify, and take the heat for every line it writes.
Log everything (tool, version, prompt, date); a minimal sketch of what such a log could look like follows these tips. Dump that log in your supplementary files if the journal allows; editors love the transparency.
Disclose substantive help in a sentence or two. Light grammar fixes? Usually fine to skip. Anything beyond cosmetic? Declare it.
Keep raw data and image provenance tight. If an editor asks, you should be able to cough up original microscopy files or code, not just a slick AI render.
Follow the publisher’s three golden rules: AI ≠ author, no fabricated stuff, and full responsibility remains human.
Do that, and you’ll harness the jetpack without burning your eyebrows—or your publication record.
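For anyone wanting a concrete starting point for the logging tip above, here is a minimal sketch in Python, assuming a simple JSON Lines file kept with the supplementary materials. The field names, tool names, and file name are illustrative assumptions, not a journal requirement.

# A minimal sketch of an AI-usage log kept as a JSON Lines file alongside
# the manuscript's supplementary materials. Field names and the file name
# are illustrative assumptions, not a journal requirement.
import json
from datetime import date

def log_ai_use(tool, version, prompt, purpose, logfile="ai_usage_log.jsonl"):
    """Append one record describing a single use of an AI tool."""
    entry = {
        "date": date.today().isoformat(),
        "tool": tool,            # e.g. "ChatGPT"
        "version": version,      # e.g. "GPT-4o"
        "prompt": prompt,        # the exact prompt given to the tool
        "purpose": purpose,      # what the output was used for
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

# Example: record a language-editing pass on the introduction.
log_ai_use(
    tool="ChatGPT",
    version="GPT-4o",
    prompt="Rephrase the following paragraph for clarity: ...",
    purpose="Language editing of the Introduction section",
)

Even a plain spreadsheet with the same columns would serve the purpose; the point is that the record exists before an editor asks for it.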
(Note: AI has been used to help shape the above response.)
I think this depends on how it's used. Is it being used as a tool or as a source? Using it as a tool is, in my opinion, acceptable: it offers better graphics and a better narrative style, while the information is entirely produced by the author. Using it as a source, however, means the information is partially or fully generated by the LLM, which isn't ethical.
I believe there are genuine concerns about utilizing AI tools in scientific research. For one, there’s the issue of academic honesty—sometimes people might use AI to generate parts of their papers or data without proper acknowledgment, which can be a big problem for trust and credibility. Additionally, AI models can exhibit biases based on the data they’re trained on, so the results may not always be reliable or fair. Plus, if researchers rely too much on AI, they might lose some critical thinking skills or overlook important questions that don’t fit the AI’s patterns.
On the other hand, AI can really speed up research by analyzing huge amounts of data quickly and even suggesting new hypotheses that humans might miss. This can save a lot of time and money and lead to exciting discoveries. So, while AI has great potential to improve science, we definitely need to be careful about how it’s used and make sure there are clear guidelines to avoid misuse or ethical issues.
Dear Koteshwar Ramesh Rakesh, when you use AI in creating your answer, you MUST note it. I do not suggest that you use it here; ResearchGate is a scientific community!
"At the same time, I am also a staff member of an academic institution specializing in academic research. If you have academic journals that require collaboration, we can have in-depth exchanges.
Now, I am back to your question directly, dear عاطف يوسف.
Ethical Considerations in The Use of AI Tools Like ChatGPT and Gemini in Academic Research
"The rapid integration of generative artificial intelligence (AI) tools, such as ChatGPT and Gemini, into academic research has transformed scholarly workflows, offering unprecedented efficiency in tasks like literature reviews, data analysis, and manuscript drafting. However, their adoption raises significant ethical concerns, including issues of authorship, plagiarism, data integrity, and bias perpetuation. This paper explores the ethical implications of using AI tools in research, drawing on Elsevier’s Responsible AI Principles, stakeholder theory, and empirical studies. It examines challenges such as the risk of fabricated references, lack of transparency in AI-generated outputs, and potential inequities in access to advanced AI tools. Recommendations are provided for researchers, institutions, and publishers to ensure ethical use, including transparent disclosure of AI involvement, rigorous validation of outputs, and adherence to academic integrity standards. This study underscores the need for balanced integration of AI to enhance research while safeguarding ethical principles."
Thank you for sharing your concern, it’s an important one and very much at the heart of how the research community is wrestling with AI right now.
I’d like to clarify that what I shared wasn’t mindlessly generated by an AI and posted as-is. These were entirely my thoughts, shaped and polished with the help of an AI tool; no different, really, from how one might ask a colleague to help rephrase an idea for clarity or flow. There’s a meaningful distinction here: the ideas, accountability, and final judgement are mine alone; the tool simply helped me say it better.
I do think our community risks painting AI with too broad a brush. The conversation should not be about banning its thoughtful, transparent use, but about where we draw the ethical line: ghostwriting entire papers versus refining wording, for example.
As someone who teaches at a university, I’d never dismiss a student outright for responsibly using AI to strengthen their expression, as long as their work, thinking, and evidence remain their own. In fact, we should be guiding young researchers on how to use these tools wisely, rather than pretending they don’t exist.
I hope you’ll agree that our collective energy is better spent on building responsible norms than on blanket suspicion.
I appreciate you sharing the research piece; it’s clear we both care deeply about the ethical use of AI in research. But I can’t help but notice something slightly ironic here. The text you shared reads almost like it was AI-generated itself: no in-text citations, generic phrasing, and missing references to major landmark works on the subject. In fact, its formatting and tone are exactly what AI tools tend to produce when prompted for a general overview.
This is, I think, precisely the issue within the scientific community right now. We risk quickly accepting anything that looks academic, whether human- or AI-made, and simultaneously dismissing thoughtful contributions that may have used AI responsibly as a tool for expression, not ghostwriting.
Dear Koteshwar Ramesh Rakesh, what you have written: "These were entirely my thoughts, shaped and polished with the help of an AI tool..." is exactly what I supposed had been done. So I thought it could be stated in the remarks that AI was used in some way. It can be noted in small letters.
It’s advisable to acknowledge the use of any form of AI...
Can academics use AI to write journal papers? What the guidelines say
"In education and research, AI can generate text, improve writing style, and even analyse data. It saves time and resources by allowing quick summarising of work, language editing and reference checking. It also holds potential for enhancing scholarly work and even inspiring new ideas.
Equally AI is able to generate entire pieces of work. Sometimes it’s difficult to distinguish original work written by an individual and work generated by AI.
This is a serious concern in the academic world – for universities, researchers, lecturers and students. Some uses of AI are seen as acceptable and others are not (or not yet)...
AI tools can undoubtedly enhance the academic writing process, but their use must be approached with transparency, caution, and respect for ethical standards.
Authors must remain vigilant in maintaining academic integrity, particularly when AI is involved. Authors should verify the accuracy and appropriateness of AI-generated content, ensuring that it doesn’t compromise the originality or validity of their work."
Ethical AI governance: mapping a research ecosystem
"How do we assess the positive and negative impacts of research about- or research that employs artificial intelligence (AI), and how adequate are existing research governance frameworks for these ends? That concern has seen significant recent attention, with various calls for change, and a plethora of emerging guideline documents across sectors. However, it is not clear what kinds of issues are expressed in research ethics with or on AI at present, nor how resources are drawn on in this process to support the navigation of ethical issues. Research Ethics Committees (RECs) have a well-established history in ethics governance, but there have been concerns about their capacity to adequately govern AI research. However, no study to date has examined the ways that AI-related projects engage with the ethics ecosystem, or its adequacy for this context. This paper analysed a single institution’s ethics applications for research related to AI, applying a socio-material lens to their analysis. Our novel methodology provides an approach to understanding ethics ecosystems across institutions. Our results suggest that existing REC models can effectively support consideration of ethical issues in AI research, we thus propose that any new materials should be embedded in this existing well-established ecosystem."
I believe the real concerns about the impact of artificial intelligence on scientific research lie in how deeply AI is used in research, the degree of reliance on it in writing, and our ability to distinguish human effort from what AI produces. There is still considerable doubt and ambiguity about how to use AI in research without violating the basic rules and ethics of research.
Artificial intelligence models are too often assessed against flawed goals — a stumbling block for progress...
Anshul Kundaje sums up his frustration with the use of artificial intelligence in science in three words: “bad benchmarks propagate”...
“You need to go into it with your eyes open,” she says. “You might have a domain expertise, but you really need AI expertise to understand what goes on behind the curtain.”
"While the future utility of the word-guessing machine we call “Generative AI” remains a question mark, there are a few things we know about its current state: it’s not great at consistently creating quality outputs, but it is superb at producing outputs in quantity. This rapid production of vast amounts of content is beginning to overwhelm different systems. Case in point, the NIH has been forced to put a limit on the number of grant applications any individual can submit in a calendar year, largely because of the flood of AI generated slop that they’ve had to process.
Another area impacted is online recipes, where the internet seems overwhelmed with websites offering thousands of recipes with accompanying AI food photos. Meta seems to have put some effort into developing “Inverse Cooking”, that is, using AI to create a recipe based on an image of a prepared dish (and has released code to allow others to do so). One can only ponder the recursive slop that will result from AI creating recipes from images generated by AI and so on, and so on.
So are these AI recipes any good? While they seem to be getting better in terms of not being completely absurd (less glue pizza and fewer rock based diets), they’re still not great. As the video below demonstrates, AI doesn’t quite seem to understand proportions and measurements all that well. Maybe keep this in mind when considering the use of AI to propose scientific experiments."
AI models are neglecting African languages — scientists want to change that
Scientists record 9,000 hours of languages spoken in Kenya, Nigeria and South Africa as free-access training data for AI models...
The group plans to release the digitized language data sets to the developers of artificial intelligence tools to use to train large language models (LLMs). Many LLMs are used to convert speech to text or provide automatic language translation, but a lack of training data for African languages means these tools can’t recognize them...
“She was intrigued by AI we can design and understand; the kind that we can use for science, for understanding our minds.”
AI ethics researcher Joanna Bryson remembers Margaret Boden, a pioneering AI scholar, whose influential work bridged cognitive science, philosophy and computer science. Boden has died, aged 89.
ChatGPT tends to ignore retractions on scientific papers
Study finds the chatbot doesn’t acknowledge concerns with problematic studies...
The large language model–based chatbot ChatGPT fails to highlight the validity concerns with scientific papers that have been retracted or have been the subject of other editorial notices, according to a new study...
To determine which artificial intelligence tool is best at a particular task, you can use a benchmark — a test that can be used to compare the performance of different models. For that system to work, benchmarks need to be robust. That, machine learning researchers say, is where things fall down. Increasingly, artificial intelligence models are being designed to compare favourably against benchmark tests. That process yields tools that pass the tests, but do little else, flooding researchers with tools that aren’t fit for purpose...