A person published an article, and the journal later identified manipulated images in it. What further actions can be taken?
The journal is entitled to retract the article, and many journals I read have done just that. I wonder whether lower-ranked journals might avoid this step because it could harm their reputation.
"It seems that every month brings a fresh slew of high-profile allegations against researchers whose papers — some of them years old — contain signs of possible image manipulation...
In January, more than 50 papers published by scientists at the Dana-Farber Cancer Institute were flagged for possible image manipulation. To stop questionable figures from being published in the first place, some journals are asking authors to submit raw images..."
"Scientific publishers have started to use AI tools such as ImageTwin, ImaCheck and Proofig to help them to detect questionable images. The tools make it faster and easier to find rotated, stretched, cropped, spliced or duplicated images. But they are less adept at spotting more complex manipulations or AI-generated fakery..."
Inappropriate image manipulation in a published article
"This guidance helps editors who have been contacted about suspected inappropriate image manipulation in a published article. The flowchart includes who to contact and when to consider a retraction, correction or expression of concern...
This flowchart relates only to cases where concerns related to digital photographic images are raised (eg, duplication of parts within an image, or use of identical images to show different things)..."
Authors – including a dean and a sleuth – correcting paper with duplicated image
The corresponding author of a paper flagged on PubPeer for an apparently duplicated image will be asking the journal to publish a correction, Retraction Watch has learned.
The paper, “The BET bromodomain inhibitor exerts the most potent synergistic anticancer effects with quinone-containing compounds and anti-microtubule drugs,” appeared in Oncotarget in 2016. Its authors include Marcel Dinger, now a dean at the University of Sydney, who has said he’s working to correct review papers that cited papermill articles, and sleuth Jennifer A. Byrne, also of the University of Sydney.
Earlier this month, an anonymous user on PubPeer pointed out areas of images in figure 6B that were “much more similar than expected.”..."
Analysing reflections of light in the eyes can help to determine an image’s authenticity...
"A method that astronomers use to survey light from distant galaxies can reveal whether an image is AI-generated. By looking for inconsistencies in the reflection of light sources in a person’s eyes, it can correctly predict whether an image is fake about 70% of the time. “However, if you can calculate a metric that quantifies how realistic a deep fake image may appear, you can also train the AI model to produce even better deep fakes by optimizing that metric,” warns astrophysicist Brant Robertson..."
What’s in a picture? Two decades of image manipulation awareness and action
"This year marks the 20th anniversary of the publication of “What’s in a picture? The temptation of image manipulation,” in which I described the problem of image manipulation in biomedical research.
Two decades later, much has changed. I am reassured by the heightened awareness of this issue and the numerous efforts to address it by various stakeholders in the publication process, but I am disappointed that image manipulation remains such an extensive problem in the biomedical literature. (Note: I use the term “image manipulation” throughout this piece as a generic term to refer to both image manipulation (e.g., copy/paste, erasure, splicing, etc.) and image duplication.)..."
Exclusive: Thousands of papers misidentify microscopes, in possible sign of misconduct
"One in four papers on research involving scanning electron microscopy (SEM) misidentifies the specific instrument that was used, raising suspicions of misconduct, according to a new study.
The work, published August 27 as a preprint on the Open Science Framework, examined SEM images in more than 1 million studies published by 50 materials science and engineering journals since 2010...
Researchers found that only 8,515 articles published both the figure captions and the images’ metadata banners, which together are needed to determine whether the correct microscope is listed in a paper. Metadata banners usually contain important information about the experiment, including the microscope’s operating voltage and the instrument’s model and parameters..."
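A crude version of that cross-check is easy to script: extract any instrument model named in the caption and in the banner text, and flag disagreements. This is only a sketch of the idea, not the preprint's method; the model list and both example strings below are hypothetical:

```python
import re

# Illustrative model strings only; a real check needs a curated database.
KNOWN_MODELS = ["JSM-7500F", "JSM-6700F", "SU8010", "Quanta 250", "Nova NanoSEM 450"]

def models_mentioned(text):
    """Return every known model string that appears in the text."""
    return {m for m in KNOWN_MODELS if re.search(re.escape(m), text, re.IGNORECASE)}

def consistent(caption, banner_text):
    """Flag a mismatch when the caption and banner name different instruments."""
    cap, ban = models_mentioned(caption), models_mentioned(banner_text)
    return not cap or not ban or bool(cap & ban)

caption = "Fig. 3: SEM micrograph acquired on a JEOL JSM-7500F at 5 kV."
banner = "SU8010 5.0kV 8.1mm x50.0k SE(U)"   # text read from a metadata banner
print("consistent" if consistent(caption, banner) else "possible misidentification")
```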
Pakistan university’s pharmacy department chair notches two retractions
"Kashif Barkat, who heads the Department of Pharmacy at the University of Lahore in Punjab, Pakistan, has had two of his studies retracted and two more corrected, all for issues related to images in the papers. Several more of his studies are flagged on PubPeer for similar reasons...
For the two corrected articles, Barkat and his colleagues acknowledged errors in the published images but said those mistakes did not affect the main conclusions of the work..."
26-year-old article retracted for image reuse "from a previous publication of the research group"...
Withdrawal: Lysophosphatidic acid stimulates the G-protein-coupled receptor EDG-1 as a low affinity agonist
"This article has been withdrawn by the authors, except M. Lee who could not be reached. The journal concluded that the control panel of the EDG-1 column and the control, LPA (5 μM), LPA (20 μM), and LPA + mIgG panels of the pCDNA column in Figure 6A were reused from a previous publication of the research group. In addition, LPA (5 μM) panel of the EDG-1 column was reused within Figure 6A as the LPS panel of the pCDNA column. Signs of background issues were also identified in Figures 4A, 5A, 5B, 6D, 7A, and 7B. No raw data were available to resolve these issues..."
AI-generated images threaten science — here’s how researchers hope to spot them
"An arms race is emerging as integrity specialists, publishers and technology companies rush to develop tools that can assist in rapidly detecting AI-generated images in scientific papers. The makers of tools such as Imagetwin and Proofig, which use AI to detect integrity issues in scientific figures, are training their algorithms on databases of AI-generated images to make them better at spotting dupes. “Fraudsters shouldn’t sleep well at night,” says Kevin Patrick, a scientific-image sleuth known as Cheshire on social media. “They could fool today’s process, but I don’t think they’ll be able to fool the process forever.”..."
Acceptable image-editing practices are partly a matter of common sense. But researchers say journals and funders could help scientists by standardizing policies...
"Clear and accessible images can be a crucial part of reporting the findings of an experiment. But often, images are poorly presented in papers: image panels lack labels or scale bars, and annotation features such as arrows are missing from the imaged objects. In some cases, photo editing is required to present images clearly. But there’s a fine line between clarifying and manipulating. Understanding the basics of how imaging techniques work can help scientists to avoid problems further down the line, says image-integrity analyst Jana Christopher. Above all, ensuring your data is in the best shape possible will make nailing the figures much faster, says visualization specialist Helena Jambor."
"...Image fraud in nuclear medicine research appears to be relatively prevalent. It is more frequently witnessed among other colleagues than self-reported by individual researchers. The findings highlight the need to fostering a culture of research integrity and for stronger preventive measures, including greater awareness, stricter journal policies, and improved control."