Have you ever used Artificial Intelligence (A.I.) tools for image generation in your academic research? If so: (1) how did you use them, (2) for what purpose, and (3) how do you evaluate the potential of this technology in the scientific context?
AI image generation tools allow you to quickly produce high-quality visuals by describing desired images in text prompts. With thoughtful use, these tools can enhance the creation of diagrams, illustrations, and graphics to engage students and enrich lectures and assignments.
Most AI image generators work the same way: the user enters a text prompt describing the image they want, submits it, reviews the results, and can save the image, in some cases in a choice of sizes and formats.
I have used LLMs to help me style plots made with ggplot, for example changing colors or adding certain elements without looking up the syntax myself. However, I did this only when I already knew what the result was supposed to look like.
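The kind of styling tweak described above can be sketched as follows. Since ggplot itself is an R library, this is an analogous example in Python with matplotlib; the data, colors, and filename are hypothetical, and the point is simply the sort of cosmetic adjustment an LLM can write for you:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt

# Hypothetical data; the interesting part is the styling below,
# which is the kind of syntax one might ask an LLM to produce.
x = [1, 2, 3, 4]
y = [2, 4, 1, 3]

fig, ax = plt.subplots()
line, = ax.plot(x, y, color="#2c7fb8", linewidth=2)  # custom hex color
ax.spines["top"].set_visible(False)    # drop top frame line
ax.spines["right"].set_visible(False)  # drop right frame line
ax.set_xlabel("x")
ax.set_ylabel("y")
fig.savefig("styled_plot.png")
```

Because you can see the rendered result immediately, it is easy to verify whether the suggested styling matches what you intended.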
I wouldn't let an AI generate images directly. The AI does not understand what it is generating and thus often produces inaccurate garbage. Maybe interesting to look at, but unsuitable in a scientific context.
Yes, Ariel Pereira da Silva Oliveira, I’ve used AI-based image analysis in my academic research, particularly in medical imaging and disease detection. While I didn’t use AI to generate synthetic visuals with tools like DALL·E or MidJourney, we leveraged AI to produce diagnostic insights and visual representations from image data using a Siamese Network architecture.
For example, in our paper titled:
“Retinal Twins: Leveraging Binocular Symmetry with Siamese Networks for Enhanced Diabetic Retinopathy Detection”,
we trained deep neural networks to learn symmetry-based features across paired retinal images. This process involved creating feature embeddings and heatmap overlays that highlight disease-specific anomalies—essentially producing AI-generated diagnostic visual representations from input scans.
Such AI-driven image synthesis—although medically grounded—is powerful for advancing interpretability, early detection, and decision support in clinical practice.
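The core Siamese idea described above can be illustrated in a few lines. This is a deliberately simplified sketch, not the paper's actual architecture: a random projection stands in for the learned CNN encoder, and the key property shown is weight sharing, where the same encoder embeds both retinal images so that the distance between embeddings reflects asymmetry:

```python
import math
import random

random.seed(0)
# Shared stand-in "encoder": one random projection used by BOTH branches,
# mimicking a Siamese network's weight sharing. A real model would be a
# trained CNN; dimensions here are arbitrary.
DIM_IN, DIM_OUT = 16, 4
PROJ = [[random.gauss(0, 1) for _ in range((DIM_OUT))] for _ in range(DIM_IN)]

def embed(image):
    # image: flat list of DIM_IN pixel values -> DIM_OUT feature vector
    return [sum(p * w for p, w in zip(image, col)) for col in zip(*PROJ)]

def symmetry_distance(left_eye, right_eye):
    # Both images pass through the SAME encoder; a large embedding
    # distance signals left/right asymmetry, a potential disease cue.
    a, b = embed(left_eye), embed(right_eye)
    return math.dist(a, b)
```

In this toy setup, identical left and right images yield a distance of zero, while any asymmetry between them produces a positive distance.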
Here’s the paper for reference: “Retinal Twins: Leveraging Binocular Symmetry with Siamese Networks for Enhanced Diabetic Retinopathy Detection”.
I believe the potential for AI in scientific imaging—especially in precision diagnostics, image segmentation, and pattern discovery—is enormous, provided it's grounded in domain knowledge and interpretability.
Yes, I have used AI tools to generate images that translate complex disaster-related data into clear visual maps and charts, which emergency teams use to assess damage, locate resources, and prioritize actions quickly. This approach removes delays caused by manual data processing and helps decision-makers focus on what matters most during critical moments. The AI models produce real-time visuals by analyzing satellite imagery, sensor data, and social media inputs to highlight affected areas and predict potential risks. These images improve communication among response teams and enhance overall coordination in disaster scenarios. See how I leveraged AI for disaster response decision support in my paper, Enhancing Urban Disaster Response through AI-driven Data Visualization for Real-time Decision Support, here: https://www.researchgate.net/publication/392695103 and https://ojs.acad-pub.com/index.php/PSEM/issue/view/255.