I believe that text-to-image and image-enhancing tools are fascinating. However, we have all recently seen how negative the impact of such tools can be in the hands of unprepared people, such as the authors of the paper "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway", which was retracted by Frontiers.

While I reckon that, at the moment, AI tools seem limited and often not scientifically accurate, I believe this can be improved with prompt engineering. Most users I see working with these tools are not well educated in prompting. Even so, "AI hallucinations" seem to be more the rule than the exception when real scientific images are needed.

Any comments or ideas on how to improve AI image generation in a scientific context? I would welcome examples of both failures and successes, if any :-)
