Can artificial intelligence help for scientific illustration? Details matter

In recent years, there has been a growing use of generative artificial intelligence (AI) in scientific publications, including for the creation of scientific illustrations. Generative AI refers to a subset of AI models that focus on the creation or modification of content such as images or text. The images generated are of high quality and effectively convey complex concepts [1]. Although aesthetically pleasing at first glance, small details may reveal that an image has been created by AI. Most commentaries on generative AI in the context of scientific publishing focus on the generation of text, including a recent perspective in this journal [2]. The use of image-generation tools for scientific illustration has attracted less interest and remains largely unregulated [3]. The aim of this work is to present an overview of current AI image-generation tools for scientific illustration, including an analysis of their potential benefits and associated risks.

In practice, generative text-to-image models take as input a “prompt”, a short piece of descriptive text, and generate a matching image. The models are trained on text–image pairs and consist of two main components: an encoder that generates an image embedding from a text caption, and a decoder diffusion model that generates an image conditioned on that embedding. The most commonly used models include DALL-E (OpenAI), Midjourney (Midjourney, Inc.) and Stable Diffusion (Stability AI). Although researchers and medical professionals share a common knowledge base in the use of the written word, illustration and the manipulation of complex image-editing software are rarely taught in current academic curricula. AI image-generation tools can help create both clinical images and conceptual illustrations, as well as image parts or icons. By providing an interactive platform for sketching, these tools can speed up the iterative creative process, generating a first draft in seconds. On a basic level, most tools offer an interactive text interface in which a prompt can be entered and modified, with an image being generated accordingly. More advanced techniques include generating variations of an existing image, creating an image from an existing sketch, extending an image beyond its borders (outpainting), filling in selected areas of an existing image (inpainting), and removing parts of an image (Table 1); a minimal code sketch of the basic prompt-to-image workflow is shown below Table 1.

Table 1 Overview of commonly used AI image-generation techniques
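To make the basic prompt-to-image workflow concrete, the following is a minimal sketch using the open-source Hugging Face diffusers library with a Stable Diffusion checkpoint. The specific model name, prompt, and parameters are illustrative assumptions, not a description of any particular tool's internals:

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint
# (this model name is one common choice, not the only one).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "clean scientific illustration of a transcranial ultrasound probe "
    "held against a head phantom, flat vector style, white background"
)

# Fixing the random seed makes the draft reproducible,
# which helps when iterating on the prompt wording.
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe(prompt, num_inference_steps=30, generator=generator).images[0]
image.save("draft_illustration.png")
```

In commercial tools such as DALL-E or Midjourney, the same loop of prompting, inspecting, and re-prompting happens behind a chat-style interface rather than in code.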

However, the use of AI tools for scientific illustration is not without pitfalls. A highly publicized example was a now-retracted article in which AI was used to create images of the JAK-STAT signaling pathway and the rodent reproductive system that were entirely fictional [4]. Notably, the work had passed peer review. This example highlights the shortcomings of AI tools in scientific illustration: they can produce inaccurate, physically impossible, and misleading images whilst maintaining a falsely scientific aesthetic. Such grotesque distortion of reality would fool no reader, but a more subtle misrepresentation could go unnoticed [5]. This can be particularly problematic in some open-access journals with expedited editorial and review processes. Malicious intent aside, as AI-generated images are usually of high graphical quality and often visually stunning, inaccuracies can creep in and be overlooked at first glance by authors and reviewers alike. For example, when evaluated on the creation of anatomical images, text-to-image tools fail to render important landmarks such as the foramina and sutures of the human skull [6]. Other typical misrepresentations in generated images are text labels consisting of letter-like symbols and abnormal branching of linear structures such as arteries or veins [7]. As in other domains of human–machine interaction, such as image analysis [8], special attention is required to avoid automation bias, whereby users may be prone to accept a model’s output too readily and/or distance themselves from the results, losing accountability. Further, as with AI-generated text, the intended tone is often subtly distorted: generated images with medical content tend to appear slightly futuristic and gruesome in nature. This is likely due to the training data, which is sourced from the internet and thus contains large amounts of fictional content [9].

Several ethical concerns arise regarding the use of generative AI for scientific illustration. Generative image models have been shown to reproduce training samples at inference, ranging from photographs of individual people to trademarked images [10]. Consequently, these systems can commit overt plagiarism without the user’s knowledge, and consent for the use of the original photographs in model training remains problematic. Conversely, publishing traditional patient photographs is limited by concerns over confidentiality, especially when facial features are essential. Here, text-to-image AI tools may offer an alternative by producing accurate images without disclosing identifiable patient information [11]. However, being trained on the current distribution of available imagery, generative AI tools risk perpetuating and amplifying existing biases and stereotypes of oversimplified social categorization. For example, when prompted to generate images of surgeons, over 98% are depicted as White and male [12]. By contrast, when instructed to create an image of suffering children, the patients are invariably depicted as Black [13]. Users of generative AI tools must therefore be conscious of these inherent biases and construct prompts that generate representations accurately depicting the demographic realities of our society.

There is no doubt that AI image generation will significantly influence scientific communication. Whether its mark will be improved illustrations with educational potential or misleading distortions of reality is yet to be determined. As with the ongoing debate on AI-generated text, the scientific community should engage in an open discourse regarding its use to establish best practices for authors and reviewers alike. In this context, we would like to highlight the European Commission’s recently published guidelines, which state that authors remain ultimately responsible for the scientific output and should be transparent about the use of AI tools [14]. Individual publishers have come forward with their own regulations, and not all permit the use of generative AI [15]. When it is allowed, the use of generative AI to create images should be stated openly. The conceptual nature of the generated images must be explicit and immediately clear to readers, avoiding any possibility of misinterpretation.

Practically speaking, generative AI represents an appealing tool to simplify, speed up, and lower the costs of scientific illustration, but active critical appraisal is needed to ensure minimal standards are met. Inaccurate representations should not be allowed to pass editorial and peer review. As with clinical monitoring, practitioners cannot blindly rely on the output of a single model. Fortunately, inaccuracies can often be corrected by using multiple models sequentially, applying techniques such as object removal and inpainting, as demonstrated in Fig. 1. Finally, authors may sometimes need to resort to traditional image-editing software to correct remaining imperfections.

Fig. 1 Example of iterative image correction using generative AI. Demonstration of transcranial ultrasound on a phantom. A Selection of the part of the image to be corrected. B Correction of inaccuracies by inpainting with the prompt “cable joining end of probe being held in hand”. C Removal of inaccuracies by automated object removal. D End result after multiple iterations. Stable Diffusion was used as the text-to-image model for all computations
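As a rough illustration of the inpainting step shown in panel B, the sketch below uses the diffusers inpainting pipeline. The checkpoint, file names, and mask image are hypothetical placeholders for this kind of correction pass, not the authors' exact workflow:

```python
# pip install diffusers transformers accelerate torch pillow
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load an inpainting-specific Stable Diffusion checkpoint (one common choice).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# The draft illustration and a hand-drawn mask; both file names are
# hypothetical. White pixels in the mask mark the region to regenerate,
# black pixels are kept untouched.
init_image = Image.open("draft_illustration.png").convert("RGB").resize((512, 512))
mask_image = Image.open("probe_cable_mask.png").convert("RGB").resize((512, 512))

# Regenerate only the masked region, guided by a corrective prompt
# (mirroring the prompt quoted in the Fig. 1B caption).
corrected = pipe(
    prompt="cable joining end of probe being held in hand",
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=30,
).images[0]
corrected.save("corrected_illustration.png")
```

Each such pass replaces only the masked area, leaving the rest of the illustration intact; several passes, possibly with different models, may be needed before the result is acceptable.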

Easy to access and integrated into commonly used tools, AI is starting to penetrate every stage of scientific publishing. We believe that generative text-to-image models can simplify and enhance the creation of scientific illustrations. However, we have noticed that their use can sometimes lead to a loss of attention to detail. We therefore encourage authors to (1) declare the use of AI tools, (2) critically scrutinize and iteratively correct any inaccuracies arising during the generative process, and (3) ensure generated images are free of bias. We should remember the words of the well-known technology pioneer Steve Jobs: “Details matter. It’s worth waiting to get it right.”

Availability of data and materials

Not applicable.

Abbreviations

AI: Artificial intelligence

References

  1. Rodriguez EE, Zaccarelli M, Sterchele ED, Taccone FS. “NeuroVanguard”: a contemporary strategy in neuromonitoring for severe adult brain injury patients. Crit Care. 2024;28:104.

  2. Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care. 2023;27:75.

  3. Ganjavi C, Eppler MB, Pekcan A, Biedermann B, Abreu A, Collins GS, et al. Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis. BMJ. 2024;384:e077192.

  4. Guo X, Dong L, Hao D. RETRACTED: cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway. Front Cell Dev Biol. 2024;11:1339390.

  5. Gu J, Wang X, Li C, Zhao J, Fu W, Liang G, et al. AI-enabled image fraud in scientific publications. Patterns. 2022;3:7.

  6. Noel GPJC. Evaluating AI-powered text-to-image generators for anatomical illustration: a comparative study. Anat Sci Educ. 2023;00:1–5.

  7. Buzzaccarini G, Degliuomini RS, Borin M, Fidanza A, Salmeri N, Schiraldi L, et al. The promise and pitfalls of AI-generated anatomical images: evaluating midjourney for aesthetic surgery applications. Aesthetic Plast Surg. 2024;48:1874–83.

  8. Dratsch T, Chen X, Rezazade Mehrizi M, Kloeckner R, Mähringer-Kunz A, Püsken M, et al. Automation bias in mammography: the impact of artificial intelligence BI-RADS suggestions on reader performance. Radiology. 2023;307:e222176.

  9. Radford A, Kim JW, Hallacy C, Ramesh A, Goh G, Agarwal S, et al. Learning transferable visual models from natural language supervision. arXiv; 2021. Preprint available from: http://arxiv.org/abs/2103.00020.

  10. Carlini N, Hayes J, Nasr M, Jagielski M, Sehwag V, Tramèr F, et al. Extracting training data from diffusion models. arXiv; 2023. Preprint available from: http://arxiv.org/abs/2301.13188.

  11. Kumar A, Burr P, Young TM. Using AI text-to-image generation to create novel illustrations for medical education: current limitations as illustrated by hypothyroidism and horner syndrome. JMIR Med Educ. 2024;10:e52155.

  12. Ali R, Tang OY, Connolly ID, Abdulrazeq HF, Mirza FN, Lim RK, et al. Demographic representation in 3 leading artificial intelligence text-to-image generators. JAMA Surg. 2024;159:87–95.

  13. Alenichev A, Kingori P, Grietens KP. Reflections before the storm: the AI reproduction of biased imagery in global health visuals. Lancet Glob Health. 2023;11:e1496–8.

  14. European Commission. Living guidelines on the responsible use of generative AI in research. Available from: https://research-and-innovation.ec.europa.eu/.

  15. BioMed Central. Generative AI Images, Editorial policies. 2024. Available from: https://www.biomedcentral.com/getpublished/editorial-policies#artificial+intelligence+%28ai%29

Acknowledgements

Not applicable.

Funding

Open access funding provided by University of Geneva. The study required no external financial support.

Author information

Contributions

JK and UP wrote the manuscript and designed the figure. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Julian Klug.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Klug, J., Pietsch, U. Can artificial intelligence help for scientific illustration? Details matter. Crit Care 28, 196 (2024). https://doi.org/10.1186/s13054-024-04970-8
