In recent years, the applications of large language models (LLMs) such as GPT-4 have expanded at a rapid pace, from simplifying basic tasks like setting reminders and answering emails to more complex ones like drafting research papers, writing software, and even assisting in artistic creation. LLMs have found a foothold in a diverse array of domains. Notably, in medicine, these models have shown promise in interpreting complex datasets, searching patient records, and even generating synthetic text data. Their versatility stems from their enormous training datasets and underlying architectures, which allow them to generate human-like textual responses in real time.

However, like all tools, LLMs come with their own set of limitations. One prominent challenge is “hallucination,” where the model generates information that is incorrect or unsupported by its training data. In fields like medicine, such errors could lead to misleading interpretations and, in worst-case scenarios, detrimental patient outcomes. The crux of the issue is that while LLMs can generate plausible-sounding content, they do not inherently verify the factual accuracy of their output against a trusted data source.

To observe LLM hallucination errors in a safe environment and to practice strategies for mitigating them, the Machine Learning Educational Subcommittee of the Society for Imaging Informatics in Medicine (SIIM) has prepared an educational notebook that you can access on SIIM’s GitHub page.

In this notebook we will learn about “Retrieval Augmented Generation (RAG),” an approach that may help mitigate hallucination errors in LLMs by combining the powerful generative capabilities of LLMs with the accuracy of retrieval-based methods. In RAG, when a query is made, the system first fetches relevant documents or data snippets (the retrieval phase) from a large pool of documents, which may already be available or be supplied by the user, and then uses this information to generate a response (the generation phase). By combining the strengths of retrieval and generation, RAG aims to provide more accurate and contextually relevant answers. In medical applications, RAG can help ensure that responses are not only contextually rich but also grounded in accurate data, lending a higher degree of trustworthiness to the model’s outputs. A minimal sketch of this two-phase pattern appears below.
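The sketch below is not the SIIM notebook’s implementation; it is a minimal illustration of the RAG pattern under simplifying assumptions. TF-IDF similarity stands in for the retrieval phase (production systems typically use dense embeddings and a vector store), the three example documents are invented placeholders, and `generate_with_llm` is a hypothetical stand-in for whichever LLM API you use.

```python
# Minimal RAG sketch: TF-IDF retrieval feeding a grounded prompt to an LLM.
# Assumptions: scikit-learn is installed; `generate_with_llm` is a hypothetical
# placeholder for your LLM client; the documents below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Gadolinium-based contrast agents are used in MRI to improve lesion visibility.",
    "CT pulmonary angiography is the preferred study for suspected pulmonary embolism.",
    "BI-RADS is a standardized system for reporting mammography findings.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Retrieval phase: rank documents by TF-IDF cosine similarity to the query."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(docs + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    top_indices = scores.argsort()[::-1][:top_k]
    return [docs[i] for i in top_indices]

def answer(query: str, docs: list[str]) -> str:
    """Generation phase: ground the LLM prompt in the retrieved snippets."""
    context = "\n".join(retrieve(query, docs))
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate_with_llm(prompt)  # hypothetical call to your LLM of choice
```

Because the prompt instructs the model to rely only on the retrieved context and to admit when that context is insufficient, the generated answer is anchored to the supplied document pool rather than to whatever the model happens to recall from pretraining.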

Written by

Publish date

Oct 17, 2023

Topic

  • Artificial Intelligence
  • Large Language Models
  • Radiology
  • Research

Media Type

  • News & Announcements

Audience Type

  • Clinician
  • Developer
  • Imaging IT
  • Researcher/Scientist
  • Vendor
