In recent years, the applications of large language models (LLMs) like GPT-4 have expanded at an exponential pace, from simplifying basic tasks such as setting reminders and answering emails to more complex ones like drafting research papers, writing software, and even assisting in artistic creation. LLMs have found a foothold in a diverse array of domains. Notably, in medicine, these models have shown promise in interpreting complex datasets, searching patient records, and even generating synthetic text data. Their versatility stems from their enormous training datasets and underlying architectures, which allow them to generate human-like textual responses in real time.

However, like all tools, LLMs come with their own set of limitations. One of the most prominent challenges is “hallucination,” where the model generates information that is incorrect or not supported by its training data. In fields like medicine, such errors could lead to misleading interpretations and, in worst-case scenarios, detrimental patient outcomes. The crux of the issue is that while LLMs can generate plausible-sounding content, they do not inherently verify the factual accuracy of their output against a trusted data source.

To observe the hallucination errors of LLMs in a safe environment, and to practice strategies for mitigating them, the Machine Learning Educational Subcommittee of the Society for Imaging Informatics in Medicine (SIIM) has prepared an educational notebook that you can access on SIIM’s GitHub page.

In this notebook we will learn about “Retrieval-Augmented Generation (RAG),” an approach that may help mitigate hallucination errors in LLMs. RAG pairs the powerful generative capabilities of LLMs with the accuracy of retrieval-based models. When a query is made, the model first fetches relevant documents or data snippets (the retrieval phase) from a large pool of documents, which may already be available or be provided by the user, and then uses this information to generate a response (the generation phase). By combining the strengths of retrieval and generation, RAG aims to provide more accurate and contextually relevant answers. In medicine, using RAG can help ensure that responses are not only contextually rich but also grounded in accurate data, lending a higher degree of trustworthiness to the model’s outputs.
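To make the retrieve-then-generate flow concrete, below is a minimal Python sketch, not the notebook’s actual implementation. It assumes a small in-memory document pool, uses TF-IDF cosine similarity from scikit-learn for the retrieval phase, and stands in for the generation phase with a hypothetical call_llm() placeholder that you would replace with your preferred LLM client; the SIIM notebook may use different libraries and a more capable (e.g., embedding-based) retriever.

```python
# Minimal RAG sketch: retrieve relevant snippets, then ground the LLM prompt in them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Document pool for the retrieval phase; in practice this could be guidelines,
# reports, or other trusted reference text supplied by the user.
documents = [
    "Chest radiographs are commonly used to evaluate suspected pneumonia.",
    "MRI is preferred over CT for characterizing most soft-tissue lesions.",
    "Contrast-induced nephropathy risk increases with pre-existing renal impairment.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (TF-IDF cosine similarity)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    top_indices = scores.argsort()[::-1][:k]
    return [documents[i] for i in top_indices]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call (e.g., GPT-4); swap in a real client."""
    return "[LLM would be called here with the grounded prompt below]\n" + prompt

def rag_answer(query: str) -> str:
    """Generation phase: ask the LLM to answer using only the retrieved context."""
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(rag_answer("Which modality is preferred for soft-tissue lesions?"))
```

Because the prompt instructs the model to rely only on the retrieved context (and to admit when that context is insufficient), the generated answer stays anchored to the trusted document pool rather than to whatever the model might otherwise invent.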

Publish date

Oct 17, 2023

Topic

  • Artificial Intelligence
  • Large Language Models
  • Radiology
  • Research

Media Type

  • News & Announcements

Audience Type

  • Clinician
  • Developer
  • Imaging IT
  • Researcher/Scientist
  • Vendor
