In recent years, the applications of large language models (LLMs) like GPT-4 have expanded at an exponential pace, from simplifying basic tasks such as setting reminders and answering emails to more complex ones like drafting research papers, writing software, and even assisting in artistic creation. In general, LLMs have found a foothold in a diverse array of domains. Notably, in the field of medicine, these models have shown promise in interpreting complex datasets, searching patient records, and even generating synthetic text data. Their versatility stems from their enormous training datasets and underlying architectures, which allow them to generate human-like textual responses in real time.

However, like all tools, LLMs come with their own set of limitations. One of the most prominent is “hallucination,” where the model generates information that is incorrect or not present in its training data. In fields like medicine, such errors could lead to misleading interpretations and, in worst-case scenarios, detrimental patient outcomes. The crux of the issue is that while LLMs can generate plausible-sounding content, they do not inherently verify the factual accuracy of their output against a trusted data source.

To observe the hallucination errors of LLMs in a safe environment and to practice strategies for mitigating them, the Machine Learning Educational Subcommittee of the Society for Imaging Informatics in Medicine (SIIM) has prepared an educational notebook that you can access on SIIM’s GitHub page.

In this notebook, we will learn about “Retrieval-Augmented Generation (RAG),” an approach that may help mitigate hallucination errors in LLMs. RAG synergizes the powerful generative capabilities of LLMs with the accuracy of retrieval-based models. When a query is made, the model first fetches relevant documents or data snippets from a large document pool (the retrieval phase), which may be an existing corpus or one supplied by the user, and then uses this information to generate a response (the generation phase). By combining the strengths of retrieval and generation, RAG aims to provide more accurate and contextually relevant answers. In medical applications, RAG can help ensure that responses are not only contextually rich but also grounded in accurate data, lending a higher degree of trustworthiness to the model’s outputs.
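To make the two phases concrete, below is a minimal, self-contained sketch of the RAG pattern in Python. It is not code from the SIIM notebook: the small document pool is invented for illustration, retrieval uses simple TF-IDF cosine similarity from scikit-learn, and `call_llm` is a hypothetical placeholder for whichever LLM API you use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A small, invented document pool; in practice this would be a trusted
# corpus (clinical guidelines, institutional policies, reports, etc.).
documents = [
    "Gadolinium-based contrast agents are used in MRI to improve lesion conspicuity.",
    "CT dose reports record CTDIvol and DLP for each examination.",
    "DICOM is the standard for storing and transmitting medical images.",
]

def retrieve(query, docs, k=2):
    """Retrieval phase: rank the document pool by TF-IDF cosine similarity."""
    n = len(docs)
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(docs + [query])  # last row is the query
    scores = cosine_similarity(matrix[n], matrix[:n]).ravel()
    top = scores.argsort()[::-1][:k]
    return [docs[i] for i in top]

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: swap in your provider's completion/chat API."""
    raise NotImplementedError("Connect this to an actual LLM API.")

def rag_answer(query: str) -> str:
    """Generation phase: build a prompt grounded in the retrieved snippets."""
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```

The grounding comes from the prompt itself: the model is instructed to answer only from the retrieved context and to say so when that context is insufficient, rather than relying on whatever it can recall from its training data.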

To receive credit, registrants must view the entire webinar and then complete the post-webinar survey. Webinar credits will only be awarded once per webinar view, regardless of whether the learner watches the content live or on demand.

Note: In order to receive credits for any events/learning you attend, you must select your eligible credit types, found in the CE & Certification section of your MySIIM Account profile.

To access the webinar, once you are registered, navigate to My Learning in your MySIIM Account profile.


Audience Type

  • Clinician
  • Developer
  • Imaging IT
  • Researcher/Scientist
  • Vendor
