In recent years, the applications of large language models (LLMs) such as GPT-4 have expanded at an exponential pace, from simplifying basic tasks such as setting reminders and answering emails to more complex ones like drafting research papers, writing software, and even assisting in artistic creation. In short, LLMs have found a foothold in a diverse array of domains. Notably, in the field of medicine, these models have shown promise in interpreting complex datasets, searching patient records, and even generating synthetic text data. Their versatility stems from their enormous training datasets and underlying architectures, which allow them to generate human-like textual responses in real time.

However, like all tools, LLMs come with their own set of limitations. One of the most prominent challenges is "hallucination," where the model generates information that is incorrect or not present in its training data. In fields like medicine, such errors could lead to misleading interpretations and, in worst-case scenarios, detrimental patient outcomes. The crux of the issue is that while LLMs can generate plausible-sounding content, they do not inherently verify the factual accuracy of their output against a trusted data source.

To observe the hallucination errors of LLMs in a safe environment and practice strategies for mitigating them, the Machine Learning Educational Subcommittee of the Society for Imaging Informatics in Medicine (SIIM) has prepared an educational notebook that you can access on SIIM's GitHub page.

In this notebook, we will learn about Retrieval Augmented Generation (RAG), an approach that may help mitigate hallucination errors in LLMs. RAG combines the powerful generative capabilities of LLMs with the accuracy of retrieval-based methods. When a query is made, the model first fetches relevant documents or data snippets from a large pool of documents (either already available or provided by the user) in a retrieval phase, and then uses this information to generate a response in a generation phase. By combining the strengths of retrieval and generation, RAG aims to provide more accurate and contextually relevant answers. In medical fields, using RAG can help ensure that responses are not only contextually rich but also grounded in accurate data, yielding a higher degree of trustworthiness in the model's outputs.
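To make the two phases concrete, below is a minimal, generic sketch of a RAG loop in Python. It is not taken from the SIIM notebook; it assumes a small in-memory document pool, uses TF-IDF similarity from scikit-learn for the retrieval phase, and simply assembles an augmented prompt for whichever LLM you choose to call in the generation phase (the actual LLM call is left as a placeholder).

```python
# Minimal illustrative RAG sketch (not the SIIM notebook code).
# Retrieval phase: rank a small document pool against the query with TF-IDF.
# Generation phase: build a prompt that grounds the LLM in the retrieved text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical document pool; in practice this could be guidelines, reports, etc.
documents = [
    "MRI is the preferred modality for evaluating soft-tissue tumors.",
    "CT pulmonary angiography is the first-line test for suspected pulmonary embolism.",
    "Ultrasound is the usual initial imaging study for right upper quadrant pain.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (retrieval phase)."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(docs)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    top_idx = scores.argsort()[::-1][:k]
    return [docs[i] for i in top_idx]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble an augmented prompt asking the LLM to answer only from context."""
    context_block = "\n".join(f"- {snippet}" for snippet in context)
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say you do not know.\n\n"
        f"Context:\n{context_block}\n\nQuestion: {query}\nAnswer:"
    )

query = "What imaging test should be ordered first for suspected pulmonary embolism?"
prompt = build_prompt(query, retrieve(query, documents))
print(prompt)
# Generation phase: pass `prompt` to the LLM of your choice via its API or SDK.
```

In practice, the retrieval step typically uses dense embeddings and a vector store rather than TF-IDF, but the overall pattern of retrieve, augment the prompt, then generate is the same.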

Written by

Pouria Rouzrokh, MD, MPH, MHPE, Mayo Clinic AI Lab

Publish date

Oct 17, 2023

Topic

  • Artificial Intelligence
  • Large Language Models
  • Radiology
  • Research

Media Type

  • News & Announcements

Audience Type

  • Clinician
  • Developer
  • Imaging IT
  • Researcher/Scientist
  • Vendor
