In recent years, the applications of large language models (LLMs) like GPT-4 have expanded at an exponential pace, from simplifying basic tasks such as setting reminders and answering emails to more complex ones like drafting research papers, writing software, and even assisting in artistic creation. LLMs have found a foothold in a diverse array of domains. Notably, in the field of medicine, these models have shown promise in interpreting complex datasets, searching patient records, and even generating synthetic text data. Their versatility stems from their enormous training datasets and underlying architectures, which allow them to generate human-like textual responses in real time.

However, like all tools, LLMs come with their own set of limitations. One of the most prominent challenges is “hallucination”: errors in which the model generates information that is incorrect or not grounded in its training data. In fields like medicine, such errors can lead to misleading interpretations and, in worst-case scenarios, detrimental patient outcomes. The crux of the issue is that while LLMs can generate plausible-sounding content, they do not inherently verify the factual accuracy of their output against a trusted data source.

To observe the hallucination errors of LLMs in a safe environment and practice strategies for mitigating them, the Machine Learning Educational Subcommittee of the Society for Imaging Informatics in Medicine (SIIM) has prepared an educational notebook, which you can access on SIIM’s GitHub page.

In this notebook, we will learn about Retrieval-Augmented Generation (RAG), an approach that may help mitigate hallucination errors in LLMs by combining the powerful generative capabilities of LLMs with the accuracy of retrieval-based methods. In RAG, when a query is made, the model first fetches relevant documents or data snippets from a pool of documents, which may already be available or be provided by the user (the retrieval phase), and then uses this information to generate a response (the generation phase). By combining the strengths of retrieval and generation models, RAG aims to provide more accurate and contextually relevant answers. In medical fields, RAG can potentially ensure that responses are not only contextually rich but also grounded in accurate data, lending a higher degree of trustworthiness to the model’s outputs.
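To make the two phases concrete before you open the notebook, below is a minimal sketch of the RAG pattern in Python. This is not the SIIM notebook’s code: the document pool, the query, and the use of TF-IDF retrieval with scikit-learn are illustrative assumptions, and the final LLM call is left as a placeholder.

```python
# Minimal RAG sketch: retrieve supporting passages first, then ask the
# LLM to answer only from what was retrieved. All texts are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# --- Retrieval phase --------------------------------------------------
# A small trusted document pool; in practice this could be guidelines,
# reports, or any corpus supplied by the user.
documents = [
    "MRI is preferred over CT for evaluating soft-tissue lesions.",
    "Contrast-enhanced CT is commonly used to assess pulmonary embolism.",
    "Ultrasound is the first-line modality for gallbladder disease.",
]
query = "Which modality is preferred for soft-tissue lesions?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Rank documents by cosine similarity to the query and keep the top two.
scores = cosine_similarity(query_vector, doc_vectors)[0]
top_k = scores.argsort()[::-1][:2]
context = "\n".join(documents[i] for i in top_k)

# --- Generation phase -------------------------------------------------
# The retrieved context is prepended to the prompt, so the LLM is asked
# to ground its answer in supplied passages rather than free recall.
prompt = (
    "Answer the question using ONLY the context below. "
    "If the context is insufficient, say so.\n\n"
    f"Context:\n{context}\n\nQuestion: {query}"
)
print(prompt)  # In a real pipeline, this prompt would be sent to an LLM API.
```

Production RAG systems typically replace the TF-IDF step with dense embeddings stored in a vector database, but the structure is the same: retrieve trusted context first, then constrain the model to answer from it.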

Publish date: Oct 17, 2023
