SIIM kicked off its Enterprise Imaging Webinar Series exploring machine learning operations (MLOps) and important challenges that must be addressed for the future of AI in medicine. Although there has been a rapid evolution of ML algorithms developed for healthcare purposes, less attention has been given to the operational components of the ML lifecycle needed for its successful implementation in healthcare settings. These include model deployment and integration, data ingest and processing, and monitoring model accuracy and reliability to name a few. The panel of experts astutely observed that the discussion would likely raise more questions than answers, emphasizing that recognizing these challenges is the first step towards improvement.

Let’s start with a more formal definition of MLOps: this discipline represents a methodical approach to incorporating ML algorithms into the healthcare setting, guaranteeing precise and efficient data processing, and upholding the ongoing supervision and maintenance of these systems. The primary aim of MLOps is to ensure that ML models are developed with high accuracy and that their subsequent implementation, administration, and periodic updates are carried out in a regulated, compliant, and secure framework. The necessity of MLOps in medicine arises not only from its potential to improve patient outcomes through more accurate diagnoses, personalized treatment plans, and predictive analytics for disease prevention, but also from three operational needs:

  1. Scalability and Operational Efficiency: MLOps offers methodologies and infrastructure to effectively handle and expand intricate ML models, ensuring they are equipped to handle extensive data sets and computational demands.
  2. Model Reliability and Performance: MLOps ensures continuous monitoring and maintenance of models to maintain accuracy and reliability, addressing challenges like data drift and changing environments.
  3. Regulatory Compliance and Risk Management: MLOps ensures that ML models comply with regulatory standards and helps in managing risks associated with deploying these models in real-world settings.
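The continuous monitoring described in point 2 can start very simply: compare the distribution of a model input between the validation period and the current window. As a rough sketch of what such a check could look like, here is a population stability index (PSI) calculation; the bin count, the 0.2 alert threshold, and the "mean slice intensity" feature are illustrative assumptions, not requirements of any particular vendor or standard.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare two samples of a continuous model input.

    By common convention, a PSI above ~0.2 suggests a meaningful
    distribution shift that warrants investigation.
    """
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid log(0) and division by zero in sparse bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Illustrative example: mean CT slice intensity at validation time
# versus this month, after a hypothetical scanner protocol change.
rng = np.random.default_rng(0)
baseline = rng.normal(40, 5, 5000)
this_month = rng.normal(45, 5, 5000)
psi = population_stability_index(baseline, this_month)
if psi > 0.2:
    print(f"PSI {psi:.2f}: input drift detected, review the model")
```

In practice the monitored quantity might be an image statistic, a model confidence score, or a positivity rate, but the pattern is the same: establish a reference distribution, then recompute and compare on a schedule.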

In this open discussion of MLOps, a large portion of the conversation was dedicated to the assessment of clinical AI tools and quality assurance. What metrics do you use to measure these systems? Who exactly performs the initial assessment and continuous monitoring? If we have a CT head hemorrhage detection system, is the assessment simply asking the radiologist to click an “agree” or “disagree” box, or will radiology reports be assessed post hoc in bulk by a human reader or perhaps a large language model? What happens if the outcome allows for varying degrees of agreement and the answer is not simply “agree” vs. “disagree”? How long should the assessment period be prior to purchase and full deployment?
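To make the graded-agreement question concrete, one simple option is to weight each feedback category and roll the clicks up into a single rate. The categories and weights below are hypothetical; a real deployment would need to define them with the vendor and clinical champions up front.

```python
from collections import Counter

# Hypothetical feedback categories and weights for a CT head
# hemorrhage tool; these are illustrative, not a standard.
WEIGHTS = {"agree": 1.0, "partially_agree": 0.5, "disagree": 0.0}

def agreement_score(responses):
    """Weighted agreement rate across radiologist feedback clicks."""
    counts = Counter(responses)
    total = sum(counts.values())
    return sum(WEIGHTS[r] * n for r, n in counts.items()) / total

feedback = ["agree"] * 85 + ["partially_agree"] * 10 + ["disagree"] * 5
print(f"weighted agreement: {agreement_score(feedback):.2f}")  # → 0.90
```

A single number like this is easy to trend over an assessment period, though it deliberately hides detail; the underlying category counts should be kept for deeper review.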

There’s no magic bullet, of course, but Matt Hayes, Senior PACS Manager at Radiology Partners, suggests having these conversations with vendors early and often. The first two questions every informatics team should ask a vendor before deployment are what metrics can be monitored after model deployment and where that information is housed. Ameena Elahi, IS Application Manager at the University of Pennsylvania Health System, and Raym Geis, Senior Scientist at the ACR Data Science Institute and practicing radiologist, suggest developing a team that includes a clinical champion along with other key faculty and staff members to assess the model in a formal, possibly blinded manner for at least 90 days, including some stress testing with edge cases (e.g., significant motion or metal artifact). Quality assurance also includes the stability of the system, as Sylvia Devlin, Director of Clinical Applications Operations at Radiology Partners, emphasizes; if the system is constantly experiencing downtime, crashing, freezing, or is simply difficult to use, it doesn’t matter if the model has >99.99% accuracy and is projected to increase efficiency by 50%, as it will never be fully implemented.
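Stability itself can be tracked with the same rigor as accuracy. A minimal sketch, assuming the team keeps a log of outage windows (the log format and dates here are invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical outage log: (start, end) of each downtime event
# for the AI system over a 31-day month.
outages = [
    (datetime(2024, 1, 3, 2, 0), datetime(2024, 1, 3, 5, 30)),
    (datetime(2024, 1, 17, 14, 0), datetime(2024, 1, 17, 14, 45)),
]

period = timedelta(days=31)
down = sum((end - start for start, end in outages), timedelta())
availability = 1 - down / period
print(f"availability: {availability:.3%}")  # → availability: 99.429%
```

Reporting availability alongside model accuracy keeps the "does it stay up?" question on the same dashboard as the "is it right?" question, which is exactly the pairing the panel argued for.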

MLOps teams will also need to develop a process to ensure the right kind of data is actually fed to the model on an appropriate timescale. Using our CT hemorrhage detection example from above, how is that CT brain study triggered to be read by the AI system? Is it based on certain DICOM header information? Does the radiology technologist protocoling the scan activate the AI system? Does the radiologist reading the scan trigger it? Is it up to the referring clinician and their indication for ordering the scan? The process by which data ingest occurs ties into usability: the more difficult or time-consuming it is for the correct data to reach the endpoint, the less likely it is that a clinical AI tool will be successfully implemented.
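The DICOM-header option might look like the sketch below. Modality, StudyDescription, and BodyPartExamined are standard DICOM attribute keywords, but the matching rules and keyword list are illustrative assumptions; production routing would read them from the actual DICOM dataset (e.g., via pydicom) and would need far more robust rules.

```python
# Route a study to a hemorrhage-detection model based on DICOM
# header fields, represented here as a plain dict for illustration.
HEAD_KEYWORDS = ("head", "brain")  # illustrative matching rule

def should_trigger_ai(headers: dict) -> bool:
    """Return True if the study looks like a CT of the head."""
    if headers.get("Modality") != "CT":
        return False
    text = " ".join(
        headers.get(k, "") for k in ("StudyDescription", "BodyPartExamined")
    ).lower()
    return any(kw in text for kw in HEAD_KEYWORDS)

print(should_trigger_ai(
    {"Modality": "CT", "StudyDescription": "CT HEAD W/O CONTRAST"}))  # True
print(should_trigger_ai(
    {"Modality": "MR", "BodyPartExamined": "HEAD"}))  # False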

In addition to technical considerations, the discussion also delved into the economic aspects of MLOps, particularly return on investment (ROI). As pointed out by Steve Langer, a Professor of Diagnostic Physics and Imaging Informatics at Mayo Clinic, the SIIM Machine Learning Industry Liaison Subcommittee has explored ROI in the context of diagnostic assistance, and at this time there is no financial ROI for diagnostic assistance. There may be gains in efficiency for a radiology department, but no direct change in revenue. Therefore, it’s imperative to determine how often these clinical AI models are actually being used and to what extent end users find these tools useful when integrated into workflows. This understanding of ROI is important for increasingly common conversations with executives in the C-suite who see these technologies as a shiny new tool, or with physician colleagues in other specialties who hear a sales pitch for a new AI product and want the radiologists who read their studies to implement it. Radiologists and informatics teams involved in MLOps need to be ready for these conversations and have a process by which they respond to requests for new AI services.
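"How often is it used?" can be answered with two simple ratios if the team logs the right counts. The counts below are invented for illustration, and what counts as an "eligible" study or a meaningful "view" of the AI output would need to be defined locally.

```python
# Hypothetical monthly counts for a deployed triage model.
eligible_studies = 1200   # studies the tool could have processed
processed = 1100          # studies actually sent to the model
results_opened = 700      # times a radiologist viewed the AI output

utilization = processed / eligible_studies
engagement = results_opened / processed
print(f"utilization: {utilization:.0%}, engagement: {engagement:.0%}")
# → utilization: 92%, engagement: 64%
```

High utilization with low engagement, for example, would suggest the tool is wired into the pipeline but not actually informing reads, which is exactly the kind of evidence these C-suite conversations need.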

The first SIIM Enterprise Imaging Webinar Series event of 2024 underscores the growing need for MLOps in healthcare. In many ways, this is a new frontier for imaging informatics teams, and it’s an exciting time to be in this space, as there is no shortage of challenges that will need to be addressed in the near future. I want to personally thank each of the panelists for a lively and engaging discussion. Join us in next month’s webinar to continue the 2024 SIIM Enterprise Imaging Webinar Series, and feel free to share your thoughts and questions in the comments below!

Written by

Greg Grecco, Student, Indiana University School of Medicine

Publish date

Jan 30, 2024

Topic

  • Administration and Operations
  • Applications
  • Artificial Intelligence
  • Clinical Data Informatics
  • Data Sets & Management
  • Generative AI
  • Large Language Models
  • Machine Learning Challenges
  • Quality Improvement / Assurance
  • Research
  • Systems Management

Media Type

  • Blog

Audience Type

  • Clinician
  • Developer
  • Imaging IT
  • Researcher/Scientist
  • Vendor
