AI aids efforts to cut nuisance alerts for health care teams: Study

Study overview. Credit: Journal of the American Medical Informatics Association (2024). DOI: 10.1093/jamia/ocae019

A new study from Vanderbilt University Medical Center demonstrates the promise of artificial intelligence to help refine and target the myriad computerized alerts intended to assist doctors and other team members in day-to-day clinical decision-making.

These pop-up notifications advise users on anything from drug contraindications to gaps in patient care documentation. However, the targeting and exclusion criteria for these alerts often fall short, and up to 90% are ignored, contributing to “alert fatigue.” From an information technology perspective, relying solely on human experts to refine alert targeting is slow, expensive, and somewhat hit-or-miss.

“Across health care, most of these well-intentioned automated alerts are overridden by busy users. The alerts serve an essential purpose, but the need to improve them is clear to everyone,” said lead author Siru Liu, Ph.D., assistant professor of Biomedical Informatics at VUMC.

Liu, senior author Adam Wright, Ph.D., professor of Biomedical Informatics and director of the Vanderbilt Clinical Informatics Center, and a research team reported the study in the Journal of the American Medical Informatics Association.

Liu developed a machine-learning approach to analyze two years of data on user interactions with alerts at VUMC. Using patient characteristics, the model accurately predicted when users would dismiss specific alerts.
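
The article does not describe the study's actual modeling pipeline, but the prediction task resembles training a standard classifier on logged alert interactions. The sketch below is a minimal, hypothetical illustration in Python with scikit-learn; the file name, column names, and model choice are assumptions, not the authors' setup.

```python
# Illustrative sketch only -- not the study's code. Assumes a CSV log of alert
# firings with hypothetical columns: alert_id, patient_age, on_hospice (0/1),
# num_meds, department, and accepted (1 = user accepted, 0 = dismissed).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("alert_interactions.csv")  # hypothetical file name
X = df[["alert_id", "patient_age", "on_hospice", "num_meds", "department"]]
y = df["accepted"]

# One-hot encode categorical fields; pass numeric fields through unchanged.
preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["alert_id", "department"])],
    remainder="passthrough",
)

model = Pipeline([
    ("prep", preprocess),
    ("clf", GradientBoostingClassifier()),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
model.fit(X_train, y_train)

# Estimated probability that a given alert firing will be accepted by the user.
pred = model.predict_proba(X_test)[:, 1]
print("AUROC:", roc_auc_score(y_test, pred))
```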

She then used explainability techniques to peer inside the predictive model, understand its reasoning, and generate suggested improvements to alert logic. This step, an application of explainable artificial intelligence, or XAI, transformed the model’s predictions into rules describing when users are less likely to accept alerts. For example, “if the patient is a hospice patient, then the user is less likely to accept the breast cancer screening alert.”
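
Again purely as an illustration (the study's actual rule-extraction method may differ), one common explainable-AI technique for deriving such rules is to fit a shallow surrogate decision tree to the black-box model's predictions and read its branches as candidate conditions. The sketch below continues from the hypothetical example above.

```python
# Illustrative sketch only: a shallow surrogate decision tree fit to the
# black-box model's own predictions, whose branches read as candidate rules.
# Reuses the fitted `model` pipeline and X_train from the previous sketch.
from sklearn.tree import DecisionTreeClassifier, export_text

X_enc = model.named_steps["prep"].transform(X_train)
black_box_labels = model.named_steps["clf"].predict(X_enc)

surrogate = DecisionTreeClassifier(max_depth=3)  # shallow, so rules stay short
surrogate.fit(X_enc, black_box_labels)

# Prints nested if/else conditions over the encoded features -- candidate
# exclusion criteria that a human reviewer would still need to vet.
feature_names = list(model.named_steps["prep"].get_feature_names_out())
print(export_text(surrogate, feature_names=feature_names))
```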

Out of 1,727 suggestions analyzed, 76 were found to match later manual updates to VUMC alerts, and another 20 were found to align with best practices as determined through interviews with clinicians. The authors calculated that these 96 recommendations would have eliminated 9.3% of the nearly 3 million alerts analyzed in the study, cutting disruptive pop-ups while maintaining patient safety.

“The alignment of the model’s suggestions with manual adjustments made by clinicians to alert logic underscores the robust potential of this technology to enhance health care quality and efficiency,” Liu said. “Our approach can identify areas overlooked in manual reviews and transform alert improvement into a continuous learning process.”

Beyond refining alerts, she added, the methodology uncovered situations indicating problems in workflow, education or staffing. In this way, the approach might more broadly improve quality: “The transparency of our model unveiled scenarios where alerts are dismissed due to downstream issues beyond the alerts themselves.”

Liu and colleagues have several related projects under consideration, including a multisite prospective study of how using machine learning to improve clinical decision support (CDS) affects patient care; designing an interface for CDS experts to visualize the XAI process and evaluate model-generated suggestions; and exploring the capabilities of large language models such as ChatGPT for optimizing CDS alerts based on user comments and the current research literature.

More information:
Siru Liu et al, Leveraging explainable artificial intelligence to optimize clinical decision support, Journal of the American Medical Informatics Association (2024). DOI: 10.1093/jamia/ocae019

Provided by
Vanderbilt University Medical Center

Citation: AI aids efforts to cut nuisance alerts for health care teams: Study (2024, February 22), retrieved 22 February 2024 from https://medicalxpress.com/news/2024-02-ai-aids-efforts-nuisance-health.html
