Artificial intelligence (AI) is reshaping healthcare and is expected to have a dramatic impact on patient safety as the technology is further developed and refined.1–7 AI is expected to improve patient safety in key areas, such as increasing the efficiency of clinical decisions and care,2,4,6 reducing human error,2,3,6 offering risk prediction and early detection of changes in patients’ condition,2–4 supporting system-level safety,2,4 and offering insights by aggregating many data sources.2,4 Despite the optimism surrounding AI, the following patient safety concerns have been raised: AI models trained on biased or incomplete data,1,2,4,6–8 staff becoming overly reliant on and biased toward the recommendations provided by AI,1,6,8 staff not trusting the “black box,”2,6,7 AI giving erroneous recommendations for individual patients with incomplete and/or inaccurate records,6,8 staff being overwhelmed by too much information or too many notifications,2 and AI-based tools being implemented prior to sufficient testing and validation.4,6–8
PA-PSRS Reports of Events Involving AI
Given the concerns about AI and patient safety, we conducted a preliminary exploration of the Pennsylvania Patient Safety Reporting System (PA-PSRS) for events that involve AI, either causing the event or preventing/detecting the event. Based on a limited sample of event reports, we found that AI was primarily involved in monitoring patients for bed exits, interpreting data collected from patient monitoring devices, reading images (e.g., X-ray, CT), and note dictation. We found that events occurred at both small and large hospitals, and across a range of care areas (e.g., cardiovascular unit, emergency department, general medicine, imaging, orthopedic, surgical unit).
In most of the event reports within our sample, AI was used to prevent/detect issues. For example, AI was involved in numerous events in which a human failed to detect a significant image finding (i.e., a false negative), but AI detected the misread. In other instances, AI analyzed patient behavior to predict/detect a patient exiting their bed because staff were concerned the patient was at risk of a fall or disorientation (this AI technology is more advanced than a traditional bed alarm). In these bed-related events, the reports frequently described scenarios in which the AI technology alerted staff that the patient was preparing to exit their bed, but staff were still unable to reach the patient before they fell. Among the instances where AI was used to prevent/detect issues, a majority of the event reports described AI as being proactively implemented, but in some events it was described as being reactively implemented with the goal of preventing the same issue from occurring again.
We also identified a limited number of instances where AI caused or contributed to events, either by misreading an image or a monitoring device, or when a new AI program began producing a much greater quantity of information that overwhelmed staff with notifications and reportedly delayed their identification of an urgent finding.
Future Directions and Conclusion
Numerous sources have warned the healthcare community that, despite the intended benefits of AI, it could create a broad range of risks to patient safety.1,2,6,7,9 Despite the potential risks, we have identified very few PA-PSRS reports that describe AI as a cause of or contributing factor to patient safety events. However, AI-related events may be underdetected, and their true frequency could be higher than what is reflected in our current data. For example, AI involvement may not be recognized during monitoring and analysis of PA-PSRS reports if reporters do not mention “artificial intelligence,” “AI,” or the name of an AI-enabled software/device.10 Similarly, reporters may not always recognize when AI is involved, such as when staff use an AI-enabled software/device but are unaware that AI is being used or that the design of the AI somehow contributed to the event.9 A lack of understanding of how AI contributed to an event will likely result in important information being absent from the patient safety event report. Finally, events involving AI might also be underreported in more nuanced circumstances, such as when AI-enabled technology (e.g., clinical decision support [CDS]) provides a nonoptimal or erroneous recommendation that the clinician follows and that is not identified as such until a much later date.
Leaders and staff at healthcare facilities need to be vigilant in detecting and reporting patient safety events involving AI.9,11 The reporting of events will enable the Patient Safety Authority to identify statewide patterns and share lessons learned across Pennsylvania.
Disclosure
The author declares that they have no relevant or material financial interests.
This article was previously distributed in an October 8, 2025, newsletter of the Patient Safety Authority, available at https://patientsafety.pa.gov/newsletter/Pages/newsletter-oct-2025.aspx
About the Author
Matthew A. Taylor (MattTaylor@pa.gov) is a research scientist on the Data Science & Research team at the Patient Safety Authority, where he conducts research, uses data to identify patient safety concerns and trends, and develops solutions to prevent recurrence.