
Key focus areas of Medical Device AI Compliance

In this blog, we will discuss the key focus areas of medical device AI compliance, focusing on the FDA and European regulatory frameworks. Elements are taken from the FDA guidance on Good Machine Learning Practice (GMLP), predetermined change control plans, and transparency, combined with elements of the Team-NB Questionnaire on AI under the MDR/IVDR framework, as well as requirements from the AI Act.

Risk management

A core element of any medical device technical file is the risk file, in which device-specific hazards and hazardous situations are identified. For AI-based medical devices this includes AI-specific risks, among them data management risks, which must cover the complete lifetime of the device.

AI-specific risks include risks related to data quality, bias, overtrust, data drift, retraining, and more. A good starting point for identifying and addressing these risks is BS/AAMI 34971, as it includes a discussion of specific AI risks and practical examples of machine learning-related hazards, foreseeable sequences of events, hazardous situations, harms, and risk control measures.
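To make one of these risks concrete, the sketch below shows a possible risk control measure for data drift: a two-sample Kolmogorov-Smirnov test comparing a feature's training-time distribution with what the deployed device sees in the field. The feature values, the choice of scipy, and the significance threshold are illustrative assumptions, not prescribed by any of the standards above.

```python
# Minimal sketch of a data-drift check as a risk control measure.
# Assumes numpy/scipy; the feature data and 0.05 threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference: np.ndarray, production: np.ndarray,
                   alpha: float = 0.05) -> bool:
    """Two-sample Kolmogorov-Smirnov test comparing the training-time
    (reference) distribution of a feature with its production distribution."""
    result = ks_2samp(reference, production)
    return result.pvalue < alpha  # low p-value: distributions likely differ

# Usage: compare a feature captured at training time with recent field data.
rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
field_feature = rng.normal(loc=0.4, scale=1.0, size=1000)  # shifted input
if drift_detected(train_feature, field_feature):
    print("Data drift detected: trigger the predefined risk control action.")
```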

The AI Act uses the same concept of harm as ISO 14971, but adds disruption of critical infrastructure and infringement of obligations regarding fundamental rights.

Model selection and training

The regulatory process demands justification for model selection decisions. Developers must document why a particular model architecture was chosen and provide evidence that the selected model is suitable for the intended purpose. This includes validation data showing the model achieves the required accuracy and precision for its medical application. 

Evidence collection to support claims about model performance is a regulatory requirement. Regulators expect comprehensive documentation of the AI system's training process, performance metrics, and limitations to ensure it meets safety and effectiveness standards. 
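As an illustration of what such evidence might look like in practice, the sketch below computes headline validation metrics on a held-out set and packages them into a structured record for the technical file. The metric selection, function names, and example data are assumptions for illustration only.

```python
# Sketch of collecting validation evidence for the technical file.
# Assumes scikit-learn; the example labels are illustrative.
import json
from sklearn.metrics import accuracy_score, precision_score, recall_score

def validation_record(y_true, y_pred, model_version: str) -> dict:
    """Compute headline metrics on a held-out validation set and return a
    structured record suitable for inclusion in the documentation."""
    return {
        "model_version": model_version,
        "n_samples": len(y_true),
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "sensitivity": recall_score(y_true, y_pred),  # recall = sensitivity
    }

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # held-out ground truth
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model outputs on the same cases
print(json.dumps(validation_record(y_true, y_pred, "1.2.0"), indent=2))
```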

Change control plans

AI models can be retrained after deployment in order to improve performance. It is important that protocols are developed beforehand, specifying the conditions under which retraining is allowed and describing the boundaries and threshold values that ensure an update is safe to use. These plans are assessed as part of the regulatory approval of the AI device, allowing the manufacturer to retrain the device without additional approval as long as the change stays within the defined boundaries.
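A minimal sketch of such a gate is shown below: a retrained model is accepted only if its metrics stay inside pre-specified boundaries. The threshold values and metric names are purely illustrative; real boundaries would come from the approved predetermined change control plan.

```python
# Sketch of a pre-specified change control gate for a retrained model.
# All threshold values below are illustrative assumptions.
BOUNDARIES = {
    "sensitivity": 0.92,   # must not fall below this value
    "specificity": 0.90,   # must not fall below this value
    "max_auc_drop": 0.02,  # allowed degradation vs. the approved model
}

def within_change_control(new_metrics: dict, approved_auc: float) -> bool:
    """Return True only if the retrained model stays inside the boundaries
    defined before deployment; otherwise a new regulatory review is needed."""
    return (
        new_metrics["sensitivity"] >= BOUNDARIES["sensitivity"]
        and new_metrics["specificity"] >= BOUNDARIES["specificity"]
        and approved_auc - new_metrics["auc"] <= BOUNDARIES["max_auc_drop"]
    )

candidate = {"sensitivity": 0.94, "specificity": 0.91, "auc": 0.955}
print(within_change_control(candidate, approved_auc=0.96))  # True: deployable
```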

Deployment and post-market surveillance

After development, ensuring safe and effective deployment often requires further evidence collection, sometimes including clinical data. Companies must implement robust monitoring systems to track real-world performance and develop processes for capturing AI decisions and comparing them with clinical outcomes.
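The sketch below illustrates one way such a comparison could be structured: each AI decision is logged, matched later with the confirmed clinical outcome, and the agreement rate is checked against an alert threshold from the surveillance plan. The record fields and the threshold are illustrative assumptions.

```python
# Sketch of post-market performance monitoring: log each AI decision, then
# periodically compare logged decisions with confirmed clinical outcomes.
# Field names and the 0.90 alert threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CaseRecord:
    case_id: str
    ai_output: int         # model decision as shown to the clinician
    clinical_outcome: int  # ground truth established later (e.g., pathology)

def agreement_rate(records: list[CaseRecord]) -> float:
    """Fraction of cases where the AI output matched the clinical outcome."""
    matches = sum(r.ai_output == r.clinical_outcome for r in records)
    return matches / len(records)

log = [CaseRecord("a1", 1, 1), CaseRecord("a2", 0, 0), CaseRecord("a3", 1, 0)]
rate = agreement_rate(log)
if rate < 0.90:  # alert threshold from the surveillance plan
    print(f"Agreement {rate:.0%} below threshold: open an investigation.")
```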

Unlike many consumer AI applications, AI in healthcare cannot implement autonomous continuous learning. Instead, structured approaches to model retraining with human oversight are required. Regulatory bodies are increasingly emphasizing post-market surveillance requirements for AI devices, including monitoring ongoing performance and implementing processes to identify and resolve problems that emerge during real-world use. 

Quality culture and human-AI interaction

Successful AI implementation in medical devices depends on a quality-oriented organizational culture that embeds quality considerations into every aspect of development, deployment, and monitoring. As the FDA notes, this "culture of quality and organizational excellence" significantly reduces the risk of unsafe or ineffective devices.

Equally important is the focus on transparent AI systems that ensure human operators can understand how the AI works and recognize potentially problematic outputs. Regulators are increasingly concerned with human-AI team performance: not just algorithmic accuracy, but how effectively humans and AI work together in clinical settings. This approach acknowledges that AI in healthcare doesn't operate independently but functions as part of a human-AI team for patient safety.

QMS extension

It is important that all described elements are supported by QMS processes, so that their execution is recorded and controlled. Manufacturers will need to extend their QMS with corresponding procedures. For AI Act compliance, an update of ISO/IEC 42001, the AI management system standard, is being prepared as part of the first harmonization request supporting the AI Act.


Coenraad Davidsdochter, MSc
Post date: June 20, 2025