
AI Compliance foundation for medical devices

Coenraad Davidsdochter, MSc. Coenraad is an expert with more than 20 years of experience in developing software for the medical field. For the past 10 years he has worked mainly in regulatory affairs, quality assurance, information security, GDPR and data protection at ICT Healthcare Technology Solutions (formerly BMA).

Artificial intelligence (AI) has become integral to many medical devices in today's rapidly evolving technological landscape. However, organizations integrating AI capabilities into healthcare products often underestimate the regulatory implications, creating significant business risks. We explore the regulatory compliance challenges for AI-enabled medical devices and offer strategies to close compliance gaps and ensure there is sufficient evidence for the safe use of these devices.

Medical devices incorporating AI carry additional complexities compared to traditional software devices. While they share common risks, the AI component introduces specific challenges that require careful consideration throughout the product lifecycle, such as:

    • How is the AI model developed, and what risks are associated with the development process?
    • How is the AI system deployed and monitored in real-world settings? (A monitoring sketch follows this list.)
    • What procedures govern model retraining and performance evaluation?
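
To make the deployment and monitoring question concrete, the following minimal Python sketch shows what an automated post-market performance check might look like. It is an illustration only, not part of any cited guidance: the choice of AUC as the metric, the validated baseline and the degradation margin are assumed values.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Assumed values: the AUC established during pre-market validation and the
    # acceptable drop before the model is escalated for review (illustrative).
    VALIDATION_AUC = 0.92
    DEGRADATION_MARGIN = 0.05

    def check_performance(y_true: np.ndarray, y_score: np.ndarray) -> bool:
        """Return True if the deployed model still meets its validated performance."""
        current_auc = roc_auc_score(y_true, y_score)
        if current_auc < VALIDATION_AUC - DEGRADATION_MARGIN:
            print(f"ALERT: AUC dropped to {current_auc:.3f}; escalate for review.")
            return False
        print(f"OK: AUC {current_auc:.3f} is within the validated range.")
        return True

In a real device, such a check would be one output of a QMS-governed post-market surveillance process rather than a standalone script.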

The United States FDA pioneered a "Good Machine Learning Practice" approach that takes a total product lifecycle perspective, addressing all elements where risks can occur. This way of thinking has been adopted by many other jurisdictions and was recently adopted by the IMDRF with only slight modification. Building on these principles, the FDA published further guidance on predetermined change control plans and transparency.

In the meantime, in Europe, the AI Act has been published. From 2027 it will apply to high-risk AI systems, which include all medical devices that must undergo Notified Body conformity assessment under the MDR/IVDR. The AI Act covers essentially the same areas as the FDA guidance but adds environmental impact and ethical considerations. Devices currently on the European market are reviewed under the MDR/IVDR only; for these, Team-NB has published a questionnaire to help manufacturers identify implementation and documentation needs, including suggested standards to use for compliance.

Once the AI Act is applicable to medical devices, there is to be a single conformity assessment covering both the MDR/IVDR and the AI Act. For this assessment, the technical documentation required by the AI Act is to be integrated into the MDR/IVDR technical file, and the AI Act QMS requirements are to be integrated into the manufacturer's ISO 13485-based QMS.

Specific standards to support AI Act compliance are under development, covering well-known medical device topics such as QMS, risk management and usability validation.

Because a key set of guidelines on general-purpose AI has been delayed and major AI Act implementation deadlines loom at the end of the summer, the Commission is considering a "stop the clock" proposal for the AI Act. The discussion between the Member States and the Commission mostly concerns general-purpose AI systems; there is no direct link with high-risk AI systems, and potential consequences for medical device compliance deadlines remain to be seen.

Critical elements of AI regulatory compliance

Model selection, together with data collection and management, forms the foundation of regulatory compliance for AI medical device development, because these choices determine the performance characteristics of the AI device. It is critical to select the model that best suits the intended purpose and to ensure that the training data represents the intended patient population, considering age, gender, and other demographic and medical-condition-related variables. A representativeness check of this kind is sketched below.
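
As an illustration of what such a representativeness check might look like in practice, the following minimal Python sketch compares the age distribution of a training set against an intended population profile. The column name age_band, the reference distribution and the tolerance are illustrative assumptions, not requirements from any standard.

    import pandas as pd

    # Intended population profile for the device (illustrative values only).
    INTENDED_AGE_DISTRIBUTION = {"18-40": 0.30, "41-65": 0.45, "65+": 0.25}

    def audit_age_coverage(train_df: pd.DataFrame, tolerance: float = 0.10) -> None:
        """Flag age bands where the training data deviates from the intended population."""
        observed = train_df["age_band"].value_counts(normalize=True)
        for band, expected in INTENDED_AGE_DISTRIBUTION.items():
            actual = float(observed.get(band, 0.0))
            status = "OK" if abs(actual - expected) <= tolerance else "GAP"
            print(f"{band}: expected {expected:.0%}, observed {actual:.0%} [{status}]")

    # Example usage with a toy dataset:
    demo = pd.DataFrame({"age_band": ["18-40"] * 10 + ["41-65"] * 60 + ["65+"] * 30})
    audit_age_coverage(demo)

The same pattern extends to gender and relevant medical-condition variables.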

Regulatory bodies expect complete separation between training, testing, and validation datasets. Typically, about 80% of the data is allocated to training and 20% to testing and validation, as sketched below. This separation ensures independent verification of AI performance in healthcare applications. Additionally, all selection criteria and dataset characteristics must be thoroughly documented to demonstrate due diligence in data preparation.
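
A minimal sketch of such a partitioning, assuming a scikit-learn workflow; the 80/10/10 ratios, the stratification choice and the synthetic data are illustrative, not prescribed by any regulation.

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Synthetic placeholder data standing in for a curated clinical dataset.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 12))    # feature matrix
    y = rng.integers(0, 2, size=1000)  # binary outcome labels

    # 80% for training; the remaining 20% is split evenly into independent
    # test and validation sets. Stratifying on the label keeps class balance
    # comparable across partitions, and fixed seeds make the split
    # reproducible, which supports the documentation expectations above.
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.20, stratify=y, random_state=42
    )
    X_test, X_val, y_test, y_val = train_test_split(
        X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=42
    )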

It is important that these activities are driven by QMS processes to ensure controlled, documented, and auditable execution.

Another critical area is the risk assessment of the AI device, because specific AI-related risks must be included in the medical device risk assessment; typical examples are performance degradation due to data drift after deployment, or bias introduced by an unrepresentative training set. A good resource is BS/AAMI 34971, Application of ISO 14971 to machine learning in artificial intelligence, which includes strategies for risk assessment and provides valuable examples of risk assessment and mitigation strategies.
