For most of us, it’s time for the summer holidays. When packing your bags, it is always good to have some quality literature with you so you can read your way through the lazy days. Only, for AI medical device manufacturers these rustic days won’t be as rustic anymore, for AI is hot this summer. This year, the EU AI Regulation proposal and the DRAFT BS/AAMI 34971 Guidance on the application of ISO 14971 to AI and machine learning cannot be missed on your reading list, because both documents are in consultation mode until the beginning of August. So, roll up your sleeves during the holidays and read through these documents carefully, for there is still time to stand up and speak up, gather all your forces and make your voice heard!
Scope of the European AI Regulation
The AI Regulation covers AI systems in general and is not specifically aimed at medical devices. It describes requirements for so-called high-risk devices, which will cover all medical AI devices that need a notified body assessment under the MDR or IVDR. The regulation describes concepts of transparency and human oversight that, together with the requirements for users of high-risk AI systems, must give assurance that the AI system operates in a controlled way. To aid the development of AI devices, the regulation introduces the concept of AI regulatory sandboxes, which are intended to ensure sound training and development of high-risk AI systems while protecting the privacy of the data subjects.
To determine what one's obligations are in a certain jurisdiction, it is important to have clear definitions of the different roles mentioned in the regulations, as well as one clear definition of what artificial intelligence actually is. We have seen that the definitions in several documents are not aligned.
In the EU regulation 2021/0106(COD) it is defined as follows:
(1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Techniques listed in Annex I:
(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods.
This is different from the definition we used in our previous blog about AI. In that blog we excluded look-up tables and statistical approaches. However, now that the draft regulation explicitly states that they are in scope, we will include them in our definition as well.
The definition used in BS/AAMI 34971 is also slightly different: that document distinguishes between artificial intelligence and machine learning, rather than defining machine learning as a part of AI.
Artificial intelligence: capability of a system to perform tasks or develop data processing systems that perform functions normally associated with human intelligence.
Machine learning: function of a system that can learn from input data instead of strictly following a set of specific instructions.
You see that the details of the definitions differ, but both cover the same scope in their own way.
Differences in obligations/ definitions
Let's move on to the more interesting part, or at least the part that will lead to more discussion. There is some terminology in both regulations (MDR and AI regulation) with different definitions.
When you look at both regulations and compare the definitions the following observations can be made:
- The AI regulation contains 23 new definitions that are not stated in the MDR/IVDR. Fifteen definitions differ substantially between the regulations, and only 6 definitions are similar.
- The economic operators described in both regulations are similar; however, the described obligations differ. Especially the obligations of the distributor differ between the two regulations:
- The obligations depend on the classification of the AI software in the AI regulation.
Like in the MDR, the classification in the AI regulation depends on the risk associated with the intended use. AI software can be classified in class I, IIa, IIb or III according to the MDR (Annex VIII), where class I honestly is rather unlikely. According to the AI regulation (Article 6), AI software can be classified as low-risk or high-risk software, where healthcare solutions will probably all be defined as high-risk. Theoretically, it might be possible that a healthcare AI solution is classified as low-risk.
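The interplay between the two classification schemes can be sketched as a small decision function. This is an illustrative assumption based on the reasoning above, not text from either regulation; the function name and the mapping are ours:

```python
# Hypothetical sketch of how the two classification schemes might interact.
# Assumption: any medical AI device requiring notified body involvement
# under the MDR (class IIa or higher) ends up high-risk under the AI
# regulation; only the unlikely MDR class I case could remain low-risk.

MDR_CLASSES = ("I", "IIa", "IIb", "III")

def ai_act_risk_level(mdr_class: str) -> str:
    """Return the assumed AI regulation risk level for a medical AI device."""
    if mdr_class not in MDR_CLASSES:
        raise ValueError(f"unknown MDR class: {mdr_class!r}")
    if mdr_class in ("IIa", "IIb", "III"):
        return "high-risk"
    return "low-risk"  # theoretical MDR class I case
```

Under this assumption, practically every medical AI device a manufacturer brings through notified body assessment lands in the high-risk bucket.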
Where a distributor under the MDR has a fixed set of obligations, in the AI regulation the obligations for a distributor of a high-risk device are more extensive than for a distributor of a low-risk device. Since medical device software can fall in either category, one should look closely at which devices the stricter obligations apply to. However, since most medical (IVD) devices will be classified as high-risk devices (class IIa or higher according to the MDR), it is safe to assume that this set of obligations will be applicable to most distributors of AI software in healthcare applications.
A distributor under the MDR has to verify conformity with that regulation by checking the presence of the CE mark, the information supplied by the manufacturer and the availability of the DoC: all tasks that are relatively easy to perform. The obligations in the AI regulation, however, state that the distributor has to verify that the manufacturer and the importer have complied with the obligations set out in the regulation, a task which in practice is far more difficult than verifying the presence of some documents.
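The contrast between the two sets of distributor checks can be made concrete in a small sketch. The check names below are our paraphrases of the obligations discussed above, not the regulations' wording:

```python
# Illustrative sketch only: field names paraphrase the distributor
# obligations discussed above; they are not the legal text.
from dataclasses import dataclass

@dataclass
class DistributorChecks:
    # MDR: relatively easy document-presence checks
    ce_mark_present: bool = False
    manufacturer_info_supplied: bool = False
    doc_available: bool = False  # Declaration of Conformity
    # AI regulation: substantive verification of others' compliance
    manufacturer_obligations_verified: bool = False
    importer_obligations_verified: bool = False

    def mdr_ok(self) -> bool:
        return all([self.ce_mark_present,
                    self.manufacturer_info_supplied,
                    self.doc_available])

    def ai_regulation_ok(self) -> bool:
        # Passing the MDR document checks is not enough here.
        return self.mdr_ok() and all([self.manufacturer_obligations_verified,
                                      self.importer_obligations_verified])
```

The point of the sketch: a distributor can tick every MDR box and still fall short under the AI regulation, because the latter asks for verification of compliance rather than presence of documents.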
There are more cases where the AI regulation is stricter than the MDR. If you already have an AI product on the market, we suggest comparing the exact wording of both regulations and implementing the necessary changes in your QMS or technical documentation.
- Article 11 points out several minimum requirements for the technical documentation that are not mentioned in the MDR
- e.g., procedures on how human oversight is covered
- Article 13 covers transparency towards the user about predetermined changes. These must be mentioned in the IFU
- Article 17 adds obligatory procedures to the QMS related to data management.
- Article 40 defines the use of harmonised standards, but be aware that the standards harmonised with the MDR may differ from the standards harmonised with the AI regulation
- Article 54 on processing of personal data adds new requirements to those set out in the MDR. Many of those requirements are already covered by the GDPR, but there are still new requirements in there which you should look at.
Practical guidance on risk management
The EU AI proposal leaves it to the manufacturer to provide proof of compliance, but offers little information on how they are expected to do that. Fortunately, the draft BS AAMI 34971 document provides practical guidance on how to apply risk management to AI medical devices and machine learning. In this guidance, not only are the risk concepts for AI and machine learning explained, but examples are also provided. Special attention should be paid to Annex B, which provides examples of AI-related hazards, examples of events and circumstances, and examples of relationships between hazards, foreseeable sequences of events, hazardous situations, harm, and potential risk control measures.
If, however, after reading the EU AI proposal or draft BS AAMI 34971, you have the idea that some of the wording is just not quite right, consider yourself lucky, because both documents are currently in consultation mode and remarks can be submitted.
Different standards/ guidance documents in different regulatory domains
But why would you bother to submit anyway? Well, this new EU AI Regulation proposal is not specifically aimed at the healthcare domain, but has a much broader scope including military, law enforcement, education, employment, etc. For this new regulation there will be conformity assessments that will be integrated with the MDR/IVDR conformity assessment, if you’re so lucky that your Notified Body will apply for designation under this new AI regulation. But there will also be specific harmonized standards from outside the medical domain, for which equivalent standards may exist within the medical domain. Would ISO 13485 or ISO 14971 be accepted as the standards for the quality system and risk management under this AI regulation? We, as (IVD) medical device manufacturers, have the MDCG guidance document MDCG 2021-5 (see also the blog by Coenraad on this topic), but will the views expressed in this guidance also be respected by the AI Regulation Notified Bodies? There were times that manufacturers could get non-conformities for not using a specific standard…
At the end of June 2021, the WHO guidance on ethics and governance of artificial intelligence for health was released. A lot of the concepts and ideas of the EU AI proposal are also reflected in this document, as if the two have been written in conjunction, although the WHO document focusses on health. In addition to the EU AI proposal, the WHO document also warns of AI technologies that might disrupt the relationship between provider and patient, of the possible negative effect of AI systems on the healthcare workforce, and of too much power for large technology companies that have already collected huge amounts of data. Although the WHO document is primarily aimed at ministries of health, it is a good read for everyone involved with manufacturing, regulating medical devices, or dealing with the regulatory consequences of medical AI systems.
There are also some particular situations where it is unclear how the regulation will be applied. Imagine someone inside the Union downloads AI software from a website hosted by a supplier outside the EU. Who is the importer? The one who buys it first inside the EU? Or the party that provides it on a website accessible from the EU? Especially in the case of apps you can download from an app store, this can lead to interesting situations.
Now is your chance to change something
The AI regulation is, at the time of writing, in consultation mode, which gives you as a reader, and maybe even as an AI medical device manufacturer, the opportunity to comment on it. So, if there are points that do not seem appropriate or feasible to you, “Stand up and Speak up” and mobilize your forces to keep AI in medical devices manageable. If Qserve can help you with that, you know where to find us!