When AI Systems Meet Medical Devices

AI Systems in Medical Devices

AI is changing healthcare. It can detect patterns in images, predict patient deterioration, prioritise care pathways and automate routine decisions. But when AI performs or assists with medical tasks, it sits squarely inside two substantial regulatory regimes: the EU Medical Devices Regulations (MDR/IVDR) and the EU Artificial Intelligence Act (AI Act). EU regulators expect manufacturers to demonstrate both clinical safety and algorithmic governance for AI used for medical purposes.

This article looks at the key challenges that arise where the two regimes meet.

Ten friction points regulators will watch closely

1. Classification ambiguity: Is it a device, software, or an AI system?

At first glance this is bookkeeping. In practice it matters enormously: qualification under the MDR depends on intended purpose, while the AI Act treats as “high-risk” AI that is embedded in devices subject to third-party conformity assessment. Many borderline products are fact-sensitive: small differences in labelling, marketing or deployment can change the entire compliance path and the legal consequences.

Why that matters: a misclassification can lead to enforcement action, product withdrawal or onerous retrospective conformity assessments.

2. Lifecycle vs. one-time approvals: Who governs updates?

The AI Act treats model updates, retraining and drift as lifecycle phenomena that must be governed continuously. The MDR focuses on verification, validation and, crucially, whether changes trigger new conformity assessment procedures. The question of what counts as a “substantial modification” under the AI Act, and when that demands a fresh conformity assessment, is both technical and strategic.

Why that matters: the ability to update a model post-market (to improve performance or address bias) has huge product and commercial value, but it can also trigger reassessment and interrupt product availability.

3. Quality systems: Two grammars in one process

An ISO 13485-style QMS is the established baseline for medical devices. The AI Act adds expectations for AI-specific quality controls (data governance, model traceability, logging). Integrating the two without creating gaps or inconsistencies is a non-trivial governance exercise that often requires reshaping responsibilities inside the organisation.

Why that matters: poorly aligned QMS evidence may satisfy one regulator and fail the other, creating undesirable compliance exposure.
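
To make “model traceability and logging” concrete, below is a minimal sketch, in Python, of what an AI-specific audit record could look like alongside a conventional QMS: each prediction is logged with the model version and a hash of the input. The field names and the log_prediction helper are illustrative assumptions, not a format prescribed by ISO 13485 or the AI Act.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("model_audit")
logging.basicConfig(level=logging.INFO)

def log_prediction(model_version: str, input_payload: dict, prediction: dict) -> None:
    """Append a traceable record linking a prediction to the exact model version and input.

    The record structure is illustrative; align the fields with your own QMS
    and AI Act logging requirements.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties the output to a released model
        "input_hash": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),                   # provenance without storing raw patient data
        "prediction": prediction,
    }
    logger.info(json.dumps(record))

# Example: log a single triage score from a hypothetical model release
log_prediction("triage-model 1.4.2", {"age": 67, "lactate": 3.1}, {"risk": "high", "score": 0.87})
```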

4. Risk management: Clinical safety vs algorithmic harm

ISO 14971 deals well with device hazards. It was not designed to capture algorithmic risks like dataset bias, model inversion, data poisoning or systemic discrimination. The AI Act adds a separate expectation for algorithmic risk analysis and mitigation. Bridging the two, and deciding which residual risks are acceptable, requires both clinical judgement and a clear legal strategy.

Why that matters: algorithmic failures can produce both clinical harm and fundamental-rights violations. Both attract enforcement and can be reputationally catastrophic.
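
As a rough illustration of how an algorithmic risk entry might sit alongside an ISO 14971-style hazard analysis, the sketch below records both the clinical harm and the fundamental-rights dimension of a single risk. The structure and field names are our assumptions for illustration, not a template mandated by either regime.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmicRisk:
    """One illustrative risk-register entry spanning device hazards and algorithmic harms."""
    hazard: str                      # e.g. dataset bias against an under-represented subgroup
    clinical_harm: str               # downstream patient-safety consequence
    fundamental_rights_impact: str   # AI Act-style harm dimension
    mitigation: str
    residual_risk_acceptable: bool
    rationale: str = ""

entry = AlgorithmicRisk(
    hazard="Training data under-represents patients over 80",
    clinical_harm="Missed deterioration alerts in elderly patients",
    fundamental_rights_impact="Systematic disadvantage for an age group",
    mitigation="Augment training data; subgroup performance gate before release",
    residual_risk_acceptable=True,
    rationale="Subgroup sensitivity within the pre-specified margin after mitigation",
)
print(entry)
```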

5. Data governance: Provenance, representativeness and privacy

Both the MDR and the AI Act expect sound data practices, but from different angles. The MDR looks for data that supports clinical evidence, while the AI Act demands demonstrable dataset quality, representativeness and documentation. Layer the GDPR into the mix and you are balancing lawful processing, data minimisation and the desire for broad, diverse datasets that reduce bias.

Why that matters: poor data practices can derail regulatory approval, prompt investigations, and raise civil-liability exposure.
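
One way to make “representativeness” auditable is to compare subgroup shares in the training data against a reference distribution. The sketch below, with an arbitrary 5-percentage-point deviation flag and hypothetical reference shares, is only an illustration; the appropriate reference population and acceptance criteria depend on the device's intended purpose.

```python
import pandas as pd

def representativeness_report(dataset: pd.DataFrame, reference_shares: dict, column: str) -> pd.DataFrame:
    """Compare subgroup shares in a training dataset against an expected reference distribution.

    'reference_shares' and the deviation threshold are illustrative assumptions,
    not regulatory acceptance criteria.
    """
    observed = dataset[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": round(share, 3),
            "expected_share": expected,
            "flag": abs(share - expected) > 0.05,  # arbitrary 5-point deviation flag
        })
    return pd.DataFrame(rows)

# Example with a toy dataset and hypothetical reference shares
data = pd.DataFrame({"sex": ["F"] * 30 + ["M"] * 70})
print(representativeness_report(data, {"F": 0.5, "M": 0.5}, "sex"))
```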

6. Transparency and human oversight: How much, and to whom?

The AI Act imposes transparency obligations and a requirement that high-risk systems allow meaningful human oversight. The MDR requires clear Instructions for Use and safe clinical workflows. The practical problem is aligning clinician-facing information with the AI Act’s more technical transparency needs without creating a confusing or legally risky set of documents.

Why that matters: insufficient or misleading transparency increases the risk of misuse, regulatory queries and liability claims.

7. Cybersecurity: New attack surfaces

AI adds novel vulnerabilities, and the attack surface now includes models, training pipelines and datasets. Regulators expect cybersecurity controls that protect not only patient data but also model integrity, because interfering with a model can translate directly into patient harm.

Why that matters: cybersecurity incidents can trigger simultaneous MDR vigilance and AI Act reporting obligations, and they can rapidly escalate reputational harm.
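
A small example of what “protecting model integrity” can mean in practice: verifying a released model artifact against a hash recorded in the technical file before it is loaded. The file path and digest below are placeholders; a real deployment would add signing, access controls and protections for the training pipeline as well.

```python
import hashlib
from pathlib import Path

# Hypothetical expected digest recorded at release in the technical file
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_model_artifact(path: Path, expected_sha256: str) -> bool:
    """Check a released model file against its recorded hash before loading it.

    A minimal integrity gate only; it does not by itself protect training data
    or the surrounding pipeline.
    """
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256

model_path = Path("models/triage-model-1.4.2.onnx")  # illustrative path
if model_path.exists() and not verify_model_artifact(model_path, EXPECTED_SHA256):
    raise RuntimeError("Model artifact failed integrity check; refusing to load")
```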

8. Performance evaluation: Clinical endpoints vs statistical metrics

Regulators will ask for clinical outcomes evidence and rigorous technical validation (calibration, subgroup performance, drift metrics). Clinical studies can serve both purposes, but aligning study design, endpoints and statistical plans so they meet dual expectations is complex.

Why that matters: weak or misaligned evidence can mean delayed market access or constrained indications for use.
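
As an illustration of technical validation alongside clinical endpoints, the sketch below reports discrimination (AUROC) and calibration (Brier score) per subgroup on synthetic data. The choice of metrics and subgroups is an assumption for illustration; a real statistical plan would pre-specify endpoints, subgroups and acceptance criteria.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

def subgroup_performance(y_true, y_prob, groups):
    """Report discrimination (AUROC) and calibration (Brier score) per subgroup."""
    y_true, y_prob, groups = map(np.asarray, (y_true, y_prob, groups))
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        report[str(g)] = {
            "n": int(mask.sum()),
            "auroc": round(roc_auc_score(y_true[mask], y_prob[mask]), 3),
            "brier": round(brier_score_loss(y_true[mask], y_prob[mask]), 3),
        }
    return report

# Toy example with synthetic labels, scores and a sex attribute
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
p = np.clip(y * 0.6 + rng.normal(0.2, 0.2, 200), 0, 1)
g = rng.choice(["F", "M"], 200)
print(subgroup_performance(y, p, g))
```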

9. Conformity assessment: Who audits you, and for what?

For high-risk AI in medical devices, the MDR’s notified-body pathway will typically be the vehicle for conformity assessment, which means a single audit trail needs to demonstrate compliance with both regimes. But Notified Bodies are still operationalising how they assess AI Act requirements, and differences in approach between buyers, suppliers and auditors create real commercial uncertainty.

Why that matters: lack of early alignment with your Notified Body can force expensive rework or narrow the scope of certification.

10. Post-market monitoring: One dashboard or several?

Both regimes demand active monitoring after market entry, but they measure different things and set different thresholds. The AI Act brings expectations for continuous algorithmic monitoring (drift, fairness metrics) on top of MDR vigilance (adverse events, clinical performance). Integrating those systems and determining what triggers regulatory reporting or corrective action is harder than it sounds.

Why that matters: poor monitoring design can result in missed safety signals, late reporting, and enforcement risk.
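
To show what continuous algorithmic monitoring can look like at its simplest, the sketch below flags drift between development-time and live model output distributions using a two-sample Kolmogorov–Smirnov test. The test choice, threshold and escalation path are assumptions; they would normally be fixed in the post-market monitoring plan.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_check(reference_scores, live_scores, p_threshold: float = 0.01) -> dict:
    """Flag distribution drift between development-time and live model outputs."""
    stat, p_value = ks_2samp(np.asarray(reference_scores), np.asarray(live_scores))
    return {
        "ks_statistic": round(float(stat), 3),
        "p_value": float(p_value),
        "drift_flag": p_value < p_threshold,
    }

# Toy example: live scores shifted upwards relative to the validation set
rng = np.random.default_rng(1)
reference = rng.beta(2, 5, 1000)
live = rng.beta(3, 4, 1000)
print(drift_check(reference, live))
```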

How we can help

We are happy to work with your (in-house) compliance team to identify legal gaps and practical compliance questions arising at the AI–MDR intersection.

Whether you need a focused gap analysis, a prioritised remediation plan, drafting and revision of technical-file language or policies, targeted staff training, or support engaging your Notified Body and other stakeholders, we can provide pragmatic, defensible solutions tailored to your product and commercial objectives.

We work alongside engineering, clinical and legal stakeholders to translate regulatory uncertainty into clear, auditable actions, confidentially and with an eye to minimising market disruption.

Contact us to arrange a short scoping call and we’ll discuss your situation and next steps.

Disclaimer:
The content of this blog is provided for general informational purposes only and does not constitute legal advice. While we strive to ensure that the information is accurate and up to date, it may not reflect the most current legal developments or the specific circumstances of your organization. Readers should not act upon any information contained in this blog without first seeking professional legal counsel. No attorney-client relationship is created through your use of or reliance on the information provided herein.