Artificial intelligence is transforming the medical device sector by powering diagnostic tools, guiding robotic surgeries, and delivering personalised treatment recommendations. However, innovation is outpacing regulation. With the EU AI Act and the revised Product Liability Directive (PLD) both now in force, medical device manufacturers face a new, complex legal landscape marked by overlapping obligations and evolving liability risks.
In theory, the EU aims to harmonise AI safety, transparency, and liability rules. In practice, what prevails is a fragmented patchwork of compliance requirements, legal uncertainties, and open questions about responsibility, especially where AI systems act autonomously.
Here’s what manufacturers need to know:
1. High-Risk by Default: Facing a Dual Regulatory Burden
The EU AI Act classifies any AI system used in medical devices that impacts diagnosis, treatment, or patient monitoring as “high-risk.” This designation triggers stringent requirements: comprehensive risk management, quality assurance, data governance, logging and transparency, and mandatory human oversight mechanisms.
Crucially, AI-enabled medical devices must also comply with the EU Medical Device Regulation (MDR). This creates a dual conformity assessment process: one under the AI Act and one under the MDR. Although the AI Act includes “horizontal” provisions intended to harmonise the two assessments, substantial duplication remains, particularly for adaptive AI or software that evolves through updates. Manufacturers should anticipate added complexity and resource demands in achieving and maintaining compliance.
2. Legal vs. Technical Autonomy: Liability in an Increasingly Autonomous Environment
The AI Act requires “appropriate human oversight” of high-risk AI systems. Yet, in healthcare settings marked by resource constraints, what constitutes effective oversight remains unclear.
AI systems such as surgical robots or decision-support tools may operate semi-autonomously or independently within defined parameters. As technical autonomy grows, traditional liability frameworks become strained. For example, if an AI-driven robot makes a harmful decision based on probabilistic models, or a diagnostic tool errs due to biased data, who bears legal responsibility?
This scenario is more than theoretical. Courts will soon confront cases where no party directly “caused” the harm: the device functioned as intended, hospitals complied with instructions, and the AI operated within programmed limits. Current EU law provides no clear doctrinal resolution for such situations.
3. Withdrawal of the AI Liability Directive: A Regulatory Gap
Until early 2025, the European Commission pursued an AI Liability Directive (ALD) to address gaps in fault-based claims involving AI. The initiative aimed to create harmonised rules tailored to emerging autonomous technologies.
However, the ALD was quietly withdrawn amid political disagreement and fears of overlap with national tort laws, leaving a regulatory vacuum. Without an EU-level fault-based liability framework for AI, victims of AI-related harm must rely on fragmented national negligence laws or the revised PLD, each ill-equipped to fully address autonomous, evolving AI systems.
4. Product Liability: An Imperfect Fit for Complex AI
The updated PLD introduces AI-specific amendments: expanding the definition of “product” to include software and digital components; creating rebuttable presumptions of defectiveness and causation where proving them would be excessively difficult; and confirming strict liability for defective AI products without proof of fault.
Yet significant challenges remain:
- Causation and Defectiveness: Strict liability applies only if the product is “defective.” Defining defect in AI contexts is legally and technically fraught. For example, is a machine learning model that misdiagnoses 2% of cases defective if it still outperforms human experts? What threshold establishes unreasonableness?
- Multi-Party Liability: AI systems frequently combine modular inputs, such as software components, training data, and system integration, supplied by different entities. Assigning fault or causation across a dispersed supply chain remains complex and uncertain.
- Adaptive AI and Post-Deployment Evolution: The PLD presumes liability is tied to the product’s state at the time of market placement. Adaptive AI models that “learn” or modify behavior post-release challenge this assumption. Are manufacturers liable for defects arising from subsequent adaptations, or does liability reset?
5. Explainability and Transparency: Navigating the “Black Box” Dilemma
High-risk AI must meet explainability and transparency standards under the AI Act. However, many high-performance medical AI tools, especially deep learning models that identify subtle imaging or genetic patterns, are intrinsically opaque.
This creates legal tension: manufacturers must disclose meaningful information about system operation and limitations while protecting intellectual property and managing inherently inscrutable models. If an AI recommendation leads to harm, and neither clinicians nor developers can explain the underpinning logic, where does liability rest?
6. Post-Market Monitoring: Continuous Oversight, Unclear Liability
The AI Act requires ongoing post-market monitoring of AI system performance, complementing existing MDR obligations. Manufacturers must not only assess device performance in real-world use but also ensure that AI components behave consistently within acceptable risk parameters over time.
The PLD, however, is silent on liability arising from post-market adaptations. For instance, if demographic shifts or changes in clinical practice degrade model performance and the manufacturer fails to detect this, is the product defective, and is the manufacturer liable? The law remains ambiguous.
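What detecting such degradation might look like is, at bottom, an engineering question. The sketch below is purely illustrative (Python, hypothetical data, and a rule-of-thumb alert level rather than any legally mandated threshold): it shows one common monitoring signal, a population stability index comparing the patient demographics the model was developed on with those it now sees in the clinic.

```python
import numpy as np

def population_stability_index(baseline, current, n_bins=10):
    """Quantify how far a feature's post-deployment distribution has drifted
    from the distribution seen during development (higher = more drift)."""
    # Bin edges taken from the development data (deciles by default).
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range values
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)       # avoid log(0)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical example: the device was developed on an older population than
# the one it now encounters in the clinic.
rng = np.random.default_rng(0)
dev_ages = rng.normal(62, 10, 5_000)       # ages in the development dataset
clinic_ages = rng.normal(55, 12, 1_000)    # ages observed post-deployment
psi = population_stability_index(dev_ages, clinic_ages)
if psi > 0.25:   # common rule-of-thumb alert level, not a legal threshold
    print(f"PSI = {psi:.2f}: demographic shift detected; trigger a model performance review")
```

In practice a manufacturer would track many such signals, covering input features, output score distributions, and confirmed clinical outcomes, and feed any alerts into the post-market surveillance processes the MDR and the AI Act already require.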
7. Practical Steps for Manufacturers
Given the uncertain and evolving legal context, mere regulatory compliance is likely insufficient to manage liability risk.
Manufacturers should also:
- Conduct comprehensive legal and technical audits of AI components.
- Implement rigorous version control and update tracking systems.
- Maintain detailed documentation covering training data, performance metrics, and limitations (a minimal record structure is sketched after this list).
- Embed fail-safe mechanisms and ensure meaningful human override capacity.
- Proactively plan for robust post-market surveillance, including monitoring for model drift in clinical settings.
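To make the version-control and documentation points concrete, the following sketch is again illustrative only: the field names are hypothetical and not a template prescribed by the AI Act or the MDR. It shows the kind of per-version record that lets a manufacturer reconstruct, for any adverse event, which model version was deployed, what it had been validated on, and what had changed since market placement.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelVersionRecord:
    """Illustrative audit record for one released version of an AI component."""
    version: str                      # e.g. "2.1.0"
    release_date: date
    training_data_hash: str           # fingerprint of the frozen training dataset
    performance_metrics: dict         # headline validation figures, e.g. sensitivity/specificity
    known_limitations: list           # populations or settings where performance is unvalidated
    human_override_available: bool    # whether a clinician can overrule the output
    post_release_updates: list = field(default_factory=list)

    def log_update(self, when: date, description: str) -> None:
        """Record a post-deployment change so the device's state at any date can be reconstructed."""
        self.post_release_updates.append({"date": when.isoformat(), "change": description})

record = ModelVersionRecord(
    version="2.1.0",
    release_date=date(2025, 3, 1),
    training_data_hash="sha256:<digest of the frozen training set>",
    performance_metrics={"sensitivity": 0.94, "specificity": 0.91},
    known_limitations=["not validated for patients under 18"],
    human_override_available=True,
)
record.log_update(date(2025, 9, 15), "recalibrated decision threshold after drift review")
```

Whatever the exact schema, the aim is the same: being able to show, for any incident, exactly which version of the AI was in use, how it had been validated, and what had changed since it was placed on the market.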
Conclusion: Beyond Compliance
The EU AI Act and revised PLD represent significant steps toward AI safety and accountability. Nonetheless, they remain incomplete, and an imperfect fit for the complexity of life-critical AI systems in healthcare.
Manufacturers of AI-enabled medical devices must look beyond surface-level compliance. The burden of liability is shifting subtly but decisively onto producers, demanding a proactive, nuanced approach that integrates legal foresight with technical rigor.