AI in Healthcare: Tackling Regulatory Complexity


In July 2025 the European Commission published its final report ‘Study on the Deployment of AI in Healthcare’, a 241-page analysis of how AI tools are adopted (or not) in clinical practice. As the report emphasizes, modern healthcare systems face urgent pressures such as aging populations, chronic diseases, staff shortages and cost constraints, and AI holds promise to improve efficiency and diagnosis. But despite rapid growth in AI research and products, actual clinical use remains slow. The study categorizes the challenges as technological (data quality, integration), legal/regulatory, organizational, social and cultural. In particular, it notes that healthcare is among the most heavily regulated sectors, and ‘legal and regulatory complexities’ are a key barrier to deployment.

This blog article briefly outlines the study’s scope and key findings, then dives into the regulatory complexity. We examine how multiple new and existing EU rules overlap and intersect, notably the EU AI Act, the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR), the GDPR, the European Health Data Space (EHDS), and the updated EU product liability rules.

Purpose and Key Findings of the Study

The EU study aimed to map the landscape of AI in European healthcare and learn from it. The report identifies four categories of barriers:

  1. Technical/Data issues: difficulties in accessing high-quality, interoperable health data; lack of standards.
  2. Legal/Regulatory issues: a patchwork of rules governing AI products, data protection, medical devices, etc.
  3. Organizational/Business issues: funding gaps, unclear reimbursement for AI tools, and gaps in staff training.
  4. Social/Cultural issues: clinician trust, patient acceptance, digital literacy and professional liability fears.

While the technology exists, the ‘Study on the Deployment of AI in Healthcare’ stresses that Europe’s regulatory framework is not always coordinated across different disciplines, which is putting it mildly. Innovators face a complex maze of EU requirements, some overlapping, some new, and some still evolving, which can delay or deter AI deployment altogether. However, the study also highlights many positive cases where hospitals overcame obstacles via legal support units or interdisciplinary committees. It concludes that the EU is well placed to support a safe, ethical, and scalable rollout of health AI, but that action is needed to harmonize data standards, improve regulatory clarity, and monitor progress.

Overlapping Regulatory Frameworks

A central finding is that AI implementers often must contend with multiple simultaneous regulatory regimes. Healthcare providers and AI companies in particular face a layer cake of obligations, including:

  1. The EU AI Act (Regulation (EU) 2024/1689), which classifies AI systems by risk. High-risk AI (e.g. systems used in critical decision-making) must follow strict requirements for data quality, documentation, risk management, transparency and human oversight. Importantly, AI that is part of a regulated product will typically be “high-risk”: under Article 6 of the AI Act, if an AI system is a safety component of a product (like software in a medical device) or is itself such a product, and that product must undergo third-party conformity assessment (as medical devices do), then the AI system is high-risk. High-risk classification triggers obligations such as a mandated risk management system, technical documentation, cybersecurity standards and impact assessments, as well as transparency obligations (Article 13 requires that users be able to understand the AI’s output) and human oversight (Article 14 requires the system to be designed for effective oversight).

  2. EU Medical Device and IVD Regulations (MDR 2017/745 and IVDR 2017/746) are the existing laws governing medical devices and diagnostics. Software that is intended for medical purposes can qualify as a ‘medical device’ under the MDR (or an ‘in vitro diagnostic’ under the IVDR). The MDR in particular introduced Rule 11, a classification rule specifically for software. Under Rule 11, standalone medical software, such as an AI diagnostic algorithm, is classified based on its intended use: software that provides information for diagnostic or therapeutic decisions, or that monitors physiological processes, automatically falls into a higher device class. Conformity with the MDR/IVDR means meeting the General Safety and Performance Requirements (Annex I of each Regulation), conducting clinical evaluations, and usually undergoing review by a Notified Body (a third-party assessor) before placing the product on the EU market. For higher-class devices this process is quite rigorous, much like a formal approval process.

  3. The GDPR (Regulation (EU) 2016/679) is the EU’s core data protection law. Virtually all health AI systems will process personal data, often highly sensitive health data. The GDPR requires a lawful basis for processing and imposes strict principles (data minimization, purpose limitation, accountability, transparency). It grants patients rights (access to their data, explanations of automated decisions, etc.). Deployers must ensure that AI projects are GDPR-compliant, e.g. by obtaining valid patient consent or ensuring that the special rules for health data (Article 9 GDPR) are met, and by providing transparent information to users. The GDPR also has rules on data security and breach notification that apply to any AI handling personal data.

  4. The European Health Data Space Regulation (Regulation (EU) 2025/327) is the brand-new framework for health data, adopted in 2025 and applying in stages from 2027. The EHDS creates mechanisms specifically for the secondary use of electronic health data for research, innovation, public health, etc. It does not replace the GDPR but works alongside it. Under the EHDS, each Member State will set up Health Data Access Bodies that issue ‘data permits’ when researchers or companies want to use health data beyond direct care. The regulation explicitly provides a legal basis (in line with Article 9 GDPR) for such secondary processing of health data, with safeguards. It also gives patients opt-out rights for certain uses. In practice, an AI company needing large patient datasets from EU hospitals will have to navigate EHDS permit procedures in addition to complying with GDPR consent and data subject rights rules. This adds another layer of complexity.

  5. In late 2024 the EU updated its Product Liability Directive (Directive (EU) 2024/2853) to cover software and AI. The new Product Liability Directive explicitly treats software, including AI systems, as ‘products’. This closes a gap, because it was previously unclear whether AI software counted as a product for no-fault liability purposes. The revised Directive also introduces presumptions to ease proof of causation in AI cases. For example, Article 10 PLD now allows courts to presume that a defective product caused the damage if the defect and the type of damage are consistent, especially where the claimant faces ‘excessive difficulties’ in proving it. In effect, injured parties can more easily bring claims against AI developers.
    However, this new regime is still being transposed by Member States, and a separate AI-specific liability proposal (the ‘AI Liability Directive’) was recently withdrawn. The upshot is uncertainty: AI firms must prepare for stricter liability in the EU, but exactly how responsibilities will be allocated (and what defences will apply) is still evolving.

Each of these frameworks overlaps with the others for many AI health applications. For instance, any AI-based diagnostic software is likely both a medical device (MDR) and a high-risk AI system (AI Act). It would need CE marking (through the MDR process) and would also have to prepare a high-risk AI technical file (AI Act Article 11), amounting effectively to double documentation. If it uses patient data, GDPR and EHDS rules both apply: the deployer must safeguard patient privacy under the GDPR and also follow EHDS procedures for data access. And if the AI leads to a misdiagnosis, the developer could face product liability claims under the updated Directive, while the clinician could face professional liability under, for example, national tort law.

More Clarity Is Desirable

Recognizing these hurdles, the study offers some concrete suggestions to bring more clarity and coordination to the EU AI-healthcare landscape, which is a first step in the right direction.

Key recommendations include:

  • Harmonized standards and guidelines. A recurring theme is that common EU standards for health data and AI would reduce friction. The study explicitly recommends establishing common technical standards for data governance, formats and interoperability. For example, standardizing data formats and protocols across Member States would make it easier to share patient data securely (addressing GDPR/EHDS issues) and to develop AI models that generalize. The report also calls for guidelines on bias mitigation and data quality. In parallel, the EU is already acting: the Commission has issued a formal standardisation request to CEN/CENELEC to develop standards on AI system oversight (e.g. for human-in-the-loop design). Such standards, once adopted, should help firms understand exactly what is expected.
  • Centralized guidance and support (‘one-stop shop’). Stakeholders stressed that clearer guidance would help a lot. The report notes that 67% of surveyed clinicians felt that having well-defined regulatory roles and pathways was ‘good practice’.
  • Capacity-building in healthcare institutions. At the hospital level, the study observes that having legal/compliance staff dedicated to digital health helps speed things up. The report suggests that more EU funding should support centres of excellence or professional networks in healthtech law. These could act as knowledge hubs to share best practices (for example, how other hospitals achieved MDR/AI Act certification) and to train clinicians on regulatory basics.
  • Regulatory sandboxes and adaptive frameworks. Although not unique to healthcare, the study highlights the value of experimental environments. The AI Act itself allows for regulatory ‘sandboxes’ where high-risk AI can be tested under supervised conditions. The report recommends that the Commission and Member States leverage these sandboxes specifically for healthcare AI, with clinical testing regimes. In parallel, it urges that guidelines and delegated acts (e.g. on human oversight) be finalized rapidly to provide clarity.
  • Harmonizing EU legislation. More generally, the report calls for better EU-level coordination between different legislative streams. For example, it highlights the need to align the AI Act with the MDR/IVDR. In practice, this might mean having ‘one assessment’ where possible. If a Notified Body is already evaluating an AI medical device under MDR, it could also tick off AI Act requirements in the same review.
  • Liability and insurance clarity. To reduce liability anxiety, the study implies that more clarity is needed on how the new Product Liability rules will apply. Until then, developers should be encouraged to adopt best practices to mitigate legal risk.

The Takeaway

The EU report states the obvious: the current regulatory landscape feels frustrating and risky for healthcare providers and AI companies. Promising projects stall under the weight of overlapping obligations, teams divert scarce resources to paperwork instead of clinical validation, and executives worry about patient safety, reputational harm and unpredictable liability, while clinicians fear using tools that aren’t clearly authorised.

Until the legal picture is clearer, we can help by providing pragmatic, outcome-focused support that reduces immediate risk and keeps innovation moving. Practically, that could mean:

  • an urgent regulatory triage to identify which regimes apply (AI Act, MDR/IVDR, GDPR, EHDS, product liability rules),
  • a prioritized legal compliance roadmap, and
  • rapidly deployable documents (technical documentation, DPIAs, consent language, post-market surveillance plans).

We coordinate with clinical and IT teams to design human-oversight and audit trails that satisfy multiple rules simultaneously, advise on certification processes and Notified Body interactions, navigate EHDS access procedures for secondary-use data, and help allocate and mitigate contractual and insurance risk across suppliers and purchasers. Our approach is to translate uncertainty into clear, staged actions so organisations can continue safe, evidence-based deployment while broader regulatory harmonisation progresses.

Contact us today to discuss your matter.

Disclaimer:
The content of this blog is provided for general informational purposes only and does not constitute legal advice. While we strive to ensure that the information is accurate and up to date, it may not reflect the most current legal developments or the specific circumstances of your organization. Readers should not act upon any information contained in this blog without first seeking professional legal counsel. No attorney-client relationship is created through your use of or reliance on the information provided herein.