When AI assists in care, who carries the legal risk?
Rethinking liability in Clinical AI
12/18/2025 · 4 min read
Artificial intelligence is no longer a side project in healthcare - it is fast becoming part of everyday clinical work, from image interpretation to treatment planning. That shift brings to the forefront a practical legal question: when AI helps make (or steer) a clinical decision, who is responsible if things go wrong?
Damned if they do, damned if they don’t?
In May 2024, the Master of the Rolls, Sir Geoffrey Vos, warned that professionals using AI could soon be “damned if they do, and damned if they don’t.” If AI becomes the responsible standard of care in certain specialties (think radiology or pathology), clinicians may face exposure not only for misusing AI but also for failing to use it when its benefits are obvious. That possibility sits alongside informed consent duties, where patients should be told about reasonable alternatives, including, increasingly, validated AI support.
How quickly we reach that point will track adoption and guidance. Expect courts to look toward what a “responsible body” of practitioners is doing, and toward practical direction from organisations like NICE and the Royal Colleges on where AI genuinely adds value (and where it shouldn’t be used). On that front, it is worth noting that the European Society for Medical Oncology (ESMO) has recently published landmark guidance on the use of LLMs and other AI tools in oncology practice - one of the first discipline-specific, regulatory-style guidance documents for clinical AI in cancer care.
Is the Bolam test fit for AI-assisted medicine?
Since 1957, the Bolam test has asked whether a doctor acted in line with a responsible body of peers. How this will be assessed in the era of clinical AI is by no means clear cut. What if a model’s recommendation conflicts with accepted practice? The instinctive answer (i.e., “human judgement trumps a computer”) isn’t the end of the analysis. If robust evidence shows that an AI system outperforms humans in specific tasks (for example, finding subtle patterns in imaging), is it negligent to ignore it? Equally, is it negligent to follow an opaque “black-box” output that a clinician cannot interrogate in real time?
Individual clinicians vs institutions: who bears the risk?
While some may argue that AI is just another tool or that consulting it is akin to taking advice from a senior colleague, two issues immediately come to mind:
Opacity: Clinicians may not be able to explain why AI recommended X over Y at the moment a decision is made.
Autonomy: Some systems are moving toward higher degrees of automation (think surgical robotics). Even with oversight, control can be distributed in ways that don’t map neatly onto traditional supervision.
That’s why many argue it is unfair to load all liability onto individual clinicians for outcomes largely shaped by a system they cannot fully audit. An alternative would be to place more responsibility on institutions: duties to validate models prior to deployment, ensure ongoing performance monitoring, train staff, maintain audit trails and pull unsafe systems fast.
A stronger version of this approach is strict liability for providers where an AI-related failure harms a patient, acknowledging that institutions are better placed than individuals to insure, manage enterprise risk and demand quality from vendors.
The EU’s direction of travel: treat AI as a product
While the UK’s regulatory path is still developing, the EU's AI Act establishes safety and transparency obligations, and a new Product Liability Directive extends “product” to include software and AI. The effect? More risk shifts toward developers and other economic operators. As a result:
Claimants don’t need to prove negligence - they must show the system was defective (failed to meet safety expectations or legal requirements).
To tackle the black-box problem, providers can be compelled to disclose technical information in an accessible form where there’s a plausible case of harm.
In complex cases where causation or defect is disproportionately hard to prove, the burden of proof can shift to the defendant.
Whether the UK mirrors this approach is an open question - but the direction is clear: accountability is moving closer to the point of AI design, validation and lifecycle management.
What about the UAE?
The UAE doesn’t yet have a single, AI-specific liability statute for healthcare. Instead, clinical AI sits at the intersection of (i) medical liability law, (ii) health data laws, and (iii) Emirate-level healthcare regulations and AI standards. Together, these set the guardrails for how AI could be deployed in practice.
Aside from the federal laws governing medical negligence, the Department of Health (DoH) in Abu Dhabi has published the Responsible AI Standard (2025). This system-wide standard requires the use of AI to be safe, effective and socially responsible, and it applies to development, procurement and deployment by all licensed entities in the Emirate. Expect emphasis on continuous evaluation, monitoring and human oversight. Similarly, the DoH’s 2025 TB screening standard is explicit that AI supports - not substitutes for - clinical judgment, and sets concrete safety/validation requirements.
The bottom line
AI is likely to change how we judge the standard of care, allocate risk and handle evidence in clinical negligence. Fundamentally, clear rules aren’t just about protecting patients - they are what give clinicians and innovators the confidence to use the technology.
Crucially, the laws and regulatory guidelines will need to evolve without slowing the science - if we want to keep the existing pace, we will need pragmatic frameworks, refined through real-world use, clear accountability and audit trails, and regulatory guidance that updates as models, data and workflows evolve.
Further reading / source: partially adapted from an editorial on AI and liability in healthcare (Kellar, 2024).
