AI, data privacy and UK health

Multiple regulators currently shape AI expectations, but whether the UK adopts a standalone law remains uncertain, although continued regulatory focus is assured, write Gowling WLG's health & care sector (UK) leader Robert Breedon and data protection and cyber security co-lead Loretta Pugh

The UK Government's Modern Industrial Strategy commits to AI Growth Zones designed to attract investment in AI infrastructure and capability. In parallel, the MHRA has created the UK National Commission on the regulation of AI in healthcare to design a fit‑for‑purpose framework, with the ambition of making the UK a truly AI‑enabled healthcare system.

Accelerating use of AI

Wearables are now mainstream, giving people continuous readouts of steps, heart rate, calories and other indicators, and nudging behaviour through tracking and gamification.

Real‑world benefits are already visible. Data collection and analysis can transform management of long‑term conditions. Yet digital maturity is uneven. 

Principal risks

Large, sensitive datasets demand careful governance. The core questions are straightforward – how data is collected, who controls it, whether consent is required, how processing is conducted lawfully and responsibly, and which safeguards protect privacy. These highlight three common areas of risk:

  • Data protection and security breaches – health information is an attractive target. Data breaches undermine trust, stall innovation and trigger legal and reputational consequences
  • Algorithmic bias – training AI systems on incomplete or skewed datasets can worsen inequalities, leading to missed diagnoses or inaccurate risk scores for under‑represented groups
  • Automated decision‑making – speed and scale can come at the expense of transparency and human judgement, dampening clinical and public acceptance.

UK data privacy law

Health data is special category personal data under the UK GDPR: processing requires a lawful basis, an additional condition and heightened security. The new Data (Use and Access) Act 2025 introduces reforms, including mandatory information standards for health and social care IT, a statutory footing for smart data schemes, and a trust framework for digital verification services. It also incorporates provisions on data use in health and social care, automated decision‑making and a broader definition of scientific research.

Multiple regulators shape AI expectations. The Information Commissioner enforces data protection, the CQC oversees healthcare quality and safety, and the MHRA regulates medical devices. Whether the UK adopts a standalone AI law remains uncertain, but continued regulatory focus through guidance and enforcement is assured.

Build privacy and security in 

Ensuring compliance and effectively mitigating these risks requires data protection and privacy to be considered at the outset of a project, with privacy and rights embedded before processing begins and maintained throughout the lifecycle.

In practice, that means conducting Data Protection Impact Assessments for high‑risk processing, minimising data and applying anonymisation or pseudonymisation where feasible, implementing appropriate technical and organisational security measures, providing intelligible privacy information, and putting data‑sharing agreements in place across partners. Supplier due diligence also matters: interrogate performance claims, security controls and ‘privacy posture’.
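
To make pseudonymisation concrete, here is a minimal Python sketch that replaces a direct patient identifier with a keyed hash before analysis. The identifier, field names and key handling are hypothetical assumptions, not a prescribed method; and because whoever holds the key can re‑identify records, the output remains pseudonymised personal data under UK GDPR rather than anonymous data.

  import hmac
  import hashlib
  import secrets

  def pseudonymise(patient_id: str, key: bytes) -> str:
      # A keyed hash (HMAC-SHA256) yields a stable pseudonym; without the
      # secret key, the original identifier cannot be recovered from it.
      return hmac.new(key, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

  # In production the key would come from a key-management service and be
  # held separately from the dataset; generated here only so the example runs.
  key = secrets.token_bytes(32)

  record = {"patient_id": "DUMMY-0001", "heart_rate": 72}  # hypothetical record
  record["patient_id"] = pseudonymise(record["patient_id"], key)
  print(record)  # identifier replaced by a consistent pseudonym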

Algorithmic bias

Bias is best reduced early. Prioritise high‑quality, representative training data for the intended populations; document provenance; and ensure individuals are informed where patient data trains models. 

Conduct periodic bias testing and audits, record methods and remediation, and set retention limits aligned with purpose and law. Because bias audits may themselves process sensitive data, ensure you have a lawful basis and meet a special category condition with strong safeguards.
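
As an illustration of what periodic bias testing might involve, the short Python sketch below compares false negative rates across demographic groups on a held‑out evaluation set. The group labels and data are invented for the example, and, as noted above, a real audit of this kind would itself need a lawful basis and a special category condition.

  from collections import defaultdict

  def false_negative_rates(records):
      # records: (group, actual, predicted) tuples from an evaluation set
      positives = defaultdict(int)  # actual positive cases per group
      misses = defaultdict(int)     # positives the model failed to flag
      for group, actual, predicted in records:
          if actual == 1:
              positives[group] += 1
              if predicted == 0:
                  misses[group] += 1
      return {g: misses[g] / positives[g] for g in positives}

  records = [
      ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
      ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1),
  ]
  print(false_negative_rates(records))  # a large gap between groups needs remediation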

Automated decisions

Article 22 of UK GDPR generally gives individuals the right not to be subject to decisions based solely on automated processing that have legal or similarly significant effects, subject to narrow exceptions. In healthcare, automated risk stratification, predictive diagnostics and resource allocation can improve quality and efficiency but must be explainable and governable. To meet transparency and accountability requirements, patients must be given meaningful information about the logic involved and its significance and consequences for them.

Where human review is involved, it must be genuine and informed, not a perfunctory rubber stamp of what the model outputs. 
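
One way to build that in, sketched below in hypothetical Python, is to route any output with legal or similarly significant effects to a clinician alongside the model's rationale, and to record who actually made the decision. The field names and routing rule are assumptions for illustration only.

  from dataclasses import dataclass

  @dataclass
  class ModelOutput:
      patient_ref: str   # pseudonymised reference, not a raw identifier
      risk_score: float
      rationale: str     # the 'meaningful information about the logic involved'

  def route(output: ModelOutput, significant_effect: bool) -> dict:
      if significant_effect:
          # The clinician sees both score and rationale and can override
          # them; a review that cannot change the outcome is a rubber stamp.
          return {"decision_by": "clinician", "queue": "manual_review",
                  "output": output}
      return {"decision_by": "system", "queue": "automated", "output": output}

  print(route(ModelOutput("ref-001", 0.82, "sustained elevated risk trend"), True))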

Earning and keeping trust

Trust grows with clarity about how models are developed, which data they rely on, and how outputs inform care. It is sustained when practice matches policy: standards are met consistently, security is demonstrable, explanations are comprehensible, and rights are easy to exercise.

Provided those risks are mitigated in ways that do not inhibit greater collaboration between the health and tech sectors, there is significant potential for economic growth and improvements in patient care.
