In their responses to the MHRA consultation on the regulation of AI in healthcare, the RCR, IPEM and SCoR recommended:
- End-to-end assurance across the AI lifecycle: Regulation must require proportionate pre-market evidence, transparent communication of limitations and mandatory post-market surveillance to detect performance drift and bias, with clinicians retaining oversight throughout.
- Workforce capacity as a patient safety requirement: Safe AI deployment depends on a trained, resourced workforce. National workforce planning, funded training pathways, recognised roles and protected time must be integral to regulation.
- Clear system-wide accountability: Regulation should be clear on where responsibility lies between manufacturers, healthcare organisations and professionals, including expectations for transparency, training, post-market monitoring and liability.
Dr Stephen Harden, president of the RCR, said: ‘Clinical radiologists and clinical oncologists see both the promise and risks of AI every day. Regulation must support professional judgement, be underpinned by robust evidence and provide clear accountability.’
Mark Knight, president of IPEM, said: ‘AI must be regulated as a safety-critical technology. That requires clear standards across the AI lifecycle and a workforce with the capability and authority to assure these systems in clinical practice.’
Katie Thompson, president of SCoR, said: ‘Radiographers are central to the safe use of AI in imaging and radiotherapy. Regulation must recognise frontline practice and invest in workforce capacity to ensure patient safety.’
RCR, IPEM and SCoR are calling for AI regulation that aligns innovation with patient safety, workforce realities and clinical accountability.
