
UK Regulations Need an Update to Make Way for Medical AI

by Nick Wallace

The Royal Free Hospital in London recently came under fire from the UK’s data protection regulator, the Information Commissioner’s Office (ICO), over its use of patient data for clinical testing of an artificial intelligence (AI) application intended to help diagnose kidney injuries by automatically analyzing test results as they are supplied to clinicians. ICO reprimanded the hospital for not directly notifying patients that their data was being used in the trial. But medical AI promises to help doctors spot problems like lung disease and diabetic attacks much earlier, and trials like the one at the Royal Free are an important step towards making this level of care available to NHS patients. To ensure future medical AI trials can go forward, the government should add this kind of testing to the existing set of legally defined scenarios where notification rules do not apply, lest those rules become an obstacle to providing care.

According to the UK’s Caldicott guidelines, which provide a framework for common law rules on patient confidentiality, doctors do not have to notify patients when their data is shared to provide “direct care.” ICO took the view that testing an AI application prior to its use in patient care does not qualify as “direct care,” whereas the hospital had been operating under the belief that it did. A wider range of uses is permissible with ministerial approval under the Health Service (Control of Patient Information) Regulations 2002. These regulations are also known as “section 251 support,” in reference to section 251 of the National Health Service Act 2006, which gives the Secretary of State for Health limited powers to decide some aspects of medical confidentiality rules, including permitting data sharing where it is “in the interests of improving patient care.” The Secretary acts on the advice of the Confidentiality Advisory Group (CAG), which provides legal counsel on uses of healthcare data.

Whether it was ICO or the Royal Free that correctly interpreted the definition of “direct care” is beside the point: AI is an important area of medical innovation, and it must be tested with real data before it can be used to treat patients. Rather than leave the matter to tedious arguments over common law, or let future tests of medical AI be held up by the need to contact every patient, the Secretary of State for Health should use section 251 to establish a legal basis for data sharing for clinical tests of tools intended for direct patient care.

ICO also chastised the hospital for using all 1.6 million patient records rather than a smaller sample, claiming that this would have reduced the privacy impact. But whether a trial involves a hundred patients or a hundred million, their data is either safe or it is not. No breach has occurred, and no patient data has been put at risk. Patients’ data was used only for testing: given that these were records the hospital already held anyway, the privacy risk would have been no different if the Royal Free had hired a data management company to organize its records and check them for errors. The hospital did not let Google, whose subsidiary DeepMind developed the AI software, use the data to develop the app; Google relied instead on synthetic data.

NHS England has some 1,500 data sharing agreements with third parties, and the Royal Free Trust says it has “a number of data sharing agreements with organizations which provide a range of different services for the Trust.” The amount of data processing in health care will only continue to grow, and there is little value to patients in receiving notifications, or worse, requests to opt in to sharing their data, for every single third-party vendor a hospital uses to assist in providing care. This is why there are already rules in place to maintain necessary routine data sharing; they simply need to be updated to account for technological change brought about by AI. Effective use of AI depends on access to data, and unless policymakers want the technology to remain a curious exception among healthcare tools, they should update the regulations to allow hospitals to test it.

Unfortunately, that does not seem to be the priority for ICO. The “lessons” proffered by the Information Commissioner in a blog post on the matter send completely the wrong message, telling the health sector that “new cloud technologies mean that you can, not that you should.” Given the possible benefits of medical AI, and the extremely low risk of harm to patients compared to most other medical innovations, this advice is as unhelpful as it is clichéd. If hospitals can improve the way doctors treat patients who have suffered damage to their vital organs, then they definitely should. Indeed, one might argue that regulators should compel hospitals to notify patients if they do not use the most advanced tools at their disposal to provide the best possible care.

Clinical trials like these are necessary for any new medical tool. With medical AI, the potential benefits are promising, and the risks are low. The government should bring the regulations up to date to encourage the advancement of this technology in healthcare, instead of letting outdated rules stand in its way.

Image credit: University of Liverpool Faculty of Health and Life Sciences

