The Center for Data Innovation spoke to Brhmie Balaram, the Head of AI Research and Ethics at the NHSX AI Lab, part of NHSX, the digital transformation unit of the UK’s National Health Service (NHS). The NHS is actively involved in incubating, funding, and rolling out a range of digitized healthcare initiatives, in part owing to the centralized structure of its data libraries, the country’s strong research culture, and its booming technology sector. Balaram discussed the ways in which artificial intelligence (AI) can improve healthcare delivery, the role of AI ethics in the medical sector, and how to move AI research from the lab into the real world.
Ben Mueller: What is the NHSX AI Lab and how did the organization get started?
Brhmie Balaram: The NHSX AI Lab was established in 2019 to explore the potential of AI to augment health and adult social care services and to improve the public’s health and wellbeing. It aims to accelerate the adoption of safe, ethical, and effective AI-driven technologies in health and care. As part of our remit, we’ve set up a suite of programs in partnership with health and regulatory bodies in the UK to address the challenges of putting AI-driven technologies into practice. Alongside this, the NHS AI Lab is working with healthcare practitioners, the public, and policymakers to build appropriate confidence in these technologies and ensure that they are trustworthy.
Mueller: What role can AI play in the future of healthcare in the UK?
Balaram: AI can improve patient experience, support the health and care workforce, and help NHS systems run more efficiently. For example, we can use natural language processing to help read unstructured doctors’ notes, or deploy computer vision to support the diagnosis of diseases and conditions from images, such as X-rays and CT scans. AI can also be used for forecasting purposes to help us make the best use of capacity and resources. We want to ease the pressure that the NHS, the social care sector, and their staff are under by deploying AI where possible, without compromising on quality of care. Ultimately, our goal is to realize the potential of AI to transform people’s outcomes for the better.
Mueller: How have you operationalised AI ethics in the context of healthcare delivery?
Balaram: The AI Ethics Initiative was introduced in February 2021 to invest in and support research and practical interventions that complement and strengthen existing efforts to validate, evaluate, and regulate AI-driven technologies in health and care. We are mindful of the huge body of work that already exists on defining ethical principles and that can guide implementation; our focus now is on putting principles into practice. We’ve therefore invested in creating models, such as algorithmic impact assessments (AIAs), that can inspire appropriate confidence in these technologies. AIAs, for example, would enable us to prompt researchers and commercial developers to consider the legal, social, and ethical implications of their proposed AI solutions at an early stage, when they’re first requesting access to centralised imaging data. There will be a public engagement element to these AIAs, which would enable patients and the public to contribute their perspectives on how these solutions may impact the people they are deployed for. Alongside this, we’re supporting the development of guidelines and standards for organizations, to help them audit AI solutions, and for innovators, to provide a steer on the inclusivity and generalizability of datasets used for training and testing.
Our investments have also been centred on addressing health inequalities, and specifically on countering racial and ethnic health inequalities, given evidence from the United States about how the roll-out of some of these technologies has disadvantaged Black and Hispanic patients, for example. We’ve partnered with the Health Foundation to hold a research competition, enabled by the National Institute for Health Research (NIHR), to consider how we could mitigate the potential harms of AI for minority ethnic communities and, furthermore, prompted researchers to explore how this technology could potentially be leveraged to start closing gaps in health outcomes.
Mueller: What are some of the difficulties you encounter in moving from AI research to scaled-up real-world use?
Balaram: The main difficulty that we’re trying to proactively address is one of under- or over-confidence in AI technologies. We want staff to feel confident enough to use these technologies, but we don’t want them to be so overconfident that they let a product override their own judgment or place more trust in it than is warranted. This balance that we’re trying to strike is what we refer to as “appropriate confidence.” Part of the way that we achieve this is by ensuring that the products are trustworthy and that individual healthcare practitioners have sufficient knowledge of a system, its performance, and its limitations. Organizations at the national level, such as regulatory bodies, and at the local level, such as hospital trusts, are responsible for demonstrating that products are trustworthy, whether because they have regulatory approval or because commissioners followed good practice in procuring them. Individuals working in health and care can be supported by organizations like Health Education England to acquire the knowledge they need and to clarify what is expected of them, such as the role they can play in post-market surveillance.
Mueller: What are examples of tangible AI applications NHSX is hoping to roll out in the near future?
Balaram: The NHSX AI Lab is delivering the AI in Health and Care Award, in partnership with the Accelerated Access Collaborative (AAC) and NIHR. Through the award, we’ve been able to support the research activities of innovators at various stages of development, so that they can gather the evidence they need for regulatory approval or to be scaled across the NHS. One of the technologies at an early stage is SamurAI, which is exploring the technical feasibility of using AI to provide advice on when to start, stop, or change the use of antibiotics so that they are only used when absolutely necessary. Technologies at a more advanced stage include Healthy.io, which enables patients to use their smartphones to detect the signs of early kidney disease; Brainomix, which uses AI to interpret the brain scans of acute stroke patients; and Aidence, which helps radiologists detect early lung cancer.