The UK’s Agile, Sector-Specific Approach to AI Regulation Is Promising

by Ayesha Bhatti

The UK government released its response to the UK AI Regulation White Paper consultation on 6 February 2024, outlining a new pathway for “agile” artificial intelligence (AI) regulation. It confirmed plans to place greater responsibility on existing sector-specific regulators to oversee the development of AI but also raised the possibility of future binding requirements on the most highly capable general-purpose AI systems. Overall, the government maintains a pro-innovation stance towards AI; however, time will tell whether individual regulators will adhere to this vision or have the resources to carry it out.

Take a look at some of the key takeaways.

Increased Funding for Responsible AI

The government announced increased funding for AI. This includes:

  • £10 million towards preparing and upskilling regulators to appropriately address AI risks, and to harness AI opportunities;
  • £90 million towards the launch of nine new research hubs across the UK;
  • £2 million towards the Arts and Humanities Research Council (AHRC) to support new research projects defining what responsible AI looks like across sectors like policing, education, and the creative industries;
  • £19 million towards 21 projects developing innovative, trusted, and responsible AI and machine learning solutions that accelerate deployment of these technologies and drive productivity; and
  • £9 million investment through the government’s International Science Partnerships Fund to bring together researchers and innovators in the UK and the United States on the development of safe, responsible, and trustworthy AI.

That is a lot of new funding for AI, with a sizable share going to regulators and to practical tools for monitoring and addressing sector-specific AI risks. Regulators will need expertise in AI, and funding to upskill them will better equip them to establish and enforce sector-specific rules. It is also promising to see funding for solutions that encourage the deployment of AI, which will go far towards the Secretary of State for Science, Innovation and Technology’s aspiration for AI to transform public services and the economy. Increasing the uptake of AI in the public sector is possibly the best way to showcase the positive effects of AI and how it can improve the UK’s overall welfare. The £9 million investment in US-UK collaboration should also advance the UK’s goal of leading in AI safety, as well as work towards a shared, international understanding of what safe, responsible, and trustworthy AI means.

30 April Deadline for Key Regulator Responses

The government announced a boost to transparency and confidence for both businesses and citizens by requiring key regulators, such as Ofcom and the Competition and Markets Authority (CMA), to set out their approach to managing AI. These responses are due by 30 April.

Given the greater emphasis now placed on industry regulators, these responses will likely serve as the groundwork for other sectors. However, certain regulators may take an overly risk-averse approach rooted in past risks rather than the current landscape, as explained in a previous post analyzing the latest speech by the CMA Chair. In addition, regulators should collaborate with industry to understand how AI fits into their sector and the specific opportunities and risks it poses in that context.

New Steering Committee

The announcement of a new steering committee, to be established in the spring to support and guide a formal regulator coordination structure within government, should help address these issues by encouraging sector-specific approaches to AI opportunities and risks.

Future Targeted Binding Requirements

The government also laid out the case for future targeted binding requirements on the most highly capable general-purpose AI systems. Now that it is evident legislation will eventually be put forward to cover these systems, some uncertainty is unavoidable. It is important that AI is not unnecessarily curtailed by embedding it in immovable legislation. The definition of AI, as well as what is meant by general-purpose AI, will be crucial to ensuring a tightly tailored legislative framework that enables AI innovation.

Promisingly, the government clarified that it would not rush to legislate or implement quick-fix rules that are not forward-looking, instead favoring a context-based approach empowering existing regulators. This is a particularly timely reassurance given the recent unanimous vote to approve the EU AI Act, notorious for its prescriptive, rigid understanding of the AI landscape. By contrast, the UK is demonstrating its clear understanding of the needs of the industry and the necessity to remain agile to meet the pace of innovation.

Initial Reception

As part of the government response, business leaders also shared their views on the announcements, with support from Microsoft UK and Google DeepMind. The latter, in particular, welcomed the direction of UK AI regulation and highlighted DeepMind’s collaboration with the government to establish the UK as “a global leader in AI research and set the standard for good regulation.”

A Strong Move in the Race for AI Safety Leadership

The government response is a positive one. It rightly prioritizes innovation by increasing funding and avoiding premature hard legislation. However, the prospect of future legislation to cover general-purpose AI systems may introduce uncertainty as businesses try to preempt what could fall within its scope. The critical issue will be how policymakers strike the balance between opportunity and risk, such that society may reap the incredible benefits of AI as soon as possible whilst also maintaining adequate safety. The positive response from key businesses is encouraging, as it suggests even greater collaboration between the government and the private sector, bridging the gap in expertise that regulators may have as they attempt to deal with industry-specific AI risks.

Similarly, leaving regulators with a deep knowledge of their industries to lead AI regulation will likely yield more practical rules than using a one-size-fits-all approach. These regulators will be better equipped to understand sector-specific issues and respond rapidly to any emerging risks without overstepping their scope.

Overall, the UK’s response shows its commitment to developing a pro-innovation regulatory framework that harnesses opportunity whilst also addressing risk.

Image by iStock.