
EU Finally Moving Forward with Machine Learning Act

by Patrick Grady

European policymakers have come to realize that the novel risks attributed to artificial intelligence stem, in fact, from applications of machine learning.

The proposal for an Artificial Intelligence Act, published in 2021, promised to be “risk-based” by imposing regulatory burdens only on AI systems that threaten fundamental rights and safety. In the time since, policymakers and civil society have realized just how ubiquitous AI is. It is the software in washing machines, the Spotify recommendation system, and the engine behind almost all modern robotics. In mostly benign ways, AI powers EU industries and supports the daily lives of EU citizens; penalizing its use will hinder innovation and hurt consumers. Fortunately, the EU is reconsidering its original definition and narrowing the Act’s scope from a broad definition of AI to a narrower one focused on machine learning.

The Commission’s original proposal defined artificial intelligence (AI) as “software that is developed with one or more of the techniques and approaches listed in Annex I.” The techniques in Annex I include “machine learning approaches” but also “logic- and knowledge-based approaches” and “statistical approaches, Bayesian estimation, search and optimization methods.” This expansive definition covers most software, and any system used in one of the eight “high risk” domains would face severe regulatory burdens (up to €400,000 in compliance costs alone), which consumers would ultimately bear.
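To see how expansive that definition is, consider a minimal sketch (hypothetical; the function and thresholds below are invented for illustration). A screening tool built from hand-written if/then rules, with no learning component at all, would plausibly fall under Annex I’s “logic- and knowledge-based approaches” and, if used for recruitment, land in a high-risk domain:

# Hypothetical rule-based screening tool: fixed, hand-written rules,
# no learning. Under the broad Annex I definition, plain software like
# this could count as an "AI system."
def screen_applicant(years_experience: int, has_certification: bool) -> bool:
    # Expert knowledge encoded as static rules; behavior never changes.
    return years_experience >= 3 and has_certification

print(screen_applicant(5, True))   # True
print(screen_applicant(1, False))  # False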

When criticizing the use of AI, policymakers often, even unknowingly, are referring to machine learning (ML) systems. An ML system is an AI system that improves by observing data, building a model from that data, and using the model both as a hypothesis about the world and as a program to solve problems. AI concerns about transparency, autonomy, and responsibility are pertinent only to ML. ML systems can misfire badly and exacerbate biases when used for recruitment, translation, image recognition, and policing, although studies show it is easier to debias such systems than their human operators, and such misfires will become rarer as organizations gain more experience with ML.
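That definition can be made concrete with a minimal sketch (illustrative only; the data and names below are invented). The program observes examples, fits a model, and then uses that model as its hypothesis to answer new questions; unlike the rule-based example earlier, its behavior depends on the data it has seen:

# Minimal learn-from-data loop: observe data, build a model, use the model.
# 1. Observe data: (hours studied, exam score) pairs.
data = [(1.0, 52.0), (2.0, 57.0), (3.0, 61.0), (4.0, 68.0), (5.0, 71.0)]

# 2. Build a model: fit a line y = a*x + b by ordinary least squares.
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in data) / sum(
    (x - mean_x) ** 2 for x, _ in data
)
b = mean_y - a * mean_x

# 3. Use the model as a program: predict the score for an unseen input.
def predict(hours: float) -> float:
    return a * hours + b

print(f"Predicted score after 6 hours: {predict(6.0):.1f}")  # 76.5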

So it was a mistake for EU policymakers to penalize the use of AI in the AI Act when ML is their principal concern. The cost to the European ecosystem would be substantial: deterred investment, costlier AI, and forgone applications. With its unicorns and most promising AI startups already turning elsewhere, the EU cannot afford to be left behind. Sensibly, the leading MEPs are now pushing to redefine AI in the AI Act as a system that uses “learning, reasoning or modelling,” effectively limiting the scope to machine learning. Limiting the Act’s scope to ML still requires balancing safety with innovation, but it is a step in the right direction.

Policymakers are coming to the conclusion that it is machine learning, rather than artificial intelligence more broadly, that poses novel risks to consumers. They should continue to ensure the final regulation reflects this. Indeed, it should have been called the Machine Learning Act.

