BRUSSELS — In its legislative proposal to regulate artificial intelligence (AI), the European Commission uses a broad definition of AI that would deter innovation by penalizing technologies that do not pose novel risks, according to a new report from the Center for Data Innovation. The Center’s report urges the EU to take a technology-neutral approach and level the playing field between processes that use AI and those that do not.
“Policymakers have long valued the principle of technology neutrality, but the AI Act violates that principle,” said Patrick Grady, a policy analyst at the Center for Data Innovation who authored the report. “We absolutely need the legislation to be tech neutral, meaning it should avoid favoring one technology over another while addressing novel risks. But the current scope includes AI systems we have been using for the past 30 years, not just those that come with new risks.”
The European Parliament has only a few months to adjust the AI Act before negotiating the proposal with the Council of the European Union. The Center calls on EU policymakers to narrow the AI Act’s definition of AI so that it covers only AI technologies that pose risks non-AI systems do not, namely uninterpretable machine learning. To that end, the Center proposes that the AI Act use the following definition instead:
“‘Artificial intelligence system’ (AI system) means a system that, based on parameters unknown to the provider or user, infers how to achieve a given set of objectives using machine learning and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the real or virtual environments with which the AI system interacts.”
The report breaks down how the act delineates and regulates “high-risk” uses of AI. Systems in the high-risk category must undergo conformity assessments, meet transparency requirements, and fulfill post-market monitoring obligations. The report notes that a conformity assessment alone may cost up to €400,000, a cost that will ultimately be borne by consumers and by EU businesses competing in global markets.
The report also examines how the AI Act’s definition of AI sweeps in a broad set of systems that need no regulatory intervention. According to the Center, the legislation’s current scope captures a wide range of basic software, including linear regression models and statistical programs used in spreadsheets, that does not pose new risks.
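To see why such software falls on the interpretable side of the line, consider a minimal sketch in Python (illustrative only; the data and variable names are hypothetical and not drawn from the report) of an ordinary least-squares regression. Every fitted parameter is directly inspectable, so nothing about the model is “unknown to the provider or user,” and under the Center’s proposed definition it would fall outside the act’s scope:

    # A minimal illustrative sketch (not from the report): ordinary
    # least-squares linear regression, whose fitted parameters are
    # fully visible to the provider and user.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: predict, say, energy use from temperature and occupancy.
    X = rng.normal(size=(100, 2))                # two input features
    y = X @ np.array([2.0, -1.0]) + 0.5 + rng.normal(scale=0.1, size=100)

    # Fit via least squares; every parameter is directly inspectable.
    X1 = np.column_stack([X, np.ones(len(X))])   # append an intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

    print("slopes:", coef[:2])     # human-readable parameters
    print("intercept:", coef[2])   # nothing "unknown to the provider or user"

By contrast, the millions of learned weights inside a deep neural network are effectively unknown to both provider and user; that uninterpretable machine learning is what the report argues the act should target.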
“Poor legislation is worse than no legislation. If the EU cannot fix the legislation’s scope problem, it should revisit the legislation entirely,” said Grady. “Failing to do so would gravely deter AI innovation in Europe.”