
The EU’s Approach to AI Standards Is Protectionist and Will Undermine Its AI Ambitions

by Nigel Cory

The EU’s proposed Artificial Intelligence Act (AIA) will oblige companies to comply with yet-to-be-developed technical standards in order to demonstrate they meet the essential requirements of the bill. However, the EU’s plan for establishing these standards is too rushed, excludes global experts, and has unrealistic goals. Unless the EU remedies these issues, the AI Act’s forthcoming standards will undermine AI uptake in the bloc and its goal of ensuring safe AI.

The EU’s approach to AI standards is misguided for three reasons.

First, the European Commission set an unworkable deadline of January 31, 2025, for the two European standards bodies, CEN and CENELEC, to deliver their final report to the Commission. The 10 areas identified in the Commission's draft standardization request cover risk management, "appropriate" governance and quality datasets, transparency, human oversight, accuracy, robustness, and cybersecurity. This process is complicated (see the diagram below). It is incredibly unlikely that Europe's standards bodies will be able to convene experts, identify which standards support these 10 areas, prioritize their development, draft the standards, have experts assess whether the standards comply with the Commission's request, and then adopt them as CEN-CENELEC standards, all within two years.

The Development Process for Harmonized Standards in the European Union (Image credit: Artificialintelligenceact.eu)

The major reason standards development takes time is that the issues and technologies standards address are hard and complicated. Reaching agreement on solutions across a global body of experts in their respective fields requires a lot of thought, discussion, incubation, testing, and review. How exactly do you measure AI governance, accuracy, and robustness? These are tricky technical questions. The haste needed to meet the two-year deadline, which means prioritizing certain areas and skipping rounds of feedback, would compromise the quality of the standards.

Second, the European Commission is misguidedly relying on two European standards bodies, CEN and CENELEC, instead of leveraging the global body of experts that has been working at international standards bodies since 2019, namely the joint work of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The EU is simply not home to all the AI expertise needed to develop these standards. Moreover, the EU should want to develop and use global standards precisely so that it can influence truly global standards. Worse, the Commission chose to sideline the European Telecommunications Standards Institute (ETSI), which has already been working on AI standards, over accusations of non-European influence. Excluding these global experts from its standards development will inevitably lead to lower-quality standards.

Unfortunately, this decision is just the latest example of the European Commission living up to its Standardization Strategy in favoring local, rather than global, standards bodies in an effort to protect European "values." The value most clearly on display is protectionism, as the Commission undermines the World Trade Organization's principles (and trade law commitments) for international standards, namely transparency, openness, impartiality and consensus, effectiveness and relevance, coherence, and the development dimension.

Instead, the Commission is using technology standards as a protectionist tool in its quest for cybersovereignty. The Commission does not trust ETSI, ISO, and the IEC because they involve experts from American and other foreign firms, so it wants to marginalize them. There are many problems with this view, but the main one is that the open and inclusive international standards system is valuable precisely because of its ability to draw on experts from around the world. The EU should avoid the "Galapagos syndrome" of country- or region-specific standards that devastated the Japanese Internet industry in the 1990s and 2000s.

Third, the AI Act sets unrealistic goals for AI standards. While lawmakers have dropped some of the most unfeasible ideas in the AI Act, such as requiring organizations to use "error-free" datasets and provide interpretability of their AI systems, other goals, such as ensuring AI systems respect health and fundamental rights, remain out of reach. These goals sound good in theory, but nobody knows how to achieve them technically, or whether doing so is even possible. Starting from scratch, the EU is in danger of creating weak or unreliable metrics that fail to protect consumers or are unworkable for complicated machine learning systems, thereby discouraging innovation and effectively outlawing the most innovative AI.

If the EU wants to be a leader in AI, it needs to create standards that foster innovation, not hinder it. Already, the EU is lagging behind its neighbors in AI development: the United Kingdom, for example, has more AI start-ups than the EU's two biggest contributors (Germany and the Netherlands) combined. AI standards development is complex and deserves careful consideration from global experts to ensure technical requirements are workable and do not undermine the current state of AI. The EU's approach fails to acknowledge this. The Commission should extend the standardization process, consult with the international community, and revise the technical requirements left in the bill that are not workable in practice.
