How the EU Should Revise its AI White Paper Before it is Published

by Daniel Castro
European Commission building (Berlaymont)

The European Commission is planning to release a white paper to support the development and uptake of artificial intelligence (AI). Early drafts of this white paper suggest that the Commission may call for additional AI regulations that would make it more expensive and more difficult for European businesses to use AI systems in many areas of the economy. Given the EU’s desire to be a leader in AI, and to use AI to bolster its global competitiveness, the Commission should avoid heavy-handed rules that would slow adoption of this emerging technology.

In some areas, the Commission appears to be on the right track. The EU already has a wide body of legislation that provides robust consumer protections and oversight of digital systems. AI systems are already covered by these extensive laws, including the GDPR, and these laws are likely sufficient to handle any new consumer concerns that might arise from AI. In addition, the Commission appears to recognize the risk of creating excessively prescriptive rules for AI that could place significant administrative burdens on the private sector, or of allowing member states to create rules that fracture the single market.

Unfortunately, a number of proposals still under consideration would undermine the development and deployment of AI systems, including failing to adequately distinguish between high-risk and low-risk AI applications; requiring a broad set of AI applications to pass evaluations before they can go to market; and requiring a broad set of AI systems to be trained on datasets adhering to European standards.

First, the Commission should not create a broad definition of high-risk AI applications. Singling out entire sectors as high-risk and covering them with sweeping rules would limit the deployment of AI in those sectors. Even sectors that include some high-risk AI applications also have many low-risk ones. For example, the public sector uses a variety of AI applications, many of which would be low-risk, such as deploying automated chatbots to answer frequently asked questions or using AI-based analytical tools to analyze geospatial datasets.

Second, the Commission should not require a broad set of AI products and services to undergo ex-ante conformity assessments before being allowed on the European market, because doing so would make it more expensive and time-consuming for companies to introduce new AI applications. These reviews might also require companies to disclose proprietary data or other intellectual property. The combination of higher costs, delays, and risk of losing intellectual property might even deter some companies from launching AI products and services at all in Europe, choosing instead to focus on more friendly markets.

Third, the Commission should not require that a broad set of AI systems be trained on datasets that conform to specific EU rules on traceability and data quality. Requiring that companies use only certain EU-approved datasets for training AI systems would significantly limit the data available to companies operating in the EU, making these businesses much less competitive with their global peers. Moreover, if companies had to retrain their AI systems to operate in the EU, this would introduce additional costs that would be passed on to European consumers and make EU companies less competitive globally. This requirement would also likely exclude many foreign companies from the European market, reducing competition and options for consumers and businesses. Finally, limiting companies to European datasets would put consumers at risk, because European data is neither representative nor diverse enough to develop systems deployed globally, and such a limit would be at odds with Europe's intention to lead in trustworthy AI.

The European Commission should recognize that in most cases it is not AI systems that should be regulated, but rather specific activities. For example, companies should be obligated to follow fair hiring practices regardless of whether they use AI applications as part of their recruitment process. As such, the best approach for AI would be for the European Commission to advise member states to avoid creating national rules that would disrupt the digital single market, encourage the continued development and testing of voluntary industry best practices, and only consider new regulations in high-risk scenarios where there is clear evidence of consumer harm. To that end, the Commission should review carefully the feedback from companies participating in the High-Level Expert Group’s piloting phase to ensure any rules it does create take into account the specificity and diversity of AI systems.

While the European Commission will open the white paper to public consultation, it is still important that the initial public version, expected in mid-February, set the right tone. Therefore, the Commission should make additional revisions before publishing.

Image credits: Pixabay.
