
If the DSM Disintegrates, the EU Will Fall Further Behind in AI

by Eline Chivot and Kees Verhoeven

Creating a digital single market remains critical to European success in the digital economy. The EU has adopted many legislative proposals over the past five years in pursuit of this goal, but member states have recently made various moves toward adopting their own AI initiatives, many of which will hurt overall EU AI development. Businesses developing and using AI need scale to achieve their full potential, and that depends in large part on a harmonized EU market. Conflicting national laws will make achieving this scale harder, and thus make it harder for firms in the EU to compete with their U.S. and Chinese counterparts. The EU should assert its leadership in further developing and strengthening its AI strategy in three ways.

First, EU policymakers should ensure that any national guidelines, labeling schemes, or reporting criteria for responsible and ethical use of AI remain voluntary. A number of member states have started these types of initiatives. For example, the Danish Minister for Industry, Business and Financial Affairs recently presented a new voluntary labeling system for ethical and responsible use of data, and Malta launched a voluntary AI certification program based on its AI ethical framework. Should these become mandatory, they would be an obstacle to the digital single market: an AI system labeled as ethical or low risk in one member state might not qualify as such in another. Country-specific standards and obligations could add to the administrative burden and compliance costs for companies, making it more difficult for them to bring AI products and services to market. And EU citizens may find they are unable to use an autonomous vehicle or AI-powered mobile app across borders.

Second, when crafting the forthcoming legislation for a European approach to AI, EU policymakers should not get sidetracked by the patchwork of proposals that member states have started to promote—not least because some of them are flawed or unnecessary. For example, Germany’s Data Ethics Commission recently recommended legislation to regulate the development of algorithmic systems and the use of data with prescriptive rules, including transparency and user-rights obligations. This one-size-fits-all approach ignores the complexity of algorithmic systems; their various types, uses, and impacts; and the uniqueness of each sector that develops them. And it would clash with existing laws, such as the GDPR. Just recently, a German federal officer suggested adopting tougher rules on decision-making systems—beyond what the GDPR provides. In the Netherlands, a policymaker proposed creating a mandatory register for AI systems whose automated decisions have a significant impact on people’s lives. These initiatives suggest that some countries could adopt more stringent rules than others, exposing companies to legal uncertainty and slowing progress toward a common European approach. Just as the EU did with the GDPR in creating an EU-wide approach to privacy, it should do the same for AI and preempt national governments from creating a patchwork of conflicting regulations.

Third, before creating any new laws or regulations on AI, policymakers in member states and at the EU level should listen to industry and related experts to ensure rules are reasonable and appropriate. For example, the European Commission’s High-Level Expert Group on AI has produced an assessment list for companies to use to ensure they are producing ethical AI systems, but feedback provided by companies that have tested these guidelines shows that the list needs revising. One trade association noted that the assessment list should be “more practical and flexible,” such as by providing recommendations on how to properly compose ethics boards or conduct audits. Another complained that the assessment list is too “closely related to—and in some instances even overlapping with—the requirements set in the GDPR,” which could impose duplicative work on companies. The EU will not become a global AI leader if policymakers do not engage with the private sector to craft pragmatic and effective rules.

With an AI strategy, a coordinated action plan on AI, and its forthcoming legal framework for AI, the EU has taken steps to ensure that policies for AI will be part of a single framework—the fundamental condition for businesses to develop, scale, and reap the benefits of AI across borders. To move forward as one common, credible, and competitive force in the digital age, the EU should ensure that member states do not convert voluntary arrangements into legislative mandates, that EU digital policies preempt those of member states, and that policymakers continue to consult industry. If Brussels fails to bring together the disparate interests of member states and harmonize its AI policy initiatives, the EU will likely continue to fall behind the United States and China in AI development and use.

Image credits: Wikipedia
