
Calls for Global Governance of AI Miss the Mark

by Joshua New

Many policymakers have grown concerned about high-profile stories of algorithms gone awry, prompting calls not just for national regulation of artificial intelligence (AI), but for global regulation as well. At the 2016 meeting of G7 Information and Communications Technology (ICT) ministers, Japan called for establishing basic rules for AI. A year later in Italy, G7 ICT ministers declared the importance of “exploring multi-stakeholder approaches to policy and regulatory issues” associated with AI. More recently, the European Group on Ethics in Science and New Technologies (EGE), an advisory body for the European Commission, called for creating “a common, internationally recognized ethical and legal framework for the design, production, use and governance of artificial intelligence.” And the European Economic and Social Committee, which advises the European Parliament, recommended that the European Union establish “clear global policy frameworks for AI.” Trying to establish global regulations for AI would be a mistake.

These proposals and others like them fail to justify why AI warrants binding international rules. Moreover, most proposals do not offer specifics, and those that do are typically unworkable from a regulatory perspective. And countries pushing for these proposals often view AI through the lens of the “precautionary principle” and would shape regulation accordingly, stifling innovation. This would be the equivalent of establishing global restrictions on the use of genetically modified organisms decades ago due to speculative fears, robbing humanity of the massive benefits they have since provided.

Even if global regulations could be designed to enable innovation, the fact is that there is no need to regulate technologies at the international level. And in this case, AI is simply software, and software is not regulated internationally. Furthermore, it makes little sense to single out AI itself for regulation, as it is merely a tool to accomplish tasks in a wide variety of sectors, including finance, transportation, and healthcare, all of which are already subject to regulation. If policymakers believe AI poses challenges for international finance that existing laws do not effectively address, for example, then countries should enact new global financial regulations.

One motivation for calls for global regulation are concerns that China, which is aggressively pursuing global leadership in AI, will shape the technology in ways that are inconsistent with democratic values. However, even if there were a global regulatory framework to address this risk, China would not agree to it. If countries have ethical concerns about Chinese AI systems, they can simply pass laws governing AI within their own borders. Domestic organizations using AI systems would have to comply, and Chinese AI firms would have to develop systems that comply with these laws to compete.

The fact that these international proposals typically offer impractical or harmful policy recommendations is another reason to be skeptical. For example, the EGE report states “‘Autonomous’ systems should only be developed and used in ways that serve the global social and environmental good, as determined by the outcomes of deliberative democratic processes… [autonomous systems] should be designed so that their effects align with a plurality of fundamental human values and rights.” This sounds innocuous, but only on its surface. Does this mean that there would have to be a vote to determine if companies are allowed to use AI systems to automate work tasks? If people were to think losing a job to automation is a violation of human rights, could this lead to a global ban on this kind of AI? If any nation is that afraid of progress and wants to limit AI systems this way, it should be free to do so and pay the price in the form of slower economic growth. But the rest of the nations in the world should be free to pursue AI as they see fit. Likewise, if a government wants to develop AI that violates human rights, there is little reason to believe that a global agreement would stop this. Many nations already systemically violate human rights without the use of AI.

This is not to say that the EU, G7 countries, and other multinational bodies should not work to influence the development of AI to mitigate potential risks, such as algorithmic bias. But rushing into shared agreements to limit the development and use of AI is not productive. Rather, policymakers should shape the development of AI by enacting policies that help their nations lead in the domestic development and adoption of AI. Countries that lead in AI R&D, have skilled workforces capable of developing and using AI, and embrace AI-friendly policies will enjoy far more influence over how the world uses AI than countries that simply race to regulate and limit it. For example, the EU will have more success in ensuring consumers can obtain explanations for algorithmic decisions by funding research on algorithmic explainability than by prohibiting companies from using algorithms that cannot be easily explained to consumers.

Fortunately, the tide may be shifting. In March 2018, G7 Innovation Ministers issued a statement on artificial intelligence that focuses on fostering the development of AI on a global scale, rather than regulating it. The statement describes how G7 members will support economic growth from AI, increase trust in and adoption of AI, and promote inclusivity in AI development and deployment, such as by supporting efforts to raise public awareness about the benefits of AI, investing in applied R&D related to AI, and supporting voluntary industry-led technical standards. This is a welcome step in the right direction, but it remains to be seen whether this enlightened approach will become the norm. Moreover, to the extent that global rules regarding AI are needed, they should be ones that enable, rather than shackle, AI innovation. In particular, policymakers should pursue global trade rules that restrict the ability of nations to limit cross-border data flows, given the importance of access to large datasets for AI. Policymakers should pursue this more productive approach to governing AI, rather than race to restrict it.

Image: Italian G7 Presidency 2017
