
U.S. Policymakers Should Learn From Countries Choosing Not to Regulate AI

by Daniel Castro
UK and Indian flag with a binary code background.

Every week more policymakers in the United States add to the chorus of voices calling for new laws to regulate AI. One reason for this position may be a combined Brussels and Beijing effect—policymakers in both the EU and China have vocally espoused their goals for regulating AI, creating the false perception that new laws are necessary and inevitable. But while European and Chinese policymakers may share a common vision for broad government control of this emerging technology, lawmakers in other parts of the world have taken a decidedly different approach. U.S. policymakers would be wise to heed this alternative view.

There are two emerging policy approaches to regulating AI. In one camp are China, the European Union, and Canada, where policymakers have proposed comprehensive regulatory frameworks for AI. In China, regulators have drafted broad ethical requirements for AI systems, such as giving users the right to opt out of AI decisions, as well as a newly proposed set of rules for generative AI that would, among other things, prohibit these systems from generating content that subverts state power. In the European Union, policymakers are finalizing the details of the AI Act, a law that would sort AI systems into risk categories and impose stringent rules on those posing the highest risk. Finally, in Canada, the Artificial Intelligence and Data Act (AIDA) would impose new obligations on certain high-risk AI systems and prohibit dangerous AI applications, although the details of the proposal have not yet been decided.

In the other camp are the United Kingdom and India. Both nations have explicitly noted that they have no intention of proposing new legislation. The UK has released a whitepaper outlining its “pro-innovation” approach to AI, which includes not introducing new legislation. Similarly, India’s Minister of Electronics and Information Technology recently issued a statement explaining that “the government is not considering bringing a law or regulating the growth of artificial intelligence in the country.”

Notably, this hands-off approach is not the result of an overly optimistic view of AI. The UK’s whitepaper describes concerns that AI will “damage our physical and mental health, infringe on the privacy of individuals, and undermine human rights,” while India’s statement highlights concerns such as “bias and discrimination in decision-making, privacy violations, lack of transparency in AI systems, and questions about responsibility for harm.”

These countries recognize that new technologies do not necessarily require new laws and that the harms of new regulations could outweigh any potential benefits. For example, the UK whitepaper explains that “by rushing to legislate too early, we would risk placing undue burdens on businesses.” Instead of new laws, the UK government has outlined key principles—such as transparency, accountability, and redress—that its regulators should follow when enforcing existing rules, and it plans to actively monitor and assess its approach to respond to new risks and address barriers to innovation. India is similarly proceeding cautiously, with its National AI Strategy calling for more research to address questions around transparency, privacy, and bias. India has since made ethics research an important component of its AI Centers of Excellence and focused on addressing questions around human rights, inclusion, and diversity in international forums like the Global Partnership on AI.

U.S. policymakers can learn from both the UK and India. First, there is no need to rush to create a new regulatory framework because existing laws apply. Federal regulators acknowledged in a joint statement last month that “existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.” Second, a light-touch approach to regulating AI can ensure U.S. businesses thrive in the emerging AI economy. There is no evidence for the argument that a lack of trust stemming from insufficient regulation is holding back AI adoption. On the contrary, ChatGPT set a historic record for the fastest adoption of a consumer application, growing to 100 million monthly active users in two months. Third, no country can address these issues on its own. Rather than each country creating its own rules for AI, which will only slow deployment, countries should work together on joint research in areas like privacy-enhancing technologies and on developing common technical standards for measuring bias, transparency, and risk.

It’s tempting to follow the crowd, but in this case, there are two very different paths forward on AI. The United States should not jump on the bandwagon of countries seeking to impose new laws on AI and instead work to build a coalition of allies who are willing to take, at least for the foreseeable future, a light-touch approach to regulating AI.

Image credit: Canva
