In response to China's announcement of draft regulations for generative AI designed to bolster the Chinese Communist Party's (CCP) influence on global AI rules, Senate Majority Leader Chuck Schumer (D-NY) unveiled a proposal last month to regulate AI in the United States. He has not yet announced the exact details of the proposed bill, but concerns about AI, combined with distrust of the technology sector, will put pressure on Congress to shift away from the light-touch, targeted approach to regulation that has long served the United States well, in favor of a new approach that embraces the precautionary principle.
However, Senator Schumer should keep faith in, and advocate for, a light-touch approach to AI regulation because it provides a much-needed counter to the heavy-handed regulatory models of China and the EU. China regulates AI to ensure that it conforms to the authoritarian goals of the CCP. Likewise, the EU wants to place strict rules on the use of AI out of fear that the technology will otherwise be used for harmful purposes antithetical to European values.
The United States can promote a better approach that builds on its legacy of light-touch digital technology policy. Other countries like the United Kingdom and India are already showing that a pragmatic, innovation-friendly approach is possible, but U.S. leadership on the global stage is sorely needed. That leadership has a long track record. For example, in the 1990s the Clinton administration championed the multistakeholder approach to Internet governance, balancing the needs of government, commercial, and civil society stakeholders. For years, the U.S. government has backed a free and open Internet, helping to resist efforts by other countries to censor content online and impose data localization requirements. What it didn't do was create a federal Internet law, and it doesn't need to create a federal AI law either.
Instead, Congress should continue to support sector-specific regulations, where necessary, to help pave the way for AI innovation while limiting harms. It is a strength, not a weakness, of the U.S. system that executive branch agencies promulgate regulations because federal regulators are best placed to identify and target the risks that are specific to each domain. For instance, the Department of Transportation (DOT) is best placed to regulate the use of autonomous vehicles while the Food and Drug Administration (FDA) is best placed to regulate AI-based medical devices.
That's not to say there isn't a role for Congress in creating an innovation-friendly regulatory environment that fosters transparency, responsibility, and accountability. Among other steps, it can pass legislation on algorithmic accountability, establish a national privacy framework, increase the technical expertise of federal regulators, and charge federal agencies with developing sector-specific AI strategies that support the responsible deployment of AI. But broad regulation of AI should not be on the table right now.
The United States still has an outsized say in shaping global AI norms and plenty of allies. Senator Schumer should hold the U.S. position firm and resist going down the innovation-harming precautionary path.
Image credits: Wikimedia Commons