The pressure on President-elect Biden to reverse President Trump’s approach to AI regulation when he takes office will be hard to resist. The Trump administration has advanced a light-touch approach toward regulation to ensure continued U.S. leadership in the field. But growing concerns, especially from progressives, about AI bias in law enforcement, housing, and employment may tempt Biden to embrace the precautionary principle and rescind new guidance from the Office of Management and Budget (OMB) that has directed federal agencies to take an innovation-friendly approach to AI regulation. Changing direction would entail serious risks to U.S. innovation and competitiveness.
The OMB’s guidance reaffirms the 10 principles the White House drafted last year, such as fostering growth and engendering trust in AI systems, reducing unnecessary barriers to innovation, and safeguarding core American values. But it also reflects a shift from principles to practice by establishing a framework for federal agencies to assess potential regulatory and non-regulatory approaches to emerging AI issues.
For example, the new guidance instructs agencies to precede any regulatory action with an impact analysis that clearly articulates the problem an agency is seeking to address, whether it be a market failure (e.g., asymmetric information), protecting privacy or civil liberties, preventing unlawful discrimination, or advancing the United States’ economic and national security. While indicating the potential for limited, focused regulations in certain areas, the guidance promotes a governance framework that requires agencies to impose regulation only when the benefits of doing so outweigh the costs to AI-driven innovation and growth.
If Biden changes gears and allows the misguided notion that AI is inherently problematic to cloud the discussion on AI regulation, the result will be that the United States becomes like Europe: aspiring to be an AI leader, but in reality being more of an AI follower. OMB’s new guidance is based on a positive view of AI. It correctly holds that AI overwhelmingly benefits society while posing risks that are modest and not irreversible. When deciding whether and how to regulate in an area that may affect AI applications, the guidelines instruct agencies to “adopt a tiered approach in which the degree of risk and consequences of both success and failure of the technology determines the regulatory approach, including the option of not regulating.”
In other words, there is always likely to be at least some risk of an AI system failing or making an error, but there are a variety of approaches agencies can take to detect and remediate such failures. The goal is to find an approach that strikes a balance between addressing these risks and spurring AI innovation. Regulators do not need to choose one over the other.
The danger is that a negative narrative about AI will lead policymakers to create unnecessary barriers to developing and adopting AI. For example, it is completely legitimate for policymakers to regulate autonomous weapons systems to ensure their safe use. But it is another matter for policymakers to limit the military from using any AI systems—even in disaster recovery—because the system might be biased or otherwise faulty.
The Biden administration should recognize that market forces, public opinion, tort law, existing laws and regulations, and light-touch targeted interventions can usually manage the risks from AI systems. The OMB guidance rightly points out that agencies do not necessarily need to issue new regulations to address risks from AI systems—they have non-regulatory options. They can issue policy guidance to encourage innovation in a specific sector, conduct experiments and pilot programs to inform future decisions, or develop voluntary standards based on industry consensus.
What Biden should not do is take a leaf out of the EU’s book by implementing unnecessary new rules, such as those requiring AI products and services to undergo regulatory review before they can be introduced in the market, focusing on AI ethics, or requiring algorithmic transparency. Not only would such approaches unnecessarily slow AI adoption, but they would also make it harder for the United States to compete against China.
Ultimately, the President-elect should recognize that he cannot have his cake and eat it too: mitigating potential risks AI systems pose through unnecessarily restrictive regulations will come at a cost to AI innovation and adoption. If the United States is going to thrive in the AI economy under his presidency, Biden will have to resist going down the precautionary path.