Earlier this summer, Congress considered a pause on enforcing state AI laws in the One Big Beautiful Bill, but ultimately dropped the measure from the final text. Soon after, the White House released its AI Action Plan, urging the removal of “red tape and onerous regulation” that hinders AI, including at the state level. As AI spreads across critical sectors, the United States faces an increasingly fragmented regulatory landscape. Without federal preemption to rein in state efforts, states are rushing to impose audits, transparency mandates, and sector-specific obligations—often with overlapping or conflicting rules that extend beyond state borders.
Consider some of the policy areas where federal preemption could resolve this fragmentation.
Audits and Impact Assessments
States such as New York, Illinois, Colorado, and California have proposed laws requiring companies to conduct independent audits for bias or disparate impact, perform annual impact assessments, and report on AI system design and operation. For example, New York's S 1169A would mandate third-party audits assessing discrimination, accuracy, and privacy, among other factors. While these state measures aim to strengthen oversight and transparency, their varying standards and extraterritorial reach leave companies facing conflicting audit requirements across states, driving up costs and complicating nationwide operations. As a result, companies may divert resources from actual risk mitigation to procedural compliance. That shift weakens oversight by rewarding formal box-checking over practical risk management, such as voluntary adoption of the National Institute of Standards and Technology's AI Risk Management Framework, and can leave some risks insufficiently addressed.
AI Model Transparency
Some states have proposed or enacted legislation requiring detailed disclosures about AI models, such as their training data and known limitations. Colorado's SB 24-205, for instance, requires developers to provide high-level summaries of the categories of data used to train high-risk AI systems, while other states have proposed different documentation and disclosure requirements. This patchwork forces companies to navigate both public-facing and regulator-only disclosures, multiplying costs and risks. The required scope also varies: California's AB 2013 may compel revealing trade secrets or security-sensitive details, creating new risks for companies and setting a dangerous precedent that foreign governments, including China, may exploit to justify similar provisions. Inconsistent state-level disclosure rules, particularly when they touch on proprietary information, make compliance more difficult and riskier for companies.
Consumer Disclosures
Some states require companies to disclose their use of AI to consumers. Illinois' HB 3021, for example, would mandate disclosure in commercial interactions where AI could be mistaken for a human. Other states, such as Maine, impose different thresholds, notification methods, or sector-specific requirements: whereas Illinois' HB 3021 would require disclosure whenever AI could be mistaken for a human, Maine's LD 1727 requires it only when AI use could actually mislead or deceive a reasonable consumer. This forces businesses to tailor consumer disclosures state by state, complicating user experience design and fragmenting consumer rights across the country.
AI Restrictions and Bans
Some states have advanced laws limiting AI use in hiring, credit scoring, housing, or surveillance. For example, North Carolina’s HB 970 targets algorithmic rent-setting in real estate, whereas many other states, like Florida, have no such restrictions. This uneven regulatory landscape discourages companies from developing or deploying AI in these sectors. Additionally, it hinders the scalability of AI solutions, particularly in sectors like healthcare, finance, and housing, where consistency is crucial.
Some policymakers have considered exempting certain categories of state AI laws from federal preemption, but most state efforts target areas with little consensus and significant cross-border impact. Carving them out would only preserve fragmentation and compliance burdens. The core issue is not whether audits or transparency mandates serve valid goals, but that their inconsistent application across states creates unnecessary complexity.
Congress should preempt these high-impact, low-consensus areas of state AI policy. A uniform federal framework would reduce duplicative compliance, prevent states from setting de facto national standards, and provide the predictability needed to support innovation, consumer protection, and market stability.
Image Credit: Diego Delso, delso.photo, CC BY-SA 4.0, via Wikimedia Commons