The debate over artificial intelligence regulation has reached a critical juncture, with a high-stakes battle unfolding between federal and state authorities. As Washington edges closer to establishing national standards, the core conflict is not about the technology itself but about who will set the rules that govern it.
States Lead While Federal Action Lags
In the absence of robust federal AI safety standards, states have stepped into the regulatory void, introducing dozens of bills designed to protect residents from AI-related harms. Notable examples include California’s SB-53, which imposes transparency and safety-reporting requirements on frontier AI developers, and Texas’s Responsible AI Governance Act, which prohibits the intentional misuse of AI systems. This state-level action reflects a growing urgency to address AI risks before they escalate.
The tech industry, however, strongly opposes this decentralized approach, arguing it creates an unworkable patchwork that stifles innovation. Industry-backed groups claim a fragmented regulatory landscape will hinder competitiveness, particularly in the race against China. This argument is echoed by some within the White House who favor either a uniform national standard or no regulation at all.
Federal Preemption Efforts Gain Traction
Behind the scenes, powerful interests are pushing for federal preemption, which would strip states of their authority to regulate AI. Lawmakers in the House are reportedly weighing language in the National Defense Authorization Act (NDAA) that would block state AI laws. Simultaneously, a leaked draft of a White House executive order (EO) signals strong support for overriding state efforts.
The proposed EO would establish an “AI Litigation Task Force” to challenge state laws in court, direct agencies to evaluate state rules deemed “onerous,” and push the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) toward national standards. Critically, the draft would place David Sacks, Trump’s AI and crypto czar, in a co-lead role, granting him significant influence over AI policy.
Industry Funding Fuels Opposition to State Regulation
Pro-AI super PACs, backed by major tech investors such as Andreessen Horowitz and OpenAI president Greg Brockman, have committed hundreds of millions of dollars to state and local elections to oppose candidates who support AI regulation. Leading the Future, one such PAC, has raised over $100 million and launched a $10 million campaign to pressure Congress into enacting a national AI policy that preempts state laws.
Industry advocates argue that existing laws, such as those addressing fraud or product liability, are sufficient to handle AI harms. This stance favors a reactive approach: let companies innovate rapidly and address problems in court as they arise. However, critics argue that this approach leaves consumers vulnerable to unchecked risks.
The State-Federal Dynamic: A Necessary Tension?
Despite efforts to block state regulation, state lawmakers and attorneys general have pushed back, arguing that states serve as vital “laboratories of democracy” capable of addressing emerging digital challenges more quickly than the federal government. To date, 38 states have adopted over 100 AI-related laws, primarily targeting deepfakes, transparency, and government use of AI.
Rep. Ted Lieu (D-CA) is drafting a comprehensive federal AI bill covering penalties for AI-enabled fraud, protections against deepfakes, safeguards for whistleblowers, and mandatory testing for large language models. While he acknowledges the bill may not be as strict as some proposals, he believes it stands a better chance of passing a divided Congress.
The standoff between federal and state authorities underscores a fundamental question: how do we balance innovation with safety and accountability in the age of AI? The coming months will determine whether states retain their regulatory autonomy or sweeping federal preemption takes hold. The outcome will shape not only the future of AI governance but also the broader relationship between federal and state power in the digital age.
