The Trump administration unveiled a legislative framework Friday designed to establish a unified national policy for artificial intelligence (AI) in the United States. The plan seeks to preempt state-level AI laws, consolidating power in Washington and potentially undermining recent state-led efforts to regulate the rapidly evolving technology.

The core argument behind this centralization is that a fragmented regulatory landscape hinders American innovation. According to a White House statement, “A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.” The framework proposes a federal approach that overrides stricter state regulations, prioritizing AI scaling and development.

Shifting Responsibility: Parents Over Platforms

One key element of the framework is a notable shift in responsibility. Rather than placing stringent obligations on AI companies, it emphasizes parental control regarding issues like child safety. The proposal calls on Congress to equip parents with tools to manage their children’s digital environments, such as account controls and device usage limitations. While it acknowledges the need to reduce risks of sexual exploitation and self-harm, it stops short of mandating concrete, enforceable requirements for platforms.

This approach reflects a broader trend toward lighter regulation, championed by figures like White House AI czar David Sacks, a venture capitalist known for his pro-growth, “accelerationist” views. The framework aims to set a “minimally burdensome national standard” intended to accelerate AI adoption across industries.

Preemption of State Laws and Liability Shields

The framework actively seeks to preempt state AI regulations, preserving state authority only over generally applicable laws covering areas such as fraud, child protection, zoning, and states’ own use of AI. It explicitly draws a hard line against states regulating AI development itself, framing the technology as an “inherently interstate” issue tied to national security and foreign policy.

Critically, the framework proposes shielding AI developers from liability when third parties use their models for unlawful conduct. This provision would prevent states from penalizing developers for misuse of their technology by others, a key demand from the AI industry.

Industry Response: Support for National Standards

Many within the AI industry are celebrating the framework, seeing it as a pathway to faster innovation. Teresa Carlson, president of General Catalyst Institute, said startups have been asking for exactly this: “A clear national standard so they can build fast and scale.” By sparing companies from navigating conflicting state laws, the framework would ease the regulatory burden on AI developers.

Concerns Over Accountability and Oversight

Critics argue that this centralization diminishes states’ roles as early regulators, stifling experimentation and oversight. Brendan Steinhauser, CEO of The Alliance for Secure AI, accused the administration of doing “the bidding of Big Tech at the expense of regular, hardworking Americans.” The framework includes no provisions for independent oversight, enforcement mechanisms, or liability rules for novel harms caused by AI.

The administration’s stance on copyright and free speech further complicates matters. While it acknowledges fair use for AI training data, it also emphasizes preventing government censorship, potentially hindering regulation of misinformation or election interference.

Anthropic Lawsuit: First Amendment Clash

The framework’s emphasis on protecting “lawful political expression” comes as Anthropic sues the government, alleging First Amendment infringement after the Department of Defense labeled it a supply-chain risk for refusing to allow military surveillance applications. The clash highlights the administration’s broader push against “woke AI,” with Trump publicly criticizing Anthropic over questions of ideological neutrality.

Taken together, the Trump administration’s proposed AI framework prioritizes national standardization and industry growth over state-level regulation and comprehensive oversight. Its shift toward parental responsibility and liability shields for developers raises concerns about accountability and the potential for unchecked AI development.