For the first time, Washington is getting close to deciding how to regulate artificial intelligence. And the fight that’s brewing isn’t about the technology; it’s about who gets to do the regulating.
In the absence of a meaningful federal AI standard focused on consumer safety, states have introduced dozens of bills to protect residents against AI-related harms, including California’s AI safety bill SB-53 and Texas’s Responsible AI Governance Act, which prohibits intentional misuse of AI systems.
The tech giants and buzzy startups born out of Silicon Valley argue such laws create an unworkable patchwork that threatens innovation.
“It’s going to slow us down in the race against China,” Josh Vlasto, co-founder of pro-AI PAC Leading the Future, told TechCrunch.
The industry, and several of its transplants in the White House, is pushing for a national standard or none at all. In the trenches of that all-or-nothing battle, new efforts have emerged to prohibit states from enacting their own AI legislation.
House lawmakers are reportedly trying to use the National Defense Authorization Act (NDAA) to block state AI laws. At the same time, a leaked draft of a White House executive order also shows strong support for preempting state efforts to regulate AI.
A sweeping preemption that would take away states’ authority to regulate AI is unpopular in Congress, which voted overwhelmingly against a similar moratorium earlier this year. Lawmakers have argued that without a federal standard in place, blocking states would leave consumers exposed to harm, and tech companies free to operate without oversight.
To create that national standard, Rep. Ted Lieu (D-CA) and the bipartisan House AI Task Force are preparing a package of federal AI bills covering a range of consumer protections, including fraud, healthcare, transparency, child safety, and catastrophic risk. A megabill like this will likely take months, if not years, to become law, underscoring why the current rush to limit state authority has become one of the most contentious fights in AI policy.
The battle lines: NDAA and the EO
Trump displays an executive order on AI that he signed on July 23, 2025. Image Credits: ANDREW CABALLERO-REYNOLDS/AFP / Getty Images

Efforts to block states from regulating AI have ramped up in recent weeks.
The House has considered tucking language into the NDAA that would prevent states from regulating AI, Majority Leader Steve Scalise (R-LA) told Punchbowl News. Congress was reportedly working to finalize a deal on the defense bill before Thanksgiving, Politico reported. A source familiar with the matter told TechCrunch that negotiations have focused on narrowing the scope, potentially preserving state authority over areas like kids’ safety and transparency.
Meanwhile, a leaked White House EO draft reveals the administration’s own potential preemption strategy. The EO, which has reportedly been put on hold, would create an “AI Litigation Task Force” to challenge state AI laws in court, direct agencies to assess state laws deemed “onerous,” and push the Federal Communications Commission and Federal Trade Commission toward national standards that override state rules.
Notably, the EO would give David Sacks – Trump’s AI and Crypto Czar and co-founder of VC firm Craft Ventures – co-lead authority over creating a single legal framework. This would give Sacks direct control over AI policy that supersedes the typical role of the White House Office of Science and Technology Policy and its head, Michael Kratsios.
Sacks has publicly advocated for blocking state regulation and keeping federal oversight minimal, favoring industry self-regulation to “maximize growth.”
The patchwork argument
Sacks’s position mirrors the viewpoint of much of the AI industry. Several pro-AI super PACs have emerged in recent months, throwing hundreds of millions of dollars into local and state elections to oppose candidates who support AI regulation.
Leading the Future – backed by Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale – has raised more than $100 million. This week, Leading the Future launched a $10 million campaign pushing Congress to craft a national AI policy that overrides state laws.
“When you’re trying to drive innovation in the tech sector, you can’t have a situation where all these laws keep popping up from people who don’t necessarily have the technical expertise,” Vlasto told TechCrunch.
He argued that a patchwork of state regulations will “slow us down in the race against China.”
Nathan Leamer, executive director of Build American AI, the PAC’s advocacy arm, confirmed the group supports preemption even without AI-specific federal consumer protections in place. Leamer argued that existing laws, like those addressing fraud or product liability, are enough to handle AI harms. Where state laws often seek to prevent problems before they arise, Leamer favors a more reactive approach: let companies move fast, and address problems in court later.
No preemption without representation
Alex Bores speaking at an event in Washington, D.C., on November 17, 2025. Image Credits: TechCrunch

Alex Bores, a New York Assembly member running for Congress, is one of Leading the Future’s first targets. He sponsored the RAISE Act, which requires large AI labs to have safety plans to prevent critical harms.
“I believe in the power of AI, and that is why it is so important to have reasonable regulations,” Bores told TechCrunch. “Ultimately, the AI that’s going to win in the marketplace is going to be trustworthy AI, and often the market undervalues or puts poor short-term incentives on investing in safety.”
Bores supports a national AI policy, but argues states can move faster to address emerging risks.
And it’s true that states move quicker.
As of November 2025, 38 states have adopted more than 100 AI-related laws this year, mainly targeting deepfakes, transparency and disclosure, and government use of AI. (A recent study found that 69% of those laws impose no requirements on AI developers at all.)
Activity in Congress provides more evidence for the slower-than-states argument. Hundreds of AI bills have been introduced, but few have passed. Since 2015, Rep. Lieu has introduced 67 bills to the House Science Committee. Only one became law.
More than 200 lawmakers signed an open letter opposing preemption in the NDAA, arguing that “states serve as laboratories of democracies” that must “retain the flexibility to confront new digital challenges as they arise.” Nearly 40 state attorneys general also sent an open letter opposing a ban on state AI regulation.
Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders – authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship – argue the patchwork concern is overblown.
AI companies already comply with tougher EU regulations, they note, and most industries find a way to operate under varying state laws. The real motive, they say, is avoiding accountability.
What could a federal standard look like?
Lieu is drafting a 200-plus-page megabill he hopes to introduce in December. It covers a range of issues, including fraud penalties, deepfake protections, whistleblower protections, compute resources for academia, and mandatory testing and disclosure for large language model companies.
That last provision would require AI labs to test their models and publish the results – something most do voluntarily now. Lieu hasn’t yet introduced the bill, but he said it doesn’t direct any federal agencies to review AI models directly. That differs from a similar bill introduced by Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT), which would require a government-run evaluation program for advanced AI systems before they are deployed.
Lieu acknowledged his bill wouldn’t be as strict, but he said it has a better chance of making it into law.
“My goal is to get something into law this term,” Lieu said, noting that House Majority Leader Scalise is openly hostile to AI regulation. “I’m not writing the bill that I’d have if I were king. I’m trying to write a bill that could pass a Republican-controlled House, a Republican-controlled Senate, and a Republican-controlled White House.”