Last week, U.S. House Energy and Commerce Chairman Brett Guthrie, R-Ky., announced a full committee markup of the committee's budget reconciliation text, set for Tuesday, 13 May at 2 p.m. ET. A memorandum from the majority staff to committee members and staff, published Sunday, 11 May, includes language that could halt enforcement of state-level laws related to artificial intelligence systems.

In part 2 (page 9), subsection (c) states that "no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of enactment of this Act."

Additional text, found under section (c) (page 6), addresses the "rule of construction" of the moratorium.

In his press release announcing the full markup, Guthrie said that through the reconciliation process the committee "is working to ... unleash American energy and innovation."

On Monday afternoon, Energy and Commerce Committee Democrats criticized provisions in the reconciliation bill.

Commerce, Manufacturing, and Trade Subcommittee Ranking Member Jan Schakowsky, D-Ill., said the AI moratorium "gives Big Tech free rein to take advantage of children and families" and would "allow AI companies to ignore consumer privacy protections, let deepfakes spread, and allow companies to profile and deceive consumers using AI."

California Privacy Protection Agency Executive Director Tom Kemp, whose agency is currently drafting state regulations related to automated decision-making technology, said, "When we block responsible safeguards in the face of rapid technological change, we make ourselves — and future generations — less safe from privacy harms." Kemp called on "Congress to strike this provision and uphold its longstanding approach to federal privacy and technology legislation: establish a baseline for protections while preserving states' authority to adopt stronger laws."

IAPP Managing Director, Washington, D.C., Cobun Zweifel-Keegan, CIPP/US, CIPM, analyzed the proposal, noting that "the ban on enforcement appears designed to avoid applying to any technology-neutral law, which means it would potentially not affect civil rights, consumer protection, privacy or other laws that treat harmful outcomes the same whether facilitated by AI systems or not."
As can be seen in the IAPP U.S. State AI Governance Legislation Tracker, there is no shortage of AI-related bills across the country.

In a post for Lawfare, Kevin Frazier and Adam Thierer wrote that the proliferation of AI bills "could undermine the nation's efforts to stay at the cutting edge of AI innovation at a critical moment when competition with China for global AI supremacy is intensifying." They call on Congress "to get serious about preemption."

In comments provided to the IAPP, Goodwin Partner Omer Tene said, "There's certainly great concerns in industry about the avalanche of state legislative bills regulating various aspects of AI. If privacy regulation is a patchwork, this is emerging to resemble an artwork comprising microplastics."

It is unclear whether the 10-year moratorium will survive the markup process or, if enacted, whether it would withstand legal challenges.

Tene adds that "federal moratoriums are rare and have typically focused on very specific technologies, which are regulated at the federal level (e.g. commercial drones). Here, Congress purports to block regulation of an incredibly broad, all encompassing technology, and to do so absent any semblance of federal regulation."

Tene points to legal precedent. "In my opinion, it's in tension with the Tenth Amendment anti-commandeering principle as applied in Murphy v. NCAA."

Husch Blackwell Partner David Stauss, CIPP/E, CIPP/US, CIPT, FIP, PLS, also questioned whether such a moratorium would be legal.

He points out that "some states, such as Kentucky, have passed laws regulating the state's use of AI systems. Other states, such as Kansas, have passed laws restricting the state's use of DeepSeek. Based on the limited information available, even that type of regulation would be prohibited."

How many laws would be affected is up in the air, but depending on how the federal government defines terms, the scope could be considerable.
"A lot would depend on how the terms are defined," Stauss said. "Keep in mind that the Colorado AI Act uses a broad definition of AI based on the OECD definition, while other proposed state definitions have used narrower definitions intended to cover only things like large language models. While Colorado has a broad definition, it includes many carveouts such as firewalls, storage, and calculators, to name a few. If the federal definition is really broad, then all sorts of laws could be implicated, even product liability and medical malpractice laws as extreme edge cases."

In addition to the plethora of proposed AI bills, it is unclear what effect such a moratorium would have on existing privacy laws.

"The right to opt out of profiling in the state consumer privacy laws would be preempted given that those are defined specifically by reference to automated decision making," Stauss said.

Notably, the California Privacy Protection Agency recently published modified draft text for its proposed automated decision-making technology regulations. These are open to public comment until 2 June.

Stauss added, "While we tend to focus on the algorithmic discrimination, provenance, and disclosure laws, states have passed laws on many different AI-related topics such as the use of AI in elections, deep fakes, name, image and likeness, CSAM, and healthcare. Numerous state insurance regulators have also adopted the NAIC model bulletin, with insurance being a state-regulated industry."

Consumer Reports announced its opposition to the proposed moratorium Monday afternoon. "Congress has long abdicated its responsibility to pass laws to address emerging consumer protection harms; under this bill, it would also prohibit the states from taking actions to protect their residents," CR Policy Analyst for AI Issues Grace Gedye said.

"This incredibly broad preemption would prevent states from taking action to deal with all sorts of harms, from non-consensual intimate AI images, audio, and video, to AI-driven threats to critical infrastructure or market manipulation, to protecting AI whistleblowers, to assessing high-risk AI decision-making systems for bias or other errors, to simply requiring AI chatbots to disclose that they aren't human."

This article has been updated to include response from House Committee Democrats, the CPPA and Zweifel-Keegan's analysis.

Jedidiah Bracy is the editorial director for the IAPP.