The Future of AI Regulation

All aboard!

It’s mid-December in Washington, D.C., and the last policymaking train of the year is about to leave Capitol Hill. Over the weekend, lawmakers unveiled the National Defense Authorization Act (NDAA), the annual defense bill that, for many decades, has been the last must-pass legislative vehicle to which lawmakers try to attach their policy priorities.

The draft of the NDAA did not include many of the financial services policy riders lawmakers and industry groups were hoping to attach. And, to the surprise of many, it also excluded language that would have kept state governments from writing their own artificial intelligence (AI) regulations. As readers may recall, House Republicans included a ban on state-based AI action in their version of the One Big Beautiful Bill Act (OBBBA), but that provision was stripped before the bill passed the Senate. The NDAA was another opportunity for Republicans to try to curb state AI regulations.

What happened this time around — and, in the absence of an edict from Congress — what happens next?

Let’s take a look.

What Happened to the NDAA AI Provision?

When Republicans tried to attach the AI ban to the OBBBA, the provision was less-than-germane to a budget reconciliation bill in the Senate. That’s because provisions of a budget reconciliation package must directly deal with one of three things: taxes, spending, or the nation’s debt. As a result, the Senate was forced to drop the provision. The NDAA, on the other hand, does not come with similar rules, so why was the provision left out this time if GOP leaders support it? (Which they do.)

As it turns out, opposition from Democrats is strong, but plenty of Republicans outside of Washington, D.C. have cautioned Congress against acting on this matter, too.

This past November, the National Association of Attorneys General sent a letter on behalf of a bipartisan coalition of 36 state attorneys general to U.S. House and Senate leaders, urging them to reject proposals for a federal moratorium on state AI regulation. They argued AI poses significant risks to the public, especially vulnerable populations like children and seniors.

More than 200 state lawmakers also sent a letter to members of Congress in late November urging them to oppose a ban. “The federal preemption provision under discussion could nullify a wide range of laws that states have already adopted to address urgent digital issues,” they warned. “In recent years, legislatures across the country have passed AI-related measures to strengthen consumer transparency, guide responsible government procurement, protect patients, and support artists and creators. These laws represent careful, good-faith work to safeguard constituents from clear and immediate AI-related harms.”

According to The Hill, Rep. Marjorie Taylor Greene (Ga.), the new thorn in President Donald Trump’s side, as well as Arkansas Gov. Sarah Huckabee Sanders (R) and Florida Gov. Ron DeSantis (R), also oppose a ban. “There should not be a moratorium on states’ rights for AI,” Rep. Greene wrote on social media. “States must retain the right to regulate and make laws on AI and anything else for the benefit of their state. Federalism must be preserved.”

What’s Next at the Federal Level?

On Monday, President Donald Trump announced on social media that he plans to take this issue into his own hands. Specifically, he said he will soon issue an executive order (EO) to limit states from regulating AI.

The president’s goal is to maintain the United States’ relative advantage on AI innovation and development. “We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS,” he proclaimed on Truth Social.

While the order has not yet been released, back in November Bloomberg did some digging into what such an EO may look like. Reporter Oma Seddiq said the EO would likely direct the U.S. attorney general to establish a task force that would challenge state AI laws should they “unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful.” States with AI laws that are considered burdensome or restrictive would also lose eligibility for some federal broadband funding under the EO the Trump administration has drafted, Seddiq reported.

Of course, as with all executive orders, the edict would lack the force of law. Additionally, as The New York Times said, efforts by the White House to block state laws could be challenged in court. “The president cannot pre-empt state laws through an executive order, full stop,” said Travis Hall, the director for state engagement at the Center for Democracy and Technology. “Pre-emption is a question for Congress, which they have considered and rejected, and should continue to reject.”

In other words: states could defy the president, but it is likely President Trump will try to exact some revenge if they do.

While President Trump has pledged action, Punchbowl said congressional GOP leaders will at least consider dropping the idea into every must-pass piece of legislation. Punchbowl also noted Rep. Jay Obernolte (R-Calif.), who chaired the House’s AI task force, continues to work on a bill that would preempt state laws. While he said it has a “good chance of passing,” he has not been able to attract Democratic support.

Rep. Obernolte is not the only lawmaker who has introduced AI-related legislation in Congress. The Brennan Center for Justice is tracking several pieces of legislation. These bills each would do one (or more) of these things:

  • Impose restrictions on or clarify the use of AI systems;
  • Require evaluations of AI systems and/or their uses;
  • Impose transparency, notice, and labeling requirements;
  • Establish or designate a regulatory authority or individual to oversee AI systems;
  • Protect consumers through liability measures;
  • Direct the government to study AI;
  • Impose restrictions on or requirements for the data underlying AI systems;
  • Modify procurement policies that would affect government use of AI; or
  • Direct the government to use or augment its use of AI.

In the meantime, as the law firm White & Case noted in September, “Currently, there is no comprehensive federal legislation or regulations in the US that regulate the development of AI or specifically prohibit or restrict their use.”

What’s Next at the State Level?

The threat of a federal moratorium did not stop state lawmakers from acting in 2025 to impose new restrictions on AI. Indeed, according to the National Council of State Legislatures (NCSL), this year lawmakers in all 50 U.S. states introduced legislation on the topic. The vast majority — 38 states — adopted or enacted around 100 measures concerning AI.

These laws represented a range of concerns, including content ownership and intellectual property, workers’ rights and protections, and personal injury and harassment. NCSL offered several examples of state-based action, including:

  • Arkansas, which enacted legislation that clarifies who owns AI-generated content and specifies that generated content should not infringe on existing copyright or intellectual property rights.
  • Montana, which enacted legislation that specifies that the state government cannot take actions that restrict the ability to privately own or make use of computational resources for lawful purposes, unless deemed necessary to fulfill a compelling government interest.
  • New Jersey, which adopted a resolution urging generative AI companies to make voluntary commitments regarding employee whistleblower protections.
  • New York, which amended the civil service law to strengthen worker protections, requiring that when the state government uses an AI system, the system cannot affect employees’ existing rights under a collective bargaining agreement and cannot result in the displacement or loss of a position.
  • North Dakota, which enacted a law to prohibit individuals from using an AI-powered robot to stalk or harass other individuals.
  • Oregon, which now specifies that a non-human entity, including an AI-powered agent, cannot use the titles of specific licensed and certified medical professionals, such as registered nurse or certified medication aide.

Additionally, according to NCSL, this year “at least half the states enacted legislation addressing deepfakes, which use generative AI to create seemingly realistic, but fabricated, images and sounds.” NCSL said new laws also focus on election campaigning, nonconsensual intimate images, and simulated child sexual abuse material.

As talk of a presidential EO swirled, in November NCSL proclaimed, “Lawmakers’ efforts to regulate and invest in AI are expected to continue in 2026.” (NCSL also noted that it is working to ensure Congress does not enact a federal preemption of state AI laws.)

There’s a lot at stake in the battle between Washington, D.C. policymakers and those in state capitals. As a Congressional Research Service report pointed out, a Goldman Sachs study estimates that AI could drive a 0.9 percent cumulative increase in the U.S. economy in the short run. Another study suggests that AI adoption could expand the economy by about 35 percent over the long term.

While many factors will affect AI’s impact on the economy, consumers, workers, public safety, equity, and more, policymaking certainly is one of the biggest forces facing the industry. President Trump understands that fact, which is why he is, to the relief of the technology industry, expected to issue his EO soon.

Will states abide by his edict? Only time, and, more likely, federal courts, will tell.