
Last week, U.S. House Speaker Mike Johnson (R-La.) and Democratic Leader Hakeem Jeffries (D-N.Y.) launched a bipartisan Task Force on Artificial Intelligence (AI) that will develop legislative proposals to ensure the United States “continues to lead the world in AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats.”
The news comes as U.S. voters show growing skepticism about the technology, especially as it relates to financial services and other consumer markets like healthcare. According to a survey by Pew Research Center, for example, only 18 percent of Americans are more excited about AI than they are worried; the rest of U.S. adults are either more worried than excited or feel both in equal measure.
But in the wake of the political divisiveness that has brought federal legislating to a near standstill, many states aren’t waiting for Washington to act. Let’s take a look at how states are handling AI-related policy.
State Lawmakers Have Introduced Hundreds Of AI-Related Bills
According to Axios, more than 400 AI-related bills have been introduced in at least 40 state legislative bodies since the beginning of the year, roughly six times the 67 bills that had been introduced in state legislatures by early 2023. In January 2024 alone, when most state legislatures opened their new sessions, 211 AI-related bills were introduced.
Axios says state lawmakers are introducing an average of 50 new AI bills a week.
The bills target everything “from bias and discrimination to facial recognition technology and deepfakes,” Axios says. Some legislation tackles issues unique to a particular state. A bill in Tennessee, for example, would address copyright concerns raised by the local music industry. Meanwhile, according to Government Technology magazine, California lawmakers are trying to protect actors and voice-over artists from AI’s anticipated impact on the movie industry. (AI was a prominent issue in the recent Hollywood writers’ and actors’ strikes.)
Data privacy is one major concern that seems to span all states. The Brennan Center for Justice notes at least 12 states — California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Nevada, Oregon, Tennessee, Texas, and Virginia — now regulate how organizations can use automated processing systems to profile consumers based on their personal data. “While AI is not explicitly mentioned” in these laws, the Brennan Center says “the automated decision-making addressed in these laws includes the use of algorithms such as AI.”
The states with the most bills under consideration are New York (65), California (29), Tennessee (28), Illinois (27), and New Jersey (25). In fact, only two states whose legislatures are currently in session don’t have any AI legislation under consideration: Alabama and Wyoming. Axios noted Connecticut already has a law on the books that requires ongoing assessments to ensure AI doesn’t cause discrimination or disparate impact.
To give structure to these efforts, The Council of State Governments, a nonpartisan organization that promotes collaboration and coordination among state policymakers, has outlined a set of principles. These guidelines ask state lawmakers to:
- Ensure the design, development, and use of AI are informed by a “collaborative dialogue with stakeholders” from a range of disciplines.
- Protect individuals from the unintended impacts of AI and from abusive data practices.
- Ensure consumers have power over how an AI system collects and uses data about them and that individuals can opt out of an AI system in favor of a human alternative.
- Protect against discrimination and ensure “AI systems are designed in an equitable way.”
- Ensure any organization or individual that develops and deploys an AI system complies with rules and standards governing those systems and is held accountable for failing to meet them.
But legislators aren’t the only policymakers considering taking action at the state level.
How Governors Are Handling AI
In October 2023, President Joe Biden issued an executive order that established safety and security measures to govern AI development. Chief executives in each state also are using their executive powers to shape AI policy in their state.
Last June, Gov. Wes Moore (D-Md.) signed an executive order that outlined his state’s principles for AI development. These guidelines are similar to the ones outlined by The Council of State Governments. While Gov. Moore calls for efforts to protect data privacy, ensure safety and security, and promote fairness and equity, his order also acknowledges that, “when used responsibly and in human-centered and mission-aligned ways, AI has the potential to be a tremendous force for good.” Gov. Moore also said his state would commit “to exploring ways AI can be leveraged to improve state services and resident outcomes.”
A few months after Gov. Moore issued his policy outline, Gov. Gavin Newsom (D-Calif.) signed an executive order that outlined how his administration would study the development, use, and risks of AI and options for regulation and oversight. Specifically, the order directed state agencies to analyze whether AI poses a threat to the state’s energy infrastructure and develop a framework to analyze generative AI’s impact on the state’s vulnerable communities. In Pennsylvania, Gov. Josh Shapiro (D) has outlined 10 principles to guide AI development.
Efforts to address AI are not coming only from Democrats.
Last month, Gov. Glenn Youngkin (R-Va.) signed Executive Order 30, which establishes guidelines for the use of AI in schools and sets policy and information technology standards the governor says will “safeguard the state’s databases while simultaneously protecting the individual data of all Virginians.” The order came with a promise of $600,000 in proposed funding to launch new AI pilot programs.
Gov. Kevin Stitt (R-Okla.) is another GOP policymaker who is getting out in front of AI. He signed an executive order last fall that created a task force to study the “potential uses, benefits and security vulnerabilities of artificial intelligence and generative artificial intelligence.”
In his order, Gov. Stitt sounded much more excited about the benefits of AI than concerned about the drawbacks. “AI has the potential to revolutionize the way our society operates,” Gov. Stitt said. “The private sector is already finding ways to use it to increase efficiency. Potential exists for the government to use AI to root out inefficiencies and duplicate regulations, and it is an essential piece of developing a workforce that can compete on a global level.”
In many ways, the governors’ varying stances are starting to reflect an emerging partisan divide among voters when it comes to AI, though the views of Republican and Democratic voters do not neatly align with how their respective parties’ leaders are governing.
Voters Are Worried About AI, But Disagree On Solutions
Late last month, a new nonprofit organization called the AI Policy Institute (AIPI) released a survey that found the vast majority of Americans, 76 percent, would support a candidate who says they favor AI regulation. Additionally, 55 percent of respondents said they want any AI policy solutions to be “bipartisan.”
Those findings are in line with the Pew survey discussed above that revealed most Americans have a healthy skepticism of AI.
When it comes to the actual impact AI will have on their lives, however, Politico noted the AIPI poll showed a partisan divide. For example, GOP voters are somewhat less favorable toward regulating AI than Democrats. That leaning comes despite the fact that Republican voters are more likely to say AI will harm the working class, the middle class, and “people like” them. Meanwhile, 66 percent of Democrats said AI will be “good” for people “like them,” and 64 percent said it will be good for society in general.
The AIPI survey’s findings were similar to those of an Ipsos poll released more than eight months earlier, in May 2023.
That survey found 43 percent of Democrats have a favorable view of AI while just 31 percent of Republicans do. Additionally, 56 percent of Democrats told Ipsos they think AI will have a positive impact on the lives of average Americans; only 36 percent of Republicans would venture that claim. And when given a list of words to describe how they feel about AI, most Democrats said “curious” while a plurality of Republicans said “uninterested.”
The Ipsos poll also asked about AI regulation. When asked which stance came closer to their opinion, 65 percent of Republicans said, “it is the responsibility of the individual company developing the AI to ensure it is accurate and not harmful” while 55 percent of Democrats said, “it is the government’s responsibility to set rules and limit risks of AI.”
In other words, while voters broadly align with their party leaders on the policy actions to take (more regulation versus less), Democratic voters seem to be at odds with the state lawmakers, governors, and federal policymakers they elected when it comes to the benefits of AI. The same goes for the GOP: Republican voters are less excited about the promise of AI than their representatives in Washington or in statehouses around the country, and more skeptical of regulation.
Could that discrepancy actually temper partisan bickering around AI? Perhaps for a while.
“AI isn’t likely to turn into a lightning-rod culture-war issue anytime soon,” Politico concludes. “But if it does take on a greater partisan valence, as AIPI’s polling suggests is happening, it’ll inevitably be more difficult to have measured debates, or pass the bipartisan laws being floated” in Congress and the states right now.