By Cecilia Kang
Lawmakers in California last month advanced about 30 new measures on artificial intelligence aimed at protecting consumers and jobs, one of the biggest efforts yet to regulate the new technology.
The bills seek the toughest restrictions in the nation on AI, which some technologists warn could kill entire categories of jobs, throw elections into chaos with disinformation and pose national security risks. The California proposals, many of which have gained broad support, include rules to prevent AI tools from discriminating in housing and health care services. They also aim to protect intellectual property and jobs.
California’s Legislature, which is expected to vote on the proposed laws by Aug. 31, has already helped shape U.S. tech consumer protections. The state passed a privacy law in 2020 that curbed the collection of user data, and in 2022 it passed a child safety law that created safeguards for those younger than 18.
“As California has seen with privacy, the federal government isn’t going to act, so we feel that it is critical that we step up in California and protect our own citizens,” said Rebecca Bauer-Kahan, a Democratic Assembly member who chairs the state Assembly’s Privacy and Consumer Protection Committee.
As federal lawmakers drag their feet on regulating AI, state legislators have stepped into the vacuum with a flurry of bills poised to become de facto regulations for all Americans. Tech laws like those in California frequently set precedent for the nation, in large part because lawmakers across the country know it can be challenging for companies to comply with a patchwork of laws across state lines.
State lawmakers across the country have proposed nearly 400 new laws on AI in recent months, according to the lobbying group TechNet. California leads the states with a total of 50 bills proposed, although that number has shrunk as the legislative session has proceeded.
Colorado recently enacted a comprehensive consumer protection law that requires AI companies to use “reasonable care” while developing the technology to avoid discrimination, among other issues. In March, the Tennessee Legislature passed the ELVIS Act (Ensuring Likeness Voice and Image Security Act), which protects musicians from having their voices and likenesses used in AI-generated content without their explicit consent.
It’s easier to pass legislation in many states than it is on the federal level, said Matt Perault, executive director of the Center on Technology Policy at the University of North Carolina at Chapel Hill. Forty states now have “trifecta” governments, in which both houses of the legislature and the governor’s office are run by the same party — the most since at least 1991.
“We’re still waiting to see what proposals actually become law, but the massive number of AI bills introduced in states like California shows just how interested lawmakers are in this topic,” he said.
And the state proposals are having a ripple effect globally, said Victoria Espinel, CEO of the Business Software Alliance, a lobbying group representing big software companies.
“Countries around the world are looking at these drafts for ideas that can influence their decisions on AI laws,” she said.
More than a year ago, a new wave of generative AI tools like OpenAI’s ChatGPT provoked regulatory concern as it became clear the technology had the potential to disrupt the global economy. U.S. lawmakers held several hearings to investigate the technology’s potential to replace workers, violate copyrights and even threaten human existence.
OpenAI CEO Sam Altman testified before Congress and called for federal regulations roughly a year ago. Soon after, Sundar Pichai, CEO of Google; Mark Zuckerberg, CEO of Meta; and Elon Musk, CEO of Tesla, gathered in Washington for an AI forum hosted by the Senate majority leader, Chuck Schumer, D-N.Y. The tech leaders warned of the risks their products presented and called for Congress to create guardrails. They also asked for support for domestic AI research to ensure the United States could maintain its lead in developing the technology.
At the time, Schumer and other U.S. lawmakers said they wouldn’t repeat past mistakes of failing to rein in emerging technology before it became harmful.
Last month, Schumer introduced an AI regulation road map that proposed $32 billion in investments but few specific guardrails on the technology in the near term. This year, federal lawmakers have introduced bills to create an agency to oversee AI regulations, proposals to clamp down on disinformation generated by AI and privacy laws for AI models.
But most tech policy experts say they don’t expect federal proposals to pass this year.
“Clearly there is a need for harmonized federal legislation,” said Michael Karanicolas, executive director of the Institute for Technology Law and Policy at UCLA.
State and global regulators have rushed to fill the gap. In March, the European Union adopted the AI Act, a law that curbs law enforcement’s use of tools that can discriminate, like facial recognition software.
The surge of state AI legislation has touched off a fierce lobbying effort by tech companies against the proposals. That effort is particularly pronounced in Sacramento, the California capital, where nearly every tech lobbying group has expanded its staff to lobby the Legislature.
The 30 bills that were passed out of either the Senate or Assembly will now go to various committees for further consideration before the Legislature ends its session later this summer. Democrats there control the Assembly, Senate and governor’s office.
“We’re in a unique position because we are the fifth-largest economy on the planet and where so many tech innovators are,” said Josh Lowenthal, an Assembly member and Democrat, who introduced a bill aimed at protecting young people online. “As a result, we are expected to be leaders, and we expect that of ourselves.”
The bill gaining the most traction requires safety tests of the most advanced AI models, such as OpenAI’s chatbot GPT-4 and image creator DALL-E, which can generate humanlike writing or eerily realistic images. The bill, by state Sen. Scott Wiener, a Democrat, also gives the state attorney general power to sue for consumer harms.
(The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems.)
On May 8, the California Chamber of Commerce and tech lobbying groups wrote a letter to appropriations committee members who were considering the bill. The letter described the proposal as “vague and impractical,” saying it would create “significant regulatory uncertainty” that discourages innovation.
Chamber of Progress, a tech trade group with lobbyists in California, has also criticized the bill. It issued a report last month that noted the state’s dependence on tech businesses and their tax revenue, which totals around $20 billion annually.
“Let’s not overregulate an industry that is located primarily in California, but doesn’t have to be, especially when we are talking about a budget deficit here,” said Dylan Hoffman, executive director for California and the Southwest for TechNet, in an interview.
Wiener said his safety-testing bill would likely be amended in the coming weeks to include provisions that support more transparency in AI development and to limit the required tests to the biggest systems, those that cost more than $100 million to develop. He stressed that many in the tech sector have supported the bill.
“I would prefer that Congress act, but I’m not optimistic they will,” he said.