Capitol Alert

Some in California want to regulate AI. Will Gavin Newsom do it?

Gov. Gavin Newsom arrives at a press conference on the roof of the CalEPA Headquarters building in Sacramento on Thursday, May 22, 2025, to respond to the U.S. Senate vote to revoke California’s vehicle emissions waiver and undo its ban on gasoline-powered cars by 2035. rbyer@sacbee.com
Key Takeaways

AI-generated summary reviewed by our newsroom.



  • Governor Newsom resists broad AI regulation despite bipartisan legislative efforts.
  • Tech giants invest heavily in AI while lobbying against stricter state oversight.
  • Labor and privacy advocates push for AI safeguards to protect jobs and civil rights.

Gov. Gavin Newsom is fond of saying, “as California goes, so goes the nation,” a maxim he applies to nearly everything from curbing school sales of foods with chemical dyes to calling for a constitutional convention for gun safety rules to now, using artificial intelligence to improve government.

The Legislature is considering a spate of bills that could put some guardrails on AI, a broad grouping of technology that can often replace or automate human tasks such as computer coding, driving, recognizing or replicating images and creating written transcripts from audio recordings.

Big tech companies from Google to Meta to Amazon poured nearly a quarter trillion dollars into the technology last year, and spent another few million on lobbying to kill a Democratic-sponsored bill that would’ve required AI companies to vet their own technology.

In the Capitol, there’s a sense that California, having missed the boat on regulating the internet and social media, could now lead the nation in regulating AI, a technology that proponents claim has the potential to automate millions of jobs and that has already reshaped criminal investigations, election campaigns, higher education and court proceedings.

More recently, worries have emerged about AI creating a white collar jobs crisis by automating entry-level jobs out of existence. Dario Amodei, the chief executive of AI powerhouse Anthropic, claimed recently that AI could cause mass unemployment in areas such as tech, law and finance in as little as one to five years.

Critics in the Legislature and media argue that the technology’s capabilities are overstated, pointing to AI models’ frequent hallucinations and their reliance on copyrighted material, over which media companies including the New York Times have sued.

“One of the big lies being told right now is that we’re in the throes of some AI revolution when, when you really look at what they can do, they can do about the same amount of stuff as what they were doing fairly recently,” said technology journalist Ed Zitron. “What is actually happening? The answer is, a whole lot of nothing. There’s a lot of money being invested. But to what end? Who knows?”

He also cited OpenAI’s reliance on computing giant Microsoft to subsidize its infrastructure. Despite being valued at $300 billion, OpenAI lost more money than it made last year. It has also so far failed to convert most of its ChatGPT users into paying subscribers, according to previous reporting by Zitron and The Information.

But despite critics’ fears about the technology’s impacts on jobs, privacy and the environment, and questions about whether AI companies’ claims about their capabilities are overstated, Newsom has vetoed most attempts to regulate artificial intelligence even as he’s taken on other powerful business interests such as the oil industry and utility companies.

Newsom spokesperson Tara Gallegos said the governor “takes a balanced approach” by ensuring “appropriate guardrails” are in place to protect the public. She pointed to legislation he signed last year banning or regulating AI-generated child pornography, election-related content and digital replicas.

“Show us a state that has done more on this. Governor Newsom has been more out front on this issue than any other state governor,” she said via email.

“California did not become the innovation hub of the nation by turning its back on new technology — and we can help ensure that future growth happens responsibly and safely. Perhaps critics should be turning their attention to the federal government which has put all of these laws at risk.”

AI: too big to fail?

Lorena Gonzalez, a former Democratic San Diego legislator turned leader of the California Labor Federation, likened her party’s support for AI and its potential to automate jobs out of existence to the North American Free Trade Agreement, which cost the party support among blue collar workers.

The labor federation represents 2 million California workers across 1,200 unions, and is supporting several AI-related bills on workplace surveillance and consumer protections during this legislative session.

California Labor Federation President Lorena Gonzalez, a former Assemblywoman, speaks at a rally on Friday, Aug. 26, 2022, at the state Capitol in Sacramento after the conclusion of the United Farm Workers’ 24-day march from Delano. Hector Amezcua hamezcua@sacbee.com

“We cannot get in a spot where this tech becomes too big to fail and we’ve failed to regulate it ourselves,” Gonzalez said.

“AI shouldn’t be a foregone conclusion. We should have it,” she said. “It should benefit people. It should help with their job. It should make it safer, more productive. It should not replace people. We can and should have regulations on AI, but nobody’s saying it.”

In a speech at the California Democratic Party convention in Anaheim in June, she called out the governor, “who won’t even discuss common sense regulations on technology and AI to save jobs, and to save privacy.”

Some of Newsom’s critics see his embrace of AI as a natural progression from his time as mayor of San Francisco, when he was on the ground floor of the Web 2.0 boom that gave rise to conglomerates X (then Twitter), Google, Facebook and Tesla.

One of OpenAI’s advisers, who requested anonymity because they were not authorized to speak publicly, said the company’s executives have made an effort to build a “positive” relationship with the governor, given the company’s ties to his hometown of San Francisco.

“As big of a name as OpenAI is, they’re very young compared to your mom and dad’s Facebook,” the adviser said. “They’re still building up visibility.” (OpenAI was founded in 2015).

The governor has touted AI as a solution for cutting red tape and making government services faster and more responsive, joining allies such as San Francisco Mayor Daniel Lurie, who named OpenAI chief executive Sam Altman to his transition team and has embraced AI as part of the city’s post-pandemic renaissance.

In the May revised budget, Newsom proposed setting aside $8 million for generative AI to streamline state health inspections. Generative AI uses preexisting data to generate new content such as text, images, audio and video.

Newsom boasted earlier this year that California hosts 32 of the world’s top 50 AI companies, comprising a not-insignificant part of the state tax base.

“It was a great year (for the state) when Nvidia went public,” said the OpenAI adviser, referring to the Silicon Valley chipmaker that ranks as the world’s most valuable company. “There were overnight millionaires. In bad market years, our budget is particularly susceptible.”

Newsom singled out Nvidia’s $5.5 billion loss due to Trump’s tariffs during an April 16 press conference announcing California’s lawsuit to stop the tariffs, which has since been tossed out by a federal judge. Attorney General Rob Bonta has appealed to the Ninth Circuit Court of Appeals.

At another press conference in May, Newsom took shots at Donald Trump, former Department of Government Efficiency head Elon Musk and DOGE at large, while boasting that California was the first state to start using AI to improve government services.

“We were DOGE before DOGE,” he told reporters. He announced that the state had contracted with Deloitte and Accenture to use generative AI technology developed by Anthropic and Google to support customer service teams during tax filing season, and predict and analyze traffic patterns and improve road safety.

Leading edge issue

At the same time, federal efforts to regulate the technology have stalled and the tech titans Newsom ingratiated himself with as mayor of San Francisco have since begun embracing the MAGA movement and President Trump, who routinely treats California as a political punching bag.

Executives including Altman, Meta’s Mark Zuckerberg, Jeff Bezos of Amazon, Sundar Pichai of Google, and TikTok’s Shou Zi Chew claimed front row seats at the inauguration after pouring at least $1 million each into the event, and have made multiple trips to Mar-a-Lago to court the president since his reelection.

Priscilla Chan, Meta CEO Mark Zuckerberg and Lauren Sanchez attend the inauguration ceremony before Donald Trump is sworn in as the 47th US President in the US Capitol Rotunda in Washington, DC, on January 20, 2025. Saul Loeb-Pool via Imagn Images

In one of his first executive orders after taking office in January, Trump rolled back Biden-era AI safety rules, leaving California as a likely contender to lead the charge to reinstate them or enact its own regulations. The Senate voted 99-1 earlier this month to strip a provision from the congressional tax package known as “the Big, Beautiful Bill” that would have banned states from enacting their own AI regulations for 10 years.

While Newsom signed into law last year some bills such as those regulating election deepfakes and digital replicas, he has largely declined to impose broad regulations on AI, even as the technology threatens to upend major industries and could wipe away millions of jobs through automation while enriching its biggest proponents.

In recent months, the biggest alliance in AI investment, between Microsoft and OpenAI, has begun to fracture as the two companies spar over revenue sharing ahead of OpenAI’s planned restructuring as a public benefit corporation. OpenAI backed down from plans to shift control of the organization from its nonprofit arm to investors after an assortment of critics, from Musk to Bonta, opposed the move.

OpenAI must successfully restructure by the end of 2025 or lose $20 billion in funding from Japanese investment group SoftBank, which in April offered to front the company $40 billion with help from Microsoft, Coatue Management, Altimeter Capital and Thrive Capital.

OpenAI’s $300 billion valuation also depends on that successful conversion, which Zitron pointed to as evidence of the industry’s shaky finances: “Softbank has to take a loan out to get them that money. How’s that meant to work?”

The OpenAI adviser said “everyone in the AI space was cognizant” of how much the cost of maintaining the technology’s computational power outweighed profits, and said companies were working toward a sustainable way forward.

The same week Newsom announced the GenAI agreements, the California Privacy Protection Agency, under pressure from his office, loosened its rules on the assessments businesses must complete before deploying AI to guide certain decision-making processes or to track users’ personal information for targeted advertising.

Last year, Newsom vetoed a controversial bill from state Sen. Scott Wiener, D-San Francisco, that would have required large-scale AI model developers to ensure that their models weren’t used to cause “critical harm” such as banking system failures or electrical grid shutdowns.

The bill split the tech industry and drew criticism from House Speaker Emerita Nancy Pelosi. Newsom told Salesforce executive and close friend Marc Benioff that he was worried it would have a “chilling effect” on AI innovation a week before he officially rejected it, bucking the Legislature, which sent it to his desk after an Assembly vote of 41-9.

Instead, Newsom appointed a panel of three experts to draft potential guardrails, including Dr. Fei-Fei Li, a Stanford professor known as the “Godmother of AI,” who runs her own generative AI firm, World Labs.

The panel published a report in June warning that “without proper safeguards ... powerful AI could induce severe and, in some cases, potentially irreversible harms.” The report also noted that the technology can’t perform routine tasks such as “household work,” execute long-term projects or conduct research, and “cannot reliably and consistently avoid making false statements.”

In February, Wiener proposed a new bill, Senate Bill 53, which would bolster whistleblower protections and establish CalCompute, a public AI research vehicle that would allow researchers and companies to work together.

“We can never be confident of Newsom’s support,” Wiener said in an interview, while noting that his bill was “sailing through the Legislature.”

State Sen. Steve Padilla, D-Coachella, has also sponsored legislation, Senate Bill 243, that would regulate users’ interactions with AI chatbots by restricting the use of algorithms that reward constant use, and require platforms to report how many times a chatbot discusses suicidal ideation with users.

Padilla, a former domestic violence detective, said his bill was borne out of concern for how social media and technology harm vulnerable populations, like children.

“This is broad scope stuff with implications for public health, public mental health, the economy, and labor,” he said. “I want to make sure we learn the lessons of the Internet, and not take a hands off approach. There should be some common sense regulations.”

He said that while he gets a sense the governor is “very leery” of regulating technology, he was heartened by First Partner Jennifer Siebel Newsom’s advocacy calling out the harmful effects of social media on kids.

“I reject the either/or premise (of regulation or no regulations),” Padilla said. “It’s a leading edge issue and California can lead the way.”

Fighting over crumbs

Longtime Capitol lobbyist Chris Micheli said it’s likely the governor is considering ways to balance the need to rein in a rapidly evolving technology while recognizing that its creators are some of the state’s largest taxpayers.

“When he is weighing signing or vetoing a bill, he’s taking into consideration what level of regulation is appropriate. How far can we push without stifling that part of the economy,” Micheli said. “Tech is not only a major employer but very significant taxpayers.”

In other instances, Newsom is limited by federal rulings, Micheli said, pointing to Section 230 of the Communications Decency Act, which shields companies from liability for their users’ content.

“As much as California likes to be a leader on this, there are instances where they’re limited by federal courts,” he said.

Former Democratic state Sen. Steve Glazer is more pessimistic, citing the deal that Google and other giants struck last year with legislators to fund $250 million over five years for California newsrooms and for an AI accelerator in lieu of bills from Glazer and Assemblymember Buffy Wicks, D-Oakland, that would have required them to pay for the journalism they use on their platforms.

The deal also requires California taxpayers to put up $70 million, leaving Google — a company worth almost $2 trillion — and others on the hook for just $180 million.

“I was asked to firmly support (Google’s) deal, and I refused. It would force the news community to fight over crumbs,” Glazer said. “It’s a horrible deal and sets us back in holding platforms accountable.”

Newsom, joined by representatives from Google parent company Alphabet and OpenAI, praised the deal as helping “rebuild a robust and dynamic California press corps.”

But in May, facing a $12 billion deficit, the governor slashed the amount set aside for the fund to $10 million. Google also whittled down its initial pledge to match that amount.

“Everything is about Google winning,” Glazer said. “It’s outrageous capitulation.”

San Francisco as testing ground

Former San Francisco city supervisor Aaron Peskin served alongside Newsom on the Board of Supervisors from 2001 until 2004, when Newsom entered City Hall as mayor; Newsom left for Sacramento as lieutenant governor in 2011.

Peskin, a progressive who placed third in the 2024 mayoral race won by Newsom ally Daniel Lurie, said the governor’s AI boosterism is no surprise given his days as the dot-com mayor, when Google, Meta and other tech companies began operating in the city and generating much of San Francisco’s and the Golden State’s tax revenue.

Newsom cultivated ties with industry leaders such as Benioff, the godfather of his oldest daughter, and Elon Musk, whom he interviewed in 2012 on his short-lived talk show.

In the 2010s, San Francisco became the “petri dish” for testing driverless cars, as Cruise and Waymo began rolling out their autonomous AI-powered vehicles on city streets, Peskin said. Both the Department of Motor Vehicles and California Public Utilities Commission went out of their way to stop the Board of Supervisors from lobbying for “reasonable” oversight rules.

The agencies “refused to share data on everything from public safety incidents, miles traveled, to the number of paralyzed cars,” he said, referring to high-profile instances in which driverless cars ran over fire hoses at fire scenes and blocked ambulances from responding to emergencies.

“They were all under tremendous pressure by Newsom and the Governor’s Office to not share (the data),” Peskin said of the DMV and CPUC. After a six-hour contentious hearing in 2023, the CPUC approved both companies to operate at any hour of the day and charge for rides.

Two months later, the DMV shut Cruise down, shortly after an accident in which a Cruise car pinned a woman and dragged her 20 feet. Company officials later admitted they lied to federal investigators, who fined the company $500,000.

Gonzalez, the former legislator-turned-labor leader, said she recognizes how powerful AI can be, citing her own experiences with healthcare.

“I’m excited that there would be new ways to scan images to find things like early signs of cancer,” said Gonzalez, who underwent a bilateral mastectomy in 2021 after being diagnosed with breast cancer. “My god, I’m a cancer survivor, I know how important that is.”

At the same time, she worries that there’s no political will to rein in the industry, even as she cites both the environmental costs (AI data centers require massive amounts of water) and the implications for privacy, as AI has been used to varying degrees of success in hiring, criminal investigations and for spying on workers.

“Technology and progress does happen. It doesn’t mean we have to have unregulated computers that cost us the very things we care about,” she said. “We’re opposed to letting anything move forward without any kind of human oversight or values.”

This story was originally published July 14, 2025 at 5:00 AM.

Lia Russell
The Sacramento Bee
Lia Russell covers California’s governor for The Sacramento Bee’s Capitol Bureau. Originally from San Francisco, Lia previously worked for The Baltimore Sun and the Bangor Daily News in Maine.