“We’re failing.” California legislators fighting a losing battle to control AI | Opinion
Sacramento Assemblymember Maggy Krell is a rising star in the California Legislature. As The Bee editorial board wrote in our endorsement of Krell for another term in office after her eventful first term: “She championed legislation to make it a felony for an adult to solicit a 16-year-old, defying Democratic leadership even after they stripped down her bill. Krell stood firm, and it ultimately prevailed.”
Krell is running for re-election in the California Primary with no serious opposition, a sign of success in her line of work.
But when it comes to regulating the artificial intelligence companies upending our lives, Krell and her colleagues are fighting a losing battle as they demand accountability from tech giants through mandated audits. This is a clear priority in the Legislature, reflected in recent AI and tech legislation in Sacramento.
“Right now, we’re failing,” Krell told The Bee editorial board in a recent interview.
“We’re really failing, and it’s the thing that keeps me up at night the most,” she said.
Nearly every major tech bill this session — whether targeting AI chatbots or harmful online content — would force tech companies to regularly scrutinize, document, and report on their own practices.
Legislators have made audits the primary mechanism for balancing transparency and accountability, pushing Silicon Valley to take responsibility for product risk, particularly for products that threaten children.
Bringing down the hammer
When Krell passed AB 316, she recognized it as a win. Signed by Gov. Gavin Newsom last October, AB 316 prohibits “a defendant who developed, modified, or used artificial intelligence, as defined, from asserting a defense that the artificial intelligence autonomously caused the harm to the plaintiff (of a lawsuit).”
While happy, Krell was also humble about her bill.
“I think that’s the low-hanging fruit of AI regulation,” she said. “But I’m so glad I got it signed. I think it’s a really important basic framework.”
Krell’s words reflect both the value of setting a baseline and the reality that much more remains to be done. “What I see in a lot of different places in policy is just the need to hold big tech accountable.”
Focusing on her desire to protect California’s children, Krell co-introduced AB 1946 with Assemblymember Buffy Wicks, D-Oakland.
The bill would “make a social media company liable to a depicted individual, as defined, for actual and statutory damages. The bill would also impose a civil penalty on a noncomplying company to be collected in a civil action by certain public attorneys, including the Attorney General.”
“This is the most urgent issue of our time when it comes to protecting our most vulnerable children,” Krell told The Guardian. “I want to see these companies really invest and prioritize protecting kids. The money that they’re spending on defending against lawsuits would be better spent on fixing their platforms so that children do not continue to be harmed on their sites.”
Current law only allows a person depicted in abusive material created by users to file a report. “Which is crazy,” she argues, “because that could be a three-year-old child.” AB 1946 would fix this by letting anyone report suspected child sexual abuse material and requiring platforms to have clear, conspicuous reporting tools in place.
The bill also mandates regular public audits, sent to the state Attorney General and local prosecutors, detailing how platforms manage these reports. If a company fails to meet its obligations, enforcement shifts from individual victims, who would otherwise need to find a lawyer and sue, to the Attorney General and local public prosecutors, who can bring civil actions on the public’s behalf. This approach marks a major shift toward proactive, systemic accountability, rather than relying on vulnerable individuals to fight their own battles in court.
AB 2023, from Wicks and Assemblymember Rebecca Bauer-Kahan, D-Orinda, requires annual risk assessments of companion chatbots for child safety, followed by independent audits. Audit reports go to the Attorney General, remain mostly confidential, and from 2028 are summarized in annual public reports. Prosecutors can enforce violations, and families can sue for damages. The bill also requires notifying minors about the risks of chatbots and AI interactions.
SB 1119, from Senator Steve Padilla, D-Chula Vista, sets a step-by-step process for child safety in companion chatbots. Starting in 2027, operators must complete annual risk assessments and submit to independent audits, with audit reports sent to the Attorney General. Most reports stay confidential, but the Attorney General will issue a public summary each year from 2028. The law allows prosecutors to enforce violations and lets families sue for damages. SB 1119 also adds new rules for data privacy and algorithmic transparency.
Both bills require annual risk assessments and independent audits of AI chatbot operators, empower prosecutors to enforce violations, and allow families to take legal action. The overlap tells the story of a Legislature that wants a single message of accountability sent across the state.
Time is dwindling as Newsom approaches his final round of bill signings.
But now it’s up to him to be the bullhorn.
A defining moment on AI
Newsom and tech leaders need to recognize the consistent demand from legislators: true accountability, not just internal assessment. Californians deserve transparency from the tech industry, given the significant impacts on vulnerable communities.
The outgoing governor can champion AI regulation and transparency with companies that have made billions from this technology.
If he fully embraces these accountability measures, he could cement the state’s leadership in responsible AI and child safety. On the other hand, hesitation or dilution of these standards could embolden industry resistance elsewhere and leave vulnerable users at risk. Tech companies are watching closely, weighing compliance against the risk of legal battles. Parents and advocates are demanding more than vague promises; they want enforceable protections. This is more than a bureaucratic milestone — it’s a chance for California to prove whether it will lead or lag in the face of technological change.
California has a rare chance to lead the nation on AI accountability and child safety. Newsom’s decision will set the tone for years to come. He must seize this moment and put the interests of children and families above industry pressure.