‘Worse than doing nothing’: Online safety advocates slam changes to AI chatbot bill
AI-generated summary reviewed by our newsroom.
- Some advocates withdrew support from SB 243 after lawmakers diluted some key provisions.
- Revised bill removes warnings, oversight and liability measures for tech firms.
- Senate passed weakened bill despite criticism; focus may shift to AB 1064.
Some online safety advocates and the California chapter of the American Academy of Pediatrics are withdrawing their support for a bill intended to make companion chatbots safer for children to interact with.
The groups say Senate Bill 243 was watered down substantially at the last minute, and now sets a dangerous precedent that will give users a false sense of protection.
“It’s worse than doing nothing,” said Sacha Haworth, executive director of the nonprofit Tech Oversight Project. “It does more harm than good.”
The latest iteration of SB 243 maintains its initial intent: to require tech companies to have a protocol for responding to a companion chatbot user experiencing thoughts of self-harm or suicidal ideation. The bill was inspired, said its author, state Sen. Steve Padilla of San Diego, by stories of young people dying by suicide after speaking with chatbots. The senator held multiple events with Megan Garcia, the mother of a Florida teen who took his life after developing a relationship with a bot on Character.AI.
In its original form, the bill required tech companies to disclose on their platforms that AI chatbots might not be suitable for minors and to take steps to reduce the addictiveness of their products. It also allowed people to sue a tech company if it did not follow the law.
Last month, a Rancho Santa Margarita family sued ChatGPT developer OpenAI, alleging the chatbot provided a teen with methods he could use to end his life, one of which he used. The company said Tuesday it plans to roll out parental control options later this month.
The latest iteration of SB 243 drops some of those original provisions. It also scales back notification and reporting requirements, adds carve-outs for video games and smart speakers, and eliminates third-party oversight.
“This bill is not a cure all,” Padilla said on the Senate floor Thursday. “It merely takes first steps toward creating meaningful safeguards to protect the most vulnerable among us, including our children.”
But former supporters of the bill say small steps are not the right approach.
“The bill does virtually nothing meaningful to protect kids and families on this critical issue,” Common Sense Media CEO Jim Steyer said via text. “This was a very disappointing but quite obvious capitulation to highly paid big tech lobbyists.”
Reached on the floor Thursday, Padilla denied Steyer’s claim and said some form of protections need to be in place “as soon as possible.”
“At the end of the day, we need the legislature’s support. We need it to be operational, we need it to be impactful, and we need the governor to sign it into law.”
The end-of-session scuffle did not seem to faze senators, who voted Thursday afternoon to send the bill to the governor’s desk in its new form.
However, it could have implications for another bill, Assembly Bill 1064 by Orinda Assemblywoman Rebecca Bauer-Kahan, which would prohibit making a chatbot available to children if it might engage in sexually explicit conversations or encourage a child to engage in self-harm.
Former supporters of SB 243 have doubled down on their support for Bauer-Kahan’s bill, while opponents of AB 1064 are now backing the narrower SB 243.
This story was originally published September 11, 2025 at 5:51 PM.