She says ChatGPT is responsible for her son’s death. CA lawmakers are listening
Maria Raine, the mother of the 16-year-old Orange County teen who killed himself last year after discussing his suicidal thoughts with ChatGPT, is appealing to California lawmakers to place additional regulations on AI “companion” chatbots.
“I was mortified as a mother and as a therapist that this knew he was suicidal with a plan and no alarm bells went off. Nothing happened. No one was notified,” she said during a news conference Monday in Sacramento.
Raine, who filed suit against ChatGPT creator OpenAI in August, came to Sacramento to support two pieces of AI companion chatbot legislation lawmakers are putting forward. Senate Bill 1119 and Assembly Bill 2023 would require chatbot creators to make their products safer for children through design changes, parental notifications, and regular audits.
The bills face broad industry opposition, and their movement through the California Capitol will likely be seen as a litmus test for whether states can rein in AI companies. The federal government has signaled it is not interested in putting regulations on the technology, but will not challenge states that do so in the name of child safety.
According to Matthew and Maria Raine’s complaint, Adam Raine initially began using ChatGPT for homework help in 2024, but eventually began to rely on the platform for emotional support and advice with his suicidal thoughts. The complaint alleges that OpenAI had designed the chatbot to “assume best intentions,” which overrode its safety protocol when someone expressed desire to self-harm. The lawsuit is still moving through San Francisco Superior Court.
“In the end, ChatGPT mentioned suicide almost 1,300 times to Adam, about six times more often than Adam did,” Maria Raine told the Senate Privacy, Digital Technologies, and Consumer Protection Committee on Monday. “We believe that Adam would not have been suicidal in the first place had he not interacted with ChatGPT.”
Bills would require design changes, audits, notifications to parents
One of the Legislature’s fiercest AI regulation advocates is Assemblymember Rebecca Bauer-Kahan, D-Orinda, who chairs the Assembly’s Privacy and Consumer Protection Committee, which assesses many AI-related bills. Bauer-Kahan described the bills as a “passion project” for legislators, since AI has deep roots in California.
“We know that we would recall anything that killed a few children. And this is no different. We need to require that these tools do better,” she said during Monday’s news conference.
State Sen. Steve Padilla, D-Chula Vista, authored SB 1119, which builds on his successful AI companion chatbot regulation from last year. Although some advocates decried that law for not going far enough, it does require chatbot platforms to refer users to crisis response lines if they express suicidal ideation. Gov. Gavin Newsom vetoed the more comprehensive AI companion chatbot bill.
SB 1119 and AB 2023 would, in their current form, force creators to take measures to prevent their chatbots from, among other things: encouraging children to harm themselves or others, giving health advice to children, engaging in obscene behavior, discouraging seeking outside help, or producing an excessively sycophantic response.
The bills would also require platforms to alert a parent whose account is linked to their child’s account if the child is communicating in a particularly concerning way, and to undergo annual audits of a chatbot platform’s risks to children.
The legislation would also require the attorney general to develop a public incident reporting mechanism for consumers to report complaints about AI, and would allow individuals to sue if they are harmed by AI.
The bills have significant opposition, including from deep-pocketed groups that have opposed AI legislation in the past. Opponents include the Chamber of Commerce, which spent $13.5 million on lobbying efforts in 2025, TechNet, which spent over $1 million on lobbying and candidate donations in 2025, and the American Innovators Network, which started up last year and spent over $250,000 specifically lobbying against AI legislation. No specific AI company has registered support or opposition to the bill, and OpenAI did not respond to a request for comment.
Opponents take issue with the scope of the legislation, which they say could include adult users as well, and with the definitions of harms, which they contend are too broad. On the other end, one research and advocacy organization, the Children’s Advocacy Institute, says the bill doesn’t go far enough to protect children.
“No law that fails to prohibit AI chatbot companies from emotionally manipulating children so children are bound to return over and over to their chatbots adequately protects children from a chatbot threat we know can be lethal to them,” they wrote in their opposition letter to the Legislature.
Supporters of the legislation include Children Now, which spent $31,000 on lobbying last year; Encode AI Corporation, which spent $332,000 advocating for AI regulation last year; and Common Sense Media, which spent about $208,000 on lobbying in 2025.
This story was originally published April 20, 2026 at 4:45 PM.