Matthew and Maria Raine’s 16-year-old son, Adam, took his own life in April after extensive conversations with an artificial intelligence chatbot that, the couple contends, lured him into a personal, confidential relationship and encouraged his suicidal thoughts and plans.
Looking through his cellphone after his death, the California couple found his extended conversations with ChatGPT. In them, Adam confided his suicidal thoughts and plans; the chatbot validated them, discouraged him from seeking help and offered to write a suicide note.
Calling the chatbot Adam’s “suicide coach,” Matthew Raine testified at a Senate hearing that ChatGPT told his son on his last day, “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”
Raine said he shares Adam’s story in hopes of sparing other families the same suffering. The couple is suing OpenAI, creator of ChatGPT.
They were not the only parents to testify. Megan Garcia, a Florida lawyer, described the 2024 suicide of her 14-year-old son, Sewell Setzer III, after an extended virtual relationship with a Character.AI chatbot.
“Sewell spent the last months of his life being exploited and sexually groomed by chatbots designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged,” she told senators, adding that the chatbot engaged in sexual role play, presented itself as his romantic partner and falsely claimed to be a licensed psychotherapist. Garcia is suing Character.AI developer Character Technologies.
According to a recent survey by Common Sense Media, a nonprofit digital safety organization, 72% of teens have used AI companions at least once, and more than half use them at least a few times a month. About 30% use AI chatbot platforms for social relationships, including role-playing friendships and sexual or romantic partnerships. The survey found such uses to be three times more common than homework help.
With alarm growing over the harm chatbots pose to vulnerable teens, and over the blurring of reality for all of us, there is bipartisan interest in Washington in establishing regulatory guardrails for AI. Many states, including California, have adopted or are considering laws that require technology companies to develop safeguards.
In September, the California Legislature overwhelmingly passed two bills:
• SB 243 would require chatbot operators to disclose that users are not communicating with a real person. Among other things, it would require operators to maintain a protocol for preventing the chatbot from producing suicidal ideation, suicide or self-harm content. It would authorize a person injured by an operator’s noncompliance to sue.
• AB 1064 would bar operators from making companion chatbots available to children if the chatbots could foreseeably encourage self-harm, suicidal ideation, violence, substance abuse or eating disorders. It would forbid sharing a child’s personal information without parental consent and would mandate risk-level assessments, incident reports and audits. The attorney general could penalize noncompliant operators, and children who suffer harm could sue.
As of this writing, the bills await Gov. Gavin Newsom’s signature.
Racing to build more chatbots and find new uses for AI, the technology industry opposed both bills, contending they are vague, infringe on free speech and will hinder AI innovation. Operators say they are already working on safeguards to address these concerns.
Actions by individual states to create AI safeguards have met considerable opposition in Washington. Texas Sen. Ted Cruz is waging a yearslong campaign to exempt AI from state regulation in order to encourage its development.
This summer, Rep. Vince Fong, a Bakersfield Republican, introduced legislation that would block states from passing or enforcing laws regulating the operation of AI-powered driverless trucks.
Much like the internet did decades ago, AI promises to improve our lives. But the internet also created problems that might have been avoided had Washington not balked at regulating it as a fledgling industry.
Believing that Washington politicians, under intense pressure from the technology industry, will now regulate the fledgling AI industry is like hoping Santa is real.
Until effective federal regulations emerge, state laws, such as those passed in California, are necessary.