The California State Assembly took a big step toward regulating AI on Wednesday night, passing SB 243 — a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users. The legislation passed with bipartisan support and now heads to the state Senate for a final vote on Friday.
If Governor Gavin Newsom signs the bill into law, it would take effect January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and hold companies legally accountable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots – which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user’s social needs – from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. It would require platforms to provide recurring alerts to users – every three hours for minors – reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika.
The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney’s fees.
SB 243 was introduced in January by state senators Steve Padilla and Josh Becker. If enacted, its new rules would take effect January 1, 2026, with the annual reporting requirements beginning July 1, 2027.
The bill gained momentum in the California legislature following t …