Key Takeaways:
- While early chatbot litigation focused largely on IP disputes, a growing wave of lawsuits and regulatory actions alleging addictive use and physical harm has made companion chatbot safety a key concern.
- Chatbots of all kinds face a multifaceted compliance landscape, including privacy, cybersecurity, consumer protection, IP, AI transparency, content moderation, and industry-specific regulations.
- Developers and deployers of chatbots should be actively assessing, hardening, and documenting their systems and compliance measures now, before litigation or regulators force them to do so.
Companion Models Are a Primary Focus of Recent Litigation and Lawmaking
Although the first wave of chatbot litigation focused heavily on IP issues, including disputes over training data, copyright and related rights, a newer set of cases and regulatory initiatives has shifted attention toward safety.
Chatbots that act like human companions are now a legal focal point because they sit at the intersection of human longing for connection, mental health, and real-world consequences. There are well-publicized instances of users turning to chatbots for a wide range of human needs and developing deep, intense feelings for AI systems. There are also reports of users engaging in self-harm, high-risk conduct, and suicide following chatbot interactions.
Plaintiffs who have filed suit typically combine product liability theories, including alleged design defects and failures to warn, with negligence claims asserting a duty to implement reasonable safeguards. Some add claims of wrongful death, infliction of emotional distress, and unjust enrichment. Litigation is not expected to abate anytime soon, as plaintiffs’ law firms have begun advertising their AI suicide and self-harm practices more prominently.
Regulators and lawmakers are also involved. Kentucky’s Attorney General recently sued a well-known company in the AI companion space for allegedly failing to protect minors. And California and New York have both enacted companion chatbot laws.
California’s statute defines a “companion chatbot” broadly as an AI system with a natural language interface that provides adaptive, human-like responses to user inputs and can meet a user’s social needs. The definition includes a few exceptions related to customer service, business and professional tasks, video games, and speaker-and-voice-command interfaces. But the exceptions are narrow and do not substantially limit the core definition. For example, an AI-based character in a video game could be a companion chatbot if it can meet a user’s social needs by maintaining discussions on topics unrelated to the video game.
California requires companion chatbot operators to provide clear and conspicuous disclosures that the bot is not human when a reasonable person might think they are interacting with a human; maintain, publish, and operationalize suicide- and self-harm-prevention protocols; and implement heightened safeguards for minors, including periodic “take a break” reminders. Beginning July 1, 2027, they must also submit annual reports to the Office of Suicide Prevention about the company’s suicide prevention protocols and how many times it referred users to a crisis service provider.
New York’s statute defines an “AI companion” as an AI system designed to simulate a sustained human or human-like relationship by retaining information from prior interactions, asking unprompted emotion-based questions, and sustaining an ongoing dialogue about matters personal to the user. The focus on the design and specific features of the systems makes New York’s definition of “companion” chatbots narrower than California’s. New York requires operators to implement protocols to detect and address suicidal ideation or self-harm, including referrals to crisis services, and to provide clear and periodic disclosures that users are not communicating with a human.
In sum, the ability of companion chatbots to engage users in emotionally intense interactions has made them a recent flashpoint for how courts and regulators approach risks of conversational AI.
Companion and Non-Companion Chatbots Implicate Multiple Regulatory Frameworks
While early chatbot litigation has focused heavily on the safety of companion models, chatbots of all kinds implicate numerous legal issues. Below are examples of relevant legal considerations.
- Privacy: Some companies offer chatbots to assist with specific questions only, while others are more general purpose. That distinction matters because some US privacy laws require companies to use personal data only in ways consistent with users’ reasonable expectations, as shaped by the nature of the service and the company’s privacy notices. For instance, some consumers might not expect their interactions with a warranty bot to form the basis of cross-context behavioral advertising or surveillance pricing. Providing clear and complete disclosures about how chatbot inputs are used is one of many necessary compliance measures.
- Cybersecurity: Chatbots introduce new attack vectors because they are typically optimized for helpfulness and operate using probabilistic models. These features may be exploited to reveal sensitive company information or bypass safeguards. For example, a malicious user may use carefully crafted prompts to extract internal system details, abuse backend integrations, or generate content that facilitates fraud or account takeover. This makes secure chatbot design, access controls, some level of human involvement, and abuse detection essential.
- Consumer Protection: Companies need to ensure that their chatbots do what they are represented to do and do not mislead consumers about their capabilities, limitations, or the level of human versus AI involvement in the interaction. Companies should also be aware that erroneous chatbot outputs can create real-world obligations, such as by inventing discounts or misrepresenting products, services, or company policies. Companies must also guard against unlawful algorithmic discrimination (i.e., systems that result in disparate treatment or unjustified disparate impact on protected classes) and offer opt-outs from automated decisions as required under laws such as the California Consumer Privacy Act.
- IP: Chatbot operators must navigate a maze of IP issues, including in relation to training data, user-generated content, model-generated content, and IP licensed from business partners. To date, disputes over training data have attracted the most scrutiny, but the IP rights of all stakeholders must be evaluated holistically. Managing these issues requires attention to training data provenance, precise and internally consistent IP clauses with users and business partners, and system controls to minimize the likelihood of IP-infringing conduct.
- AI Transparency: Transparency takes various forms in the AI context. It can mean, for example, disclosing that users are interacting with AI, as contemplated by California law when there is a risk of deception. It can also mean publicly disclosing details about a chatbot operator’s training data, as California’s Training Data Transparency Act requires of developers of publicly available genAI systems. It might also encompass the detailed disclosures that frontier AI developers (i.e., developers of the most powerful AI models) must publish and submit to regulators under California’s Transparency in Frontier AI Act. Or it can mean complying with California’s AI Transparency Act, which will, as of August 2, 2026, require certain large generative AI providers to support detection tools and embed provenance markers identifying content as AI-generated. These examples make clear that AI transparency is not a single checkbox, but a layered set of design, disclosure, and governance obligations that chatbot operators need to plan for early.
- Content Moderation: Chatbots raise content issues that cut across platform safety, prohibitions against child sexual abuse material (CSAM) and other restricted content, and community expectations. Chatbot operators must think carefully about the type of content they want their systems to generate, while complying with mandatory legal regimes. For instance, the federal TAKE IT DOWN Act criminalizes the knowing distribution of nonconsensual intimate imagery, including AI-generated deepfakes, while the Texas Responsible AI Governance Act restricts AI systems that are designed to produce or simulate CSAM, deepfake sexual imagery, and other unlawful material. To comply, chatbot operators must implement content policies, technical filters, and reporting and takedown procedures.
- Industry-Specific Requirements: Chatbot operators in regulated industries must also be aware of specific regulations that apply to them. For example, Utah requires suppliers of “mental health chatbots” to make clear and conspicuous disclosures that users are interacting with AI, mandates the creation and filing of detailed governance policies, and imposes a number of privacy restrictions. California has also enacted several laws that govern the use of generative AI in the healthcare context, including a statute that applies to AI-generated patient communications, and a statute that applies to health care service plans, disability insurers and their third-party contractors, and licensed healthcare professionals.
Designing for Compliance in a Rapidly Evolving Chatbot Landscape
As a practical first step, chatbot operators should assess whether their system is designed, or likely in practice, to function as a “companion” model, since that classification can materially change the legal obligations and risk profile. But regardless of where a chatbot falls on that spectrum, companies now face a dense and overlapping set of requirements spanning safety, privacy, cybersecurity, consumer protection, IP, transparency, content moderation, and industry-specific regulation.
The previous section outlined examples of specific compliance measures chatbot companies must take. Overall, however, developers and deployers must take a systematic approach to assessing risks, hardening systems, and documenting controls and decision-making. Waiting for litigation or regulators to dictate priorities is likely to be more costly and more disruptive than building compliance and governance into chatbot design from the outset.