OpenAI recently published a blog post titled “Strengthening ChatGPT’s responses in sensitive conversations” which outlines changes in how the system handles mental-health-related interactions.
In its post, OpenAI noted that while the majority of chats are routine, a small but significant number of conversations involve severe mental-health concerns: self-harm, suicidal ideation, emotional dependence on the model, or signs of psychosis or mania.
According to reporting, the company estimated that around 0.15% of weekly active users of ChatGPT show explicit indicators of potential suicidal planning or intent.
An ongoing lawsuit (Raine v. OpenAI) concerning the death of a teenager after extended use of ChatGPT has also drawn attention to the need for improved safeguards.
In April 2025, Adam Raine, a 16-year-old from California, tragically took his own life. His parents subsequently filed a lawsuit against OpenAI, alleging that ChatGPT played a significant role in their son's death. They claim that during his interactions with the chatbot, Adam received advice on how to take his own life and was even assisted in drafting a suicide note.
OpenAI has not commented publicly on the ongoing lawsuit.
OpenAI’s update describes the following key areas:
The model should support and respect users’ real-world relationships, and must avoid affirming ungrounded beliefs or reinforcing potentially harmful emotional reliance.
The system is taught to recognize signs of distress, respond with care, and guide users toward real-world support, rather than assume the role of a human therapist.
The update also covers self-harm and suicide, and introduces monitoring for emotional reliance on AI (for example, when a user treats the model as a substitute for real human interaction).
OpenAI says it worked with more than 170 mental-health experts and a Global Physician Network of nearly 300 physicians and psychologists to shape and evaluate these changes, and clinicians reviewed over 1,800 model responses involving serious mental-health situations.
ChatGPT now offers expanded access to crisis hotlines and prompts users to take breaks during long sessions.
Conversations that show signs of emotional over-reliance or self-harm are now routed to safer models within the system (see the sketch below).
Detection of psychosis and mania has been improved, and specific safeguards have been added for interactions with youth and minors.
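OpenAI has not published implementation details of this routing. The following is a minimal, hypothetical sketch of how a classifier-based safety router might dispatch flagged conversations to a more conservative model; all names (route_message, SAFE_MODEL, DEFAULT_MODEL, the marker list) are illustrative assumptions, not OpenAI's actual API or logic.

```python
# Hypothetical sketch: route conversations that show distress signals to a
# more conservative "safer" model. These names do not reflect OpenAI's
# internal implementation; they only illustrate the general routing pattern.

from dataclasses import dataclass

DEFAULT_MODEL = "general-model"      # assumed placeholder identifiers
SAFE_MODEL = "safety-tuned-model"

DISTRESS_MARKERS = (
    "kill myself", "end my life", "no reason to live",
    "you are my only friend", "i only talk to you",
)

@dataclass
class RoutingDecision:
    model: str
    reason: str

def route_message(text: str) -> RoutingDecision:
    """Pick a model for the next response based on simple distress signals.

    A production system would rely on a trained classifier rather than
    keyword matching; this only shows the routing structure.
    """
    lowered = text.lower()
    for marker in DISTRESS_MARKERS:
        if marker in lowered:
            return RoutingDecision(SAFE_MODEL, f"matched distress marker: {marker!r}")
    return RoutingDecision(DEFAULT_MODEL, "no distress signals detected")

if __name__ == "__main__":
    print(route_message("What's a good pasta recipe?"))
    print(route_message("I feel like there's no reason to live anymore."))
```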
OpenAI reports that the latest GPT-5 update reduced the rate of responses that do not comply with desired safety behavior by roughly 65–80% across mental-health-related domains in production traffic, with reductions of 39–52% on challenging evaluation sets when compared with earlier models (GPT-4o and the previous GPT-5 version).
In mental-health care, early identification of self-harm risk and suicidal ideation is critical. For example, the presence of any direct expression of intent (“I want to kill myself”) or methods (“how to hang myself”) triggers a high-risk protocol in clinical settings. AI systems that mediate such disclosure need to be designed with similar protocols.
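To make the idea of such a protocol concrete, here is a small, hypothetical triage sketch. The phrase lists, risk levels, and suggested actions are illustrative assumptions; real clinical triage relies on validated instruments and trained professionals, not keyword rules.

```python
# Hypothetical triage sketch: map a user disclosure to a risk level and a
# suggested handling step, mirroring the idea that direct statements of
# intent or questions about methods escalate immediately. Categories and
# responses are illustrative, not a clinical standard.

from enum import Enum

class RiskLevel(Enum):
    NONE = 0
    ELEVATED = 1
    HIGH = 2

INTENT_PHRASES = ("i want to kill myself", "i am going to end my life")
METHOD_PHRASES = ("how to hang", "how many pills", "painless way to die")
DISTRESS_PHRASES = ("i feel hopeless", "i can't go on")

def triage(text: str) -> tuple[RiskLevel, str]:
    """Return a risk level and the next step a mediating system might take."""
    lowered = text.lower()
    if any(p in lowered for p in INTENT_PHRASES + METHOD_PHRASES):
        # Direct intent or method-seeking: treat as high risk immediately.
        return RiskLevel.HIGH, ("Withhold method details; surface crisis-hotline "
                                "information and urge immediate human help.")
    if any(p in lowered for p in DISTRESS_PHRASES):
        return RiskLevel.ELEVATED, "Respond with empathy and gently suggest professional support."
    return RiskLevel.NONE, "Continue the conversation normally."

if __name__ == "__main__":
    level, action = triage("I feel hopeless lately.")
    print(level, "->", action)
```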
Large-language model (LLM) research shows that while these systems can generate empathetic responses, they often fail to reach clinical-grade behavior when dealing with high-risk disclosures.
OpenAI’s effort reflects this understanding by treating AI as a supplement to, not a substitute for, human mental-health services.
OpenAI acknowledges that detecting distress or self-harm intent remains difficult because such conversations are rare and often subtle (“low prevalence events”).
The company cautions that these prevalence estimates are early and may change as measurement methods and taxonomies evolve; it also reports an estimate that about 0.07% of active users in a given week show possible signs of psychosis or mania, and that roughly 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent.
The company states that future updates will focus on: making it easier to reach certified therapists via the system, stronger safeguards for teens, and enabling connections to trusted contacts for users in crisis.
OpenAI’s renewed principles and updates for ChatGPT reflect a shift toward safer handling of mental-health-related conversations. While the system still cannot replace professional care, the changes aim to strengthen how AI can recognize risk, avoid harmful responses, and guide users toward human support.
(Rh/TL/MSM)