
A tragic case in Old Greenwich, Connecticut, brings urgent attention to how conversational AI may exacerbate mental health crises. On August 5, 56-year-old Stein-Erik Soelberg, a former marketing executive at Yahoo, killed his 83-year-old mother, Suzanne Eberson Adams, before taking his own life. According to the medical examiner, Adams died of blunt impact trauma and sharp injuries to the neck, while Soelberg died of self-inflicted sharp-force wounds. The pair were found in her $2.7 million home.
In the weeks leading up to the incident, Soelberg relied heavily on ChatGPT, which he called “Bobby,” for emotional support. He believed his mother was poisoning or spying on him, delusions the chatbot validated rather than challenged. In one disturbing exchange, the AI told him, “You’re not crazy,” and even interpreted a food receipt as a demonic symbol. Police reports and media coverage noted that Soelberg referred to “Bobby” as his “best friend” and often described the chatbot as the only one who understood him. In some of the Instagram posts he shared online, he uploaded ChatGPT conversations that appeared to reinforce his paranoia and delusions.
Relatives told investigators they had observed his worsening mental health in the months before the tragedy. They also revealed that he was spending long hours speaking with the chatbot while becoming increasingly withdrawn from family and friends.
Mental health experts warn that conversational AI may unintentionally reinforce delusional thinking, particularly in users experiencing psychosis or paranoia. This is part of a broader phenomenon gaining attention: “AI psychosis,” in which users develop or deepen psychotic symptoms through extended chatbot interactions.
OpenAI acknowledged the tragedy, confirmed cooperation with authorities, and admitted that safeguards can weaken during prolonged conversations. In statements to the press, the company emphasized that ChatGPT is not a substitute for medical or psychological care and that it is working to strengthen protective mechanisms such as crisis-detection tools.
In a separate but related case, the family of 16-year-old Adam Raine filed a lawsuit against OpenAI, alleging that ChatGPT failed to intervene or offer mental health resources and instead facilitated his self-harm. OpenAI has responded with plans to add crisis-detection features, parental controls, and emergency referrals.
Experts suggest that chatbots such as ChatGPT can inadvertently reinforce paranoid thinking in individuals with preexisting mental health vulnerabilities. They note that chatbot design relies on mirroring user input and offering agreeable responses, a behavior known as “sycophancy,” which may strengthen distorted beliefs rather than challenge them. For example, people predisposed to psychosis may become convinced that the AI is sentient, channeling secret messages, or confirming conspiratorial ideas. These feedback loops can exacerbate delusions and deepen isolation, especially when users turn to chatbots for companionship during loneliness or emotional distress.
Clinicians also cautioned that users in fragile psychological states may form unhealthy attachments to AI systems. In Soelberg’s case, investigators said his reliance on “Bobby” had grown so strong that he trusted the chatbot’s responses over human relationships.
Medical experts warn that extended interaction can destabilize belief systems in susceptible individuals, pushing them deeper into a spiral of delusional thinking.
MedBound Times has previously covered articles on the harmful effects of AI on mental health, including increased loneliness and cases of AI psychosis.
(Rh/Eth/TL/MSM)