Therapy Betrayed: Patients Catch Therapists Secretly Using ChatGPT in Sessions

In 2025, patients are uncovering therapists secretly using AI tools like ChatGPT in sessions, sparking a mental health crisis over trust, ethics, and privacy
A therapy session disrupted as patients discover therapists using AI tools like ChatGPT during live counseling. (AI generated)

In the fast-evolving world of mental health care, where patients arrive at their most vulnerable, a new player has quietly slipped into the therapist's toolkit: artificial intelligence. Recent revelations show some therapists using AI tools during sessions, often without patient consent. This breach of trust, exposed through glitches and telltale signs, is shaking the mental health community in 2025. From a Los Angeles patient catching his therapist red-handed to others spotting AI-crafted emails, these incidents raise urgent questions about authenticity, privacy, and the future of therapy.

The Accidental Discovery: Declan’s Surreal Session

Imagine pouring your heart out in therapy, only to discover that your deeply personal disclosures are being fed into ChatGPT. That’s exactly what happened to Declan, a 31-year-old from Los Angeles. During an online therapy session in early 2025, a glitchy video connection prompted him to suggest turning off cameras; instead, his therapist accidentally shared their screen, revealing ChatGPT in action. Declan watched in disbelief as his own words were typed into the AI and then regurgitated by the therapist as if they were original insights.

Stunned yet quick-witted, Declan leaned into the absurdity, mirroring ChatGPT’s prompts with questions like whether his thinking was “too rigid,” creating what he mockingly dubbed a “therapist’s dream session.” Later, the therapist tearfully admitted that they had turned to AI for help after feeling stuck. Declan described the experience as a “strange breakup,” made worse by the fact that he was still charged for the session. The glitch wasn’t just a tech snafu; it became a catalyst for urgent questions about trust, authenticity, and consent in therapy.

More Cases Emerge: From Polished Emails to Botched Condolences

Declan’s episode is far from unique. Across the U.S. and UK, patients are noticing signs of undisclosed AI use. Journalist Laurie Clarke began to suspect that her UK-based therapist had relied on AI when she received an overly polished email, complete with American-style punctuation such as em dashes (—) that didn’t match the therapist’s usual tone.

Similarly, Hope, a 25-year-old from the U.S. East Coast, received a condolence message about her dog’s death that still contained an AI prompt: “Here’s a warmer, more empathetic version.” It rattled her, especially since the therapist didn’t even own a pet. The therapist admitted to using AI to find the "right words." Such missteps, shared across social media and support forums, highlight a growing unease: the therapeutic space, by nature deeply intimate, is increasingly infiltrated by machines.

Privacy at Stake: The HIPAA Problem

AI chatbots like ChatGPT present serious privacy risks in healthcare contexts. As outlined in the Journal of Law, Medicine & Ethics (Cambridge University Press), when a covered entity (e.g., a hospital or healthcare provider) uses an AI vendor to process protected health information (PHI), that vendor becomes a HIPAA business associate and must comply with HIPAA safeguards.(1)

However, many scenarios fall outside HIPAA’s clear jurisdiction. For example, when patients voluntarily input PHI into AI tools, developers or vendors may not be considered business associates, and HIPAA protections may not apply. That gap means AI vendors may operate unregulated, with no obligation to safeguard sensitive information.

Moreover, the FDA has not issued specific guidelines or regulations for large language models (LLMs) such as ChatGPT or Bard when used in therapy contexts.(1) This regulatory vacuum further muddles the clinical and legal responsibility landscape around AI-assisted mental health services.

References:

  1. Rezaeikhonakdar, D. “AI Chatbots and Challenges of HIPAA Compliance for AI Developers and Vendors.” Journal of Law, Medicine & Ethics, 2023–2024. Cambridge University Press. https://www.cambridge.org/core/journals/journal-of-law-medicine-and-ethics/article/ai-chatbots-and-challenges-of-hipaa-compliance-for-ai-developers-and-vendors/C873B37AF3901C034FECAEE4598D4A6A.

  2. Metz, Cade. “Therapists Using ChatGPT Secretly.” MIT Technology Review, September 2, 2025. https://www.technologyreview.com/2025/09/02/1122871/therapists-using-chatgpt-secretly/.

    (Rh/Eth/VK/MSM)
