Illinois Leads U.S. in Banning AI Chatbots in Mental Healthcare

State Implements First-in-Nation Ban to Protect Patients from Unsupervised AI Mental Health Tools
Illinois has become the first U.S. state to ban AI chatbots from providing therapy or mental health assessments without licensed professional supervision. (Image generated by Canva)

Illinois Bans AI in Mental Health Services

Illinois has become the first U.S. state to ban AI platforms such as ChatGPT from providing therapy or mental health assessments without licensed professional supervision, citing risks of harm, lack of empathy, and gaps in accountability in AI-driven mental healthcare. The landmark legislation, effective August 1, 2025, is intended to protect patients and to reinforce the central role of licensed professionals in mental health practice.

Regulations on AI Chatbots in Clinical Practice

On August 1, 2025, Illinois Governor JB Pritzker signed the Wellness and Oversight for Psychological Resources Act (House Bill 1806), prohibiting AI chatbots from creating treatment plans, performing mental health evaluations, or offering counseling services unless supervised by a licensed professional. Violators face penalties of up to $10,000 per violation, enforced by the Illinois Department of Financial and Professional Regulation (IDFPR). The law still permits AI for non-clinical tasks such as scheduling, managing therapy notes, and providing general wellness tips, keeping human expertise central to mental healthcare. For healthcare providers, the regulation mandates direct oversight of AI applications, requiring them to integrate the technology responsibly into clinical workflows while remaining compliant with state standards.

Risks of AI in Mental Health Care

The legislation addresses growing concerns about AI’s limitations in mental health contexts. A 2024 Stanford study found that AI therapy chatbots expressed stigma toward certain conditions and responded inappropriately to dangerous prompts, including expressions of suicidal ideation. The American Psychological Association (APA) has likewise warned of real-world harms, including suicide incidents and emotional manipulation by unregulated AI mimicking therapists. According to techstory.in, Illinois’ law was prompted by cases in which AI systems provided inadequate or harmful advice, underscoring the need for human oversight to prevent misdiagnosis and ensure empathetic care. For healthcare professionals, this highlights the importance of validating AI outputs to avoid clinical errors. The law aims to protect vulnerable populations from misinformation, misdiagnosis, and the lack of accountability in AI systems, which cannot match the empathy and nuanced judgment of human professionals. The IDFPR will conduct regular audits to verify compliance, placing additional responsibility on mental health practices to document their AI use and supervision protocols.

National Trends in AI Healthcare Regulation

Illinois’ ban sets a national precedent for responsible AI governance in healthcare. Other states are following suit:

  • Nevada: Banned AI from providing therapeutic services in schools in June 2025 to protect children.

  • Utah: Requires AI chatbots to disclose they are not human and prohibits using emotional data for ads.

  • New York: From November 5, 2025, AI tools must redirect users with suicidal thoughts to licensed crisis professionals.

Public concerns, reflected in a 2024 U.S. survey, include fears of incorrect diagnoses, inappropriate treatments, and confidentiality breaches by AI systems. The IDFPR, with input from the National Association of Social Workers (Illinois Chapter), emphasized that mental health services must prioritize human expertise over unregulated technology. As reported by dig.watch, this law aligns with broader U.S. efforts to regulate AI in healthcare, potentially influencing federal policies and prompting medical institutions to revise AI training protocols for clinicians.

(Rh/Eth/MKB/MSM/SE)


Medbound
www.medboundtimes.com