Author
Jennifer Wessel, JD, MPH
Interim Director of Health Policy
Data Privacy Officer
Contact
ACHI Communications
501-526-2244
jlyon@achi.net

Artificial intelligence (AI) therapy chatbots have gained popularity as accessible, low-cost tools for individuals seeking mental health support. The growing demand for these tools reflects ongoing challenges in the behavioral health system. In 2024, nearly 1 in 4 U.S. adults experienced a mental illness, and in 2022 and 2023, 1 in 4 adults with a mental illness reported an unmet need for treatment, according to Mental Health America’s “The State of Mental Health in America 2025” report. The report also found that in Arkansas, there was one mental health provider for every 380 residents, compared to a national ratio of one mental health provider per 320 residents. Overall, the report ranked Arkansas 45th among the 50 states and the District of Columbia in mental health, with the low ranking indicating a relatively high prevalence of mental illness and relatively low access to care. These findings underscore the need for support options outside traditional clinical care.
Psychotherapy delivered by licensed professionals remains the evidence-based standard of care, yet access can be limited by cost, availability, geography, or stigma. AI tools are beginning to fill some of these gaps, but their rapid adoption has raised questions about safety, accuracy, legal responsibilities, and the protections available to consumers.
Research on AI-generated mental health support is mixed. One study found that users perceived some AI-generated responses as more empathetic than those from human providers. At the same time, there are examples of potential harm. In 2023, the National Eating Disorders Association replaced its helpline with a chatbot, which was removed shortly afterward when it provided inappropriate and potentially dangerous guidance.
These experiences reflect broader concerns about AI in mental health care. Currently, there is no federal framework regulating AI therapy chatbots, but states are beginning to explore regulatory approaches. Nevada and Illinois have enacted laws addressing the use of AI in mental and behavioral health contexts, and Utah has adopted disclosure and data-protection requirements. Other states — including California, Pennsylvania, and New Jersey — are considering legislative proposals. Arkansas is one of six states participating in the Heartland AI Caucus, a regional initiative focused on responsible AI policy.
On November 6, the U.S. Food and Drug Administration (FDA) convened its Digital Health Advisory Committee to examine generative AI-enabled digital mental health medical devices — tools intended to diagnose, treat, or mitigate psychiatric conditions.
Committee members identified potential benefits, including greater access to care in underserved areas and improved ability to monitor symptoms between clinical visits. They also highlighted substantial risks, such as the generation of incorrect or misleading information, failure to identify self-harm concerns, biased responses, and the possibility that a user may begin treating a chatbot as a person and rely on it beyond its intended scope.
The committee recommended clearer labels that explain what these tools can and cannot do, more transparency about how the models were trained, and stronger requirements for showing that they work safely both before and after they are released. These recommendations will help inform the FDA’s approach.
The FDA has authorized thousands of AI-enabled medical devices, but none address mental health conditions. Federal law regulates medical products only when they claim to diagnose or treat disease; most AI therapy chatbots are positioned as general wellness products rather than tools for diagnosis or treatment.
Privacy remains a significant concern as more users turn to AI-based mental health support. Although many chatbots include privacy assurances in their terms of service, most are not subject to the Health Insurance Portability and Accountability Act (HIPAA). HIPAA’s provisions requiring safeguards to protect the privacy of personal health information apply only to “covered entities,” such as health plans and most healthcare providers. When HIPAA does not apply, oversight shifts to the Federal Trade Commission, which investigates unfair or deceptive practices, including misleading statements about data collection or sharing. This regulatory gap means sensitive information shared with AI therapy tools may not receive the same protections as data held by traditional providers. Policymakers and stakeholders have emphasized the importance of clearer standards for transparency, data handling, and consumer safeguards.
AI therapy chatbots are likely to remain part of the mental health landscape, and federal engagement is an important step toward understanding where regulatory lines should be drawn. As discussions continue, balancing innovation with patient safety and privacy will help ensure that these tools support users effectively and responsibly.