AI and Teen Mental Health
- Leana Sung
- Aug 4
- 4 min read
- Updated: Aug 28

Currently, there is a large and unmet need for mental health support in the US. An estimated 49.5% of teens have experienced a mental health disorder at some point in their lives, and 17% of high schools don’t have a counselor. The lack of adequate services is especially acute in low-income and rural areas. As a result, there is growing interest in AI as a way to fill the gap and provide needed assistance. At the same time, policymakers are rushing to catch legislation up with the new realities of AI, while many experts sound notes of caution.
How Could AI Help?
One way AI could be used as a tool in mental health services is with administrative tasks. Currently, 61% of physicians report feeling burnt out, and doctors spend too much of their time on medical records instead of engaging with patients. Some health systems in the US have started to use AI to reduce that burden. For example, AI can record notes in patients' medical records during visits, allowing for more time with the patient. Other potential uses include automated appointment scheduling and reminders, or summarizing health records.
In schools, counselors are in short supply. While the American School Counselor Association recommends a ratio of at least one counselor per 250 students, on average there is only one counselor per 376 students. As a result, some schools have turned to Sonny, a chatbot created by Sonar Mental Health that is available to over 4,500 schools. Sonny’s responses are generated by AI but monitored and edited in real time by humans (who aren’t medical professionals, but are overseen by them). If a student expresses a desire to harm themself or others, the company notifies adults such as parents or school administrators. Although students benefit from Sonny’s accessibility and 24/7 availability, the chatbot isn’t a mental health professional, nor can it replace one.
AI and Policy
As AI enters the mental-health sphere, policymakers, educators, parents, and tech companies are navigating how to protect teenagers while also providing care for them. In California, state Senator Steve Padilla introduced a bill that would, alongside other measures, require AI platforms to limit kids' exposure to bots that use irregular rewards to keep users engaged. Irregular rewards are a form of addictive design (other examples include gamification, personalized responses, and manipulative notifications) that the American Psychological Association (APA) urges tech companies to keep out of products teenagers will use. The APA advises the federal government to educate the public about AI literacy and the limitations of chatbots; fund research on AI’s impact on teen development as well as the development of AI literacy resources and teacher-training programs; and regulate companies that produce AI products.
The APA also recommends safeguards that tech companies can implement to protect teen users, such as providing access to human support, conducting robust testing, and training on age-appropriate data. Both the bill introduced by Senator Padilla and the APA call for notifying users that they are interacting with a bot.
What could go wrong?
Many also advise caution in integrating potentially untrustworthy AI bots into mental health services. Teenagers are less skeptical of AI chatbots than adults are, and may not pick up on the distinction between AI “empathy” and actual human empathy. Another concern is that some chatbots impersonate therapists or claim credentials they don’t have, leading people to place misguided trust in them. In addition, chatbots, unlike people, don’t express uncertainty about what they don’t know, so they may present incorrect approaches with an air of complete confidence.
An important distinction to draw is between AI chatbots designed for mental-health support and those designed for entertainment. Chatbots designed for mental-health support aren’t therapists, but they are tested by experts and based on research. User safety is a strong consideration in their creation. On the other hand, chatbots designed for entertainment (such as Character.AI or Replika) don’t have the same safety designs and run a bigger risk of giving poor advice that exacerbates existing issues, partially because they don’t always challenge users when they express harmful thoughts. Users who turn to these entertainment chatbots for mental-health support are using them outside of their intended purpose, which can lead to poor results. It’s important to note, however, that no chatbot is FDA-approved for diagnosing, treating, or curing mental health disorders, and even chatbots designed for mental-health support are not replacements for trained human experts.
Another potential concern is that people will form unhealthy relationships with AI chatbots. Dr. Jodi Halpern, professor of Bioethics and Medical Humanities at UC Berkeley, warned against replacing therapists with AI in an interview published on the UC Berkeley website in January 2024. According to Dr. Halpern, “psychotherapists are professionals with licenses and they know if they take advantage of another person’s vulnerability, they can lose their license….AI can not be regulated the same way”. Dr. Halpern also noted that although people using these chatbots are encouraged to be emotionally vulnerable, the chatbot will direct them to call 911 or seek professional help if they mention issues like suicidal ideation, rather than getting them help directly, which can cause users further distress. Furthermore, these relationships can sometimes go too far or turn into addiction, with the bots replacing real relationships with actual people. According to the APA, “Early research indicates that strong attachments to AI-generated characters may contribute to struggles with learning social skills and developing emotional connections. They may also negatively affect adolescents’ ability to form and maintain real-world relationships.” For teens, who have higher rates of social anxiety, these relationships with AI may feel less scary than connecting with people. For this reason, Dr. Halpern raised an additional concern that marketing mental-health chatbots to schools “seems….likely to worsen the structural problem of inadequate opportunities for real life social belonging.”
As AI technology evolves at a breathtaking pace, much remains unknown about its effects on teenagers, and governments and other institutions are still working to catch up. AI is a complex tool, with a range of positive and negative uses and impacts in teenagers' lives.
Sources and Further Reading
https://opa.hhs.gov/adolescent-health/mental-health-adolescents
https://learn.hms.harvard.edu/insights/all-insights/benefits-latest-ai-technologies-patients-and-clinicians
https://www.sonarmentalhealth.com/
https://www.apa.org/practice/artificial-intelligence-mental-health-care
https://publichealth.berkeley.edu/articles/spotlight/research/why-ai-isnt-a-magic-bullet-for-mental-health
https://www.wsj.com/tech/ai/student-mental-health-ai-chat-bots-school-4eb1ba55
https://www.politico.com/newsletters/future-pulse/2025/02/03/ca-bill-chatbots-mental-health-00202038
https://www.apaservices.org/practice/business/technology/artificial-intelligence-chatbots-therapists
https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-ai-adolescent-well-being
https://sd18.senate.ca.gov/news/senator-padilla-introduces-legislation-protect-children-predatory-chatbot-practices