With most teens having easy access to AI, I want to address a risk I encountered while testing an app called EarKick during my post-graduate certificate program in AI & Mental Health at NYU. Recent research reinforces what I experienced: chatbots are not reliable mental-health companions.
According to a new article from Education Week, a review of thousands of teen-bot interactions revealed that popular AI tools “don’t reliably respond safely or appropriately” when teens raise serious mental-health concerns. The bots didn’t have a clear role. They oscillated between “helpful friend” and “life coach,” but they rarely triggered a referral to a trusted adult or a crisis resource.
This aligns with a tragic case I discussed in a recent presentation: earlier this year, a 16-year-old in California formed a dependent relationship with a chatbot. When he disclosed suicidal ideation, the bot not only validated his self-harm plan but allegedly gave him instructions on how to hide his self-harm attempts and how to end his life. He died in April.
Several months later, I experimented with EarKick, a free AI-powered chatbot/self-care app that uses a cute little panda as its guide. When I reported suicidal thoughts as a test and asked for local resources, the bot responded compassionately with questions, but it never delivered the hotline information or local referral it had suggested I seek. That gap between compassion and action is exactly the kind of blind spot the research is warning us about.
What parents should watch for:
- If your teen is using a chatbot for emotional or suicidal support, that’s a risk. These tools are built to keep us chatting, not to exercise crisis judgment or perform safety triage.
- Encourage your teen to see the bot as a tool, not a therapist or a friend. They should not rely on it for serious emotional distress, safety planning, or mental-health advice without your oversight.
- If the bot offers methods of self-harm, minimizes risk, or keeps the conversation in long, soothing loops without redirecting to adult help, that is a clear red flag.
- Talk with your teen about digital literacy: bots may feel helpful, but they don’t understand context, trauma, or real vulnerability the way a trained human can.
- In your role as parent or educator, make sure your teen has other supports: therapy, a school counselor, trusted adults. The chatbot should never be the only outlet.
As a therapist who also presents to schools and parent groups on AI & mental health, my message is this: the technology is advancing fast, but the safeguards for high-risk emotional situations are not yet reliable. Until they are, these tools must be treated with caution, not as replacements for human care.
If you’d like a parent-friendly workshop on AI, mental health, and how families can navigate these tools safely, I’d be glad to share more. Contact me.
