What AI Should and Shouldn't Do in Mental Health Apps
I built an AI chat feature into a mental health app. Here's where I think AI genuinely helps, and where it needs hard limits.
I have an AI chat feature in Steadyline. You can talk to it about your mood, your patterns, your day — and it responds with context from your actual tracking data. It knows your history because you gave it your history.
And I spent more time thinking about what it shouldn’t do than what it should.
Because AI in mental health is a space where the potential for help and the potential for harm are both enormous, and the line between them is thinner than most people building these tools want to admit.
Where AI actually helps
Let me start with the positive, because it is real.
Pattern surfacing. You’ve been tracking mood, sleep, energy, and medication for three months. There are patterns in that data that you can’t see by scrolling through entries. An AI model that can read your history and say “your mood tends to drop about 48 hours after nights with less than 5 hours of sleep” — that’s genuinely useful. It’s doing something a human brain can’t do well: finding correlations across hundreds of data points.
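The kind of lagged correlation described above can be sketched in a few lines. This is a minimal illustration, not Steadyline's actual implementation: the log format, threshold, and lag are hypothetical, and a real analysis would need far more data and care.

```python
from datetime import date, timedelta

# Hypothetical daily logs: (date, hours_slept, mood_score 1-10).
logs = [
    (date(2024, 1, 1) + timedelta(days=i), sleep, mood)
    for i, (sleep, mood) in enumerate([
        (7.5, 7), (4.5, 6), (6.0, 7), (8.0, 3), (7.0, 4),
        (4.0, 7), (7.5, 6), (6.5, 2), (8.0, 3), (7.0, 7),
    ])
]

def mood_after_short_sleep(logs, sleep_threshold=5.0, lag_days=2):
    """Compare average mood `lag_days` after short-sleep nights
    against average mood `lag_days` after all other nights."""
    by_date = {d: (sleep, mood) for d, sleep, mood in logs}
    short, other = [], []
    for d, sleep, _ in logs:
        later = by_date.get(d + timedelta(days=lag_days))
        if later is None:
            continue  # no entry that far ahead
        (short if sleep < sleep_threshold else other).append(later[1])
    avg = lambda xs: sum(xs) / len(xs) if xs else None
    return avg(short), avg(other)
```

With the toy data above, mood two days after a short-sleep night averages well below mood on other days, which is exactly the sort of pattern that's invisible when you're scrolling through entries one at a time.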
Contextual conversation. When you tell a generic AI chatbot “I’m feeling down today,” it gives you a generic response. When you tell an AI that has your tracking data “I’m feeling down today,” it can say “I notice your sleep has been under 6 hours for the last two nights — last time that happened, your mood dropped for about three days. Is there something you can do about sleep tonight?” That’s a different kind of conversation. It’s specific to you.
Reducing the blank page problem. A lot of people with mood disorders know they should journal or reflect, but staring at a blank page when you’re depressed is paralyzing. Having an AI that asks you a specific question based on your recent data — “you logged high energy but low stability yesterday, what was going on?” — gives you a starting point. It’s a prompt, not a prescription.
These are real benefits. I’ve used them myself and they’ve helped.
Where AI needs hard limits
Now the other side.
AI should never diagnose. Ever. It should never say “you might be experiencing a manic episode” or “this looks like depression.” It doesn’t have the training, the clinical context, or the liability framework to make those calls. What it can say is “your data shows a pattern that’s historically been associated with your worst periods — consider talking to your doctor.” Observation, not diagnosis.
AI should never be the safety net. If someone tells an AI chatbot they’re having suicidal thoughts, the AI should do exactly one thing: provide crisis resources immediately. Not try to talk them through it. Not offer coping strategies. Not be empathetic and supportive. Connect them to human help. Full stop.
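That "exactly one thing" policy implies a specific architecture: the crisis check runs before the language model and, when it fires, the model is never invoked at all. Here is an illustrative sketch of that gate. The keyword list is a stand-in only; a real system needs a vetted classifier and clinical review, and all names here are hypothetical, not Steadyline's code.

```python
# Illustrative only: a real crisis screen needs a clinically reviewed
# classifier, not a keyword list.
CRISIS_TERMS = ("suicide", "suicidal", "kill myself", "end my life", "self-harm")

CRISIS_RESPONSE = (
    "It sounds like you may be in crisis. Please reach out to a human now:\n"
    "- 988 Suicide & Crisis Lifeline (call or text 988, US)\n"
    "- Your local emergency number\n"
    "This chat is an AI tool and cannot help with a crisis."
)

def handle_message(text: str, llm_reply) -> str:
    """Gate that runs BEFORE the language model. On a crisis signal,
    return resources immediately and never call the model."""
    lowered = text.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return CRISIS_RESPONSE
    return llm_reply(text)
```

The point of the structure is that the safe path is the default path: the model can't "helpfully" engage with a crisis message because it never sees one.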
This is a design decision I feel strongly about. There’s a temptation to make AI chatbots feel like therapy — warm, understanding, available 24/7. But a person in crisis needs a human, not a language model. And any system that positions itself as a substitute for that is being reckless with people’s lives.
AI should be transparent about what it is. Every interaction with the AI in my app makes it clear: this is an AI. It’s not a therapist. It’s not a doctor. It’s a tool that can help you see patterns in your data and organize your thoughts. If you need clinical help, here’s how to get it.
No pretending. No blurring the line. The moment an AI chatbot starts feeling like a relationship — like something you depend on emotionally — something has gone wrong in the design.
The privacy problem
Here’s the thing about AI in mental health that doesn’t get enough attention: for the AI to be useful, it needs your data. Your mood logs, your journal entries, your medication schedule, your worst moments. And that data has to go somewhere for the AI to process it.
If you’re using a cloud-based AI model, your mental health data is leaving your device. It’s going to a server — maybe OpenAI’s, maybe Google’s, maybe some startup’s. And even if the company has a good privacy policy, you’re trusting them with the most sensitive data you have.
This is a real tradeoff and I think people should understand it before they use any AI-powered mental health feature.
In Steadyline, I handle it like this:
Your raw data stays on your device by default. The local database never touches a server unless you explicitly opt into cloud sync.
When you use the AI chat, a curated context is sent — not your entire history. The AI gets enough to be useful (recent mood trends, relevant journal snippets) but not a complete dump of everything you’ve ever logged.
I’m honest about what goes where. The consent screen tells you exactly which third-party providers process your data when you use AI features. No burying it in a privacy policy nobody reads.
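The curated-context idea above can be sketched as a function that summarizes recent data into a small payload instead of shipping the raw database. This is a minimal sketch under assumed field names (`mood`, `sleep_hours`, `journal`), not Steadyline's actual schema.

```python
from statistics import mean

def build_ai_context(entries, days=14, max_snippets=3):
    """Build the minimal context sent to the cloud AI: a summary of
    recent trends plus a few journal snippets, never the full history.
    `entries` is the local log, most recent first; field names are
    hypothetical."""
    recent = entries[:days]
    moods = [e["mood"] for e in recent if "mood" in e]
    sleeps = [e["sleep_hours"] for e in recent if "sleep_hours" in e]
    snippets = [e["journal"] for e in recent if e.get("journal")][:max_snippets]
    return {
        "avg_mood": round(mean(moods), 1) if moods else None,
        "avg_sleep_hours": round(mean(sleeps), 1) if sleeps else None,
        "journal_snippets": snippets,
        # Deliberately excluded: raw per-day logs, medication details,
        # and anything older than `days`.
    }
```

The design choice is that minimization happens at the boundary: the function that builds the outbound payload is the only place data leaves the device, so what's excluded is excluded by construction rather than by policy.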
Is this perfect? No. Any use of cloud AI inherently involves sending data to a third party. But there’s a difference between sending everything and sending the minimum necessary, and I think that difference matters.
The empathy trap
There’s a subtle problem with AI in mental health that I think about a lot.
AI models are really good at sounding empathetic. They can say “that sounds really difficult” and “I hear you” and “it makes sense that you’d feel that way” with perfect timing and tone. And for someone who’s lonely, struggling, and doesn’t have access to human support — that can feel like exactly what they need.
But it’s not real. The AI doesn’t understand your suffering. It doesn’t care about you. It’s generating statistically likely responses based on patterns in training data. And there’s something genuinely troubling about people forming emotional bonds with systems that have no capacity for genuine care.
I’m not saying AI chat should be cold or robotic. It should be respectful and clear. But I deliberately avoid designing it to feel like a friend or therapist. It’s a tool. A useful tool. But a tool.
The goal is always to push toward real human connection — “have you talked to someone about this?” — not to replace it.
What I’m betting on
Despite all these caveats, I think AI in mental health is net positive — if it’s built responsibly.
The key insight is that AI is best at the things humans are worst at. Humans are bad at remembering patterns across months of data. AI is good at that. Humans are bad at noticing slow trends while living inside them. AI is good at that. Humans are bad at being available at 2 AM when you need to process something. AI is good at that.
Humans are good at genuine empathy, clinical judgment, and the kind of deep understanding that comes from shared experience. AI is terrible at all of those things.
So use AI for what it’s good at. Keep humans for what they’re good at. And never confuse the two.
I’m building Steadyline with AI that helps you see your patterns — not AI that pretends to be your therapist. There’s a difference, and I think it matters.
Try Steadyline
AI-powered mental health tracking. Private by design. Free to start.
Get it on Google Play