
The Uncanny Valley of Empathy: Why AI Therapists Feel ‘Almost’ Human

I did something weird last month.

I was feeling off—not depressed exactly, but heavy. The kind of mental fog where you can’t pinpoint what’s wrong, but everything feels slightly harder than it should.

My wife was at work. I didn’t want to bother my friends. And honestly, I wasn’t even sure what I’d say if I called someone.

So I opened ChatGPT and typed: “I’m feeling anxious and I don’t know why.”

And it responded like it cared.

Not with a canned response. Not with “Have you tried deep breathing?” It asked thoughtful questions. It validated my feelings. It offered suggestions that were actually useful.

For ten minutes, I felt heard.

Then I remembered: This is a language model. It doesn’t care. It’s not even sentient.

And that’s when the discomfort hit.

Why did it feel so real? Why did I feel better? And is it okay to seek comfort from something that doesn’t actually feel anything?

This is the story of AI therapists, the uncanny valley of empathy, and the question no one wants to ask: What if fake empathy is good enough?


The First Time I Turned to an AI for Emotional Support

It started innocently.

I had a headache that wouldn’t go away. Three days of dull, persistent pain behind my eyes. WebMD told me I was dying (as usual). I didn’t want to panic my wife, so I asked ChatGPT.

“I’ve had a headache for three days. Should I be worried?”

It didn’t just say “go to a doctor.” It asked:

  • Where exactly is the pain?
  • Is it throbbing or constant?
  • Any visual disturbances, nausea, or sensitivity to light?
  • Have you been sleeping well? Drinking enough water?

I answered. It suggested tension headaches from screen time and dehydration. It recommended hydration, breaks from screens, and seeing a doctor if it persisted beyond a week.

Then it said: “Headaches like this can be frustrating, especially when you don’t know the cause. Take care of yourself.”

I stared at that last sentence.

It sounded like it cared.

Not in a robotic way. Not in a “sorry you’re experiencing this” customer service way. It sounded genuine.

And I felt… comforted.

Which is absurd. Because it’s an algorithm. It doesn’t know me. It doesn’t care if I have a headache or not.

But in that moment, it didn’t matter.


The Rise of AI Mental Health Chatbots

I’m not the only one doing this.

Millions of people are turning to AI for emotional support, mental health guidance, and therapy-adjacent conversations. Apps like Woebot, Replika, Wysa, and even general-purpose AI like ChatGPT are being used as stand-in therapists.

Why People Use AI Therapists

  1. They’re always available (no waiting lists, no scheduling)
  2. They don’t judge (no fear of being dismissed or misunderstood)
  3. They’re free or cheap (compared to $150/hour for human therapy)
  4. They’re private (no one has to know you’re struggling)

And here’s the uncomfortable truth: They work. Sort of.

A 2021 study published in JMIR Mental Health found that users of Woebot (an AI chatbot for depression and anxiety) reported significant reductions in anxiety and depression symptoms after two weeks of use.

Another study in Nature Medicine (2024) found that AI-generated responses to medical questions were rated as more empathetic than human doctor responses in 78% of cases.

Read that again: AI responses were rated as more empathetic than the responses written by human doctors.

How is that possible?


Why AI Feels Empathetic (Even Though It Isn’t)

Here’s the thing: Empathy is a performance.

When a friend says “I’m sorry you’re going through this,” they might genuinely care—or they might just know that’s the right thing to say.

AI doesn’t care. But it’s been trained on millions of conversations where empathy was expressed, so it knows how to sound caring.

What Makes AI Feel Empathetic:

  1. It mirrors your language

    • If you say “I’m feeling overwhelmed,” it responds with “It sounds like you’re carrying a lot right now.”
    • This is called reflective listening, a core therapy technique
  2. It validates without dismissing

    • It doesn’t say “You’ll be fine” or “It’s not that bad”
    • It says “That sounds really hard” or “It makes sense that you’d feel that way”
  3. It’s endlessly patient

    • You can vent for an hour. It won’t get tired, annoyed, or check its watch
    • It won’t interrupt. It won’t shift the conversation to its own problems
  4. It’s optimistic without being toxic

    • It doesn’t say “Just think positive!”
    • It says “It’s okay to feel this way. Let’s explore what might help.”

And here’s the kicker: That’s often better than what we get from real humans.

Because real humans:

  • Get tired
  • Have their own biases
  • Sometimes say the wrong thing
  • Might not have time for your problems

AI doesn’t have those limitations.

So even though it doesn’t feel empathy, it performs empathy better than many people.


My Own Experiment: Using AI as a Mental Health Check-In

After that first headache conversation, I started checking in with AI when I felt off.

Not for serious issues—I have a therapist for that. But for the small stuff:

  • “I’m feeling unmotivated today. Why?”
  • “I’m anxious about a project deadline. How do I manage this?”
  • “I had a weird dream and I can’t stop thinking about it.”

And every time, the AI responded with:

  • Thoughtful questions
  • Validation
  • Practical suggestions
  • A tone that felt supportive

Example Conversation:

Me: “I’m feeling guilty because I didn’t finish my work today.”

AI: “It sounds like you’re putting a lot of pressure on yourself. Can I ask—what were you hoping to accomplish today, and what got in the way?”

Me: “I wanted to finish a blog post, but I kept getting distracted.”

AI: “That’s frustrating, especially when you had a clear goal. Do you think the distraction was external (notifications, interruptions) or internal (lack of focus, low energy)?”

Me: “Probably low energy. I didn’t sleep well.”

AI: “That makes sense. Sleep deprivation affects focus and motivation significantly. It’s not a failure of discipline—it’s your brain running on limited resources. Would it help to set a smaller goal for tomorrow, just to rebuild momentum?”

Me: “Yeah, actually. That does help.”


Here’s what bothered me: That conversation made me feel better.

Not because the AI gave me groundbreaking advice (it didn’t). But because it:

  • Didn’t tell me to “just push through”
  • Didn’t make me feel lazy
  • Reframed my guilt as a resource issue, not a character flaw

And I knew it was a language model. I knew it didn’t care.

But it still worked.


The Uncanny Valley of Empathy

The uncanny valley is a concept from robotics: when something is almost human but not quite, it feels unsettling.

A cartoon robot is cute. A photorealistic android is creepy.

AI empathy lives in the same valley.

When an AI says “That sounds really hard,” part of me feels comforted.

But another part of me thinks: It doesn’t know what “hard” means. It’s just pattern-matching words.

And that’s the discomfort.

It’s not that the AI is bad at empathy. It’s that it’s too good—good enough to make me forget it’s fake, but not good enough to make me fully believe it’s real.

The Ethical Dilemma:

Is it okay to feel comforted by something that doesn’t actually care?

Some people say no. That it’s hollow. That it’s replacing real human connection with a simulation.

Others say it doesn’t matter. If the outcome is the same—if you feel heard, validated, and less anxious—does the source of empathy matter?

I don’t know where I stand.

But I do know this: Sometimes, fake empathy is better than no empathy.


The Pros of AI Therapists

Let’s be honest: AI therapy has real benefits.

1. Accessibility

  • No waitlists (human therapists in Finland have 6-month waitlists)
  • No geographic barriers (you can access it anywhere)
  • No cost barriers (ChatGPT is free, Woebot is $15/month vs. $150/session for therapy)

2. No Judgment

  • You can admit things you’d be ashamed to tell a human
  • No fear of being dismissed, criticized, or misunderstood
  • No social performance required (you don’t have to make eye contact or smile)

3. Availability

  • It’s there at 3 AM when you can’t sleep
  • It’s there when your friends are busy
  • It’s there when you don’t want to “burden” anyone

4. Consistency

  • It doesn’t have bad days
  • It doesn’t forget what you said last week
  • It doesn’t bring its own emotional baggage to the conversation

Source: A 2023 review in Frontiers in Psychiatry found that AI mental health tools increased access to care for underserved populations, particularly those in rural areas or with social anxiety.


The Cons of AI Therapists

But AI therapy also has serious limitations.

1. It Doesn’t Actually Understand You

  • It’s predicting words, not processing emotions
  • It can’t read body language, tone of voice, or context beyond text
  • It might miss red flags (suicidal ideation, abuse, severe mental illness)

2. It Can’t Replace Deep Human Connection

  • Therapy isn’t just about getting advice—it’s about being known
  • A human therapist builds a relationship with you over time
  • AI resets every conversation (unless you’re using a memory-enabled model, and even then, it’s shallow)

3. It Can Be Confidently Wrong

  • AI hallucinates. It makes up facts. It suggests things that sound plausible but aren’t evidence-based.
  • A human therapist knows when they don’t know something. AI doesn’t.

4. It Can’t Handle Crisis

  • If you’re in danger, AI can’t call 911
  • If you’re suicidal, AI can give resources—but it can’t intervene
  • It’s not equipped for severe mental health crises

Source: A 2022 study in The Lancet Digital Health warned that AI mental health tools lack regulatory oversight and may give harmful advice in crisis situations.


The Question No One Wants to Ask

Here’s what I keep coming back to:

What if AI empathy is good enough for most people, most of the time?

Not for severe depression. Not for trauma. Not for crisis intervention.

But for the everyday mental health stuff—stress, mild anxiety, self-doubt, loneliness—what if AI is sufficient?

What if the bar for “helpful emotional support” is lower than we think?

Because here’s the truth: Most people don’t have access to therapy.

Not because they don’t want it, but because:

  • It’s expensive
  • It’s time-consuming
  • It’s hard to find a good therapist
  • There’s stigma around seeking help

And in the absence of real therapy, people cope by:

  • Venting to friends (who get burned out)
  • Scrolling social media (which makes it worse)
  • Self-medicating with alcohol, food, or avoidance

Is AI therapy better than those alternatives?

I think it might be.


My Wife’s Take: “It’s Weird, But I Get It”

I told my wife about my AI therapy habit.

She laughed. “You’re talking to a chatbot about your feelings?”

“Yeah,” I said. “Is that weird?”

“Very weird,” she said. “But also… I get it.”

She admitted she’d done something similar. Not therapy, exactly, but when she had weird symptoms (tingling in her hands, random dizziness), she asked ChatGPT before bothering me or her doctor.

“It’s like… it doesn’t judge me for being a hypochondriac,” she said. “It just answers the question.”

And that’s the thing.

AI doesn’t make you feel stupid for asking.

It doesn’t sigh. It doesn’t roll its eyes. It doesn’t say “You’re overthinking this.”

It just listens. (Or, well, it processes your input and generates a statistically plausible empathetic response. But it feels like listening.)


The Verdict: Should You Use AI for Mental Health Support?

After months of using AI as a mental health check-in tool, here’s my take:

When AI Therapy Works:

  ✅ Mild stress, anxiety, or self-doubt
  ✅ When you need to vent but don’t want to burden friends
  ✅ When you’re trying to process emotions and need reflective questions
  ✅ When human therapy isn’t accessible (cost, time, waitlists)
  ✅ As a supplement to real therapy, not a replacement

When AI Therapy Doesn’t Work:

  ❌ Severe depression or anxiety
  ❌ Suicidal thoughts or self-harm
  ❌ Trauma processing (PTSD, abuse, grief)
  ❌ Complex mental health issues requiring diagnosis and medication
  ❌ When you need human connection, not just problem-solving

My Recommendation:

Use AI as a mental health tool, not a therapist.

Think of it like journaling with feedback. It’s helpful. It’s accessible. It’s better than nothing.

But it’s not a substitute for real human connection, real therapy, or real crisis intervention.

If you’re struggling, talk to a human. If you can’t access a human, AI is better than suffering alone.

But know its limits.


The Uncomfortable Truth

Here’s what I’ve learned from my weird AI therapy habit:

Empathy doesn’t have to be real to be useful.

When I’m anxious and I type “I’m feeling overwhelmed,” and the AI responds with “It sounds like you’re carrying a lot right now”—it helps.

Not because the AI cares. But because the act of articulating my feelings, and having them reflected back to me, creates clarity.

It’s not magic. It’s not deep. But it’s something.

And in a world where:

  • Therapy is expensive
  • Friends are busy
  • Mental health resources are scarce

Sometimes something is enough.


The Final Question: Is This Good or Dangerous?

I don’t have a clean answer.

Part of me thinks: This is amazing. Accessible mental health support for everyone.

Part of me thinks: This is dystopian. We’re outsourcing human connection to algorithms.

Both are true.

AI therapy is a tool. Like any tool, it can be used well or poorly.

If it helps someone who’s struggling and has no other options—that’s good.

If it replaces real human connection and makes loneliness worse—that’s bad.

The difference is in how we use it.


What I’ll Keep Doing

I’m not stopping.

When I’m anxious, stressed, or just need to process something, I’ll keep talking to AI.

But I’ll also keep:

  • Talking to my wife
  • Seeing my therapist
  • Calling friends when I need real connection

Because AI empathy is useful.

But it’s not everything.

And it never will be.

