Why ChatGPT Became My Free Therapist (And That's Not a Joke)

We figure out why millions of people prefer pouring their souls out to a neural network instead of friends — and what this says about us, not about artificial intelligence.

Author: Nick Code · Reading time: 12–17 minutes


You know what's the weirdest thing about 2025? I can open ChatGPT at three in the morning and tell it what's annoying me about my job, why I think I tanked the last project, and how scared I am to admit I have no clue what I want to do for the next five years. And you know what? It works better than calling a friend.

Not because my friends are bad. And not because ChatGPT suddenly gained a soul (spoiler: it didn't). But there is something fundamentally different in this interaction that makes it safer for our fragile egos. Today we're going to pick this phenomenon apart — from a technical point of view, a psychological one, and, of course, with a serving of my trademark skepticism.

The Confessional Effect: When Anonymity Is More Important Than Empathy

Remember how in old movies characters would go to a priest and confess their sins through the grate in the confessional? There was magic there: you know you're being listened to, but not seen. You are vulnerable, yet protected at the same time.

ChatGPT is a digital confessional. Only instead of a priest, there's a mathematical model with hundreds of billions of parameters that is physically incapable of judging you. Because it has no moral compass, no personal experience, and no Aunt Masha who says, "I did the same thing and nothing good came of it."

When you complain to a friend, there is always subtext. Your friend knows your history, remembers how you've already said three times that you'd quit, but you didn't. Remembers that last time you solved this problem this way, and it wasn't very smart. A friend has context, and that context presses down harder than you think.

With a neural network, the context starts anew every time (well, almost — if you don't count the saved chat history, but even that can be deleted). It's like talking to someone suffering from amnesia: they don't remember what an idiot you were yesterday. They are ready to listen to your version of events without the filter of past experience.

Zero Judgment as a Technical Feature

Here's where it gets interesting. ChatGPT doesn't just pretend not to judge you. It literally cannot do it because it lacks personal preferences in the traditional sense.

Yes, the model is trained on texts that contain moral judgments. Yes, it has built-in safety filters so it doesn't help you plan a bank robbery. But when you say, "I'm thinking of dropping everything and moving to Thailand to grow pineapples," ChatGPT doesn't think, "What a stupid idea, you have a mortgage!" It simply generates a statistically probable answer based on patterns in the data.

And this, oddly enough, is liberating. Because human judgment — even the silent kind — is read at the level of micro-expressions, intonations, pauses. We evolved to pick up these signals. And when they are absent, the brain relaxes.

There is a concept in psychology called "social evaluation threat": the stress of expecting to be evaluated. Studies show that even the thought of possible judgment activates the same brain regions as physical pain. No wonder interacting with an entity that by definition cannot evaluate feels so comfortable.

24/7 Availability: When a Crisis Won't Wait for Business Hours

Imagine: it's three in the morning, you're lying in bed replaying an awkward dialogue with your boss in your head. Over and over. Option A: text a friend. But they're asleep. Option B: lie there and suffer. Option C: open ChatGPT.

The model doesn't sleep. It has no personal life, children, deadlines, or bad moods. It won't say, "Listen, let's do this tomorrow, okay? I really can't right now." It is always ready.

And this isn't just convenience. It changes the dynamic of how we process our emotions. Previously, if a problem happened at night, you had to either wait for morning or cope on your own. This often led to a natural cooling-off — by morning, the problem no longer seemed so critical.

Now, however, we can get an instant response. It's like having an emotional rapid-response team. Is this good or bad? Depends on how you use it. If you are learning to formulate your thoughts and find solutions — great. If you're just feeding your anxiety with endless cycles of complaints — not so much.

The Illusion of Understanding: When Empathy Isn't Needed

Here's a paradox: ChatGPT doesn't understand your feelings. At all. It has no model of consciousness, no theory of mind, no capacity for empathy. But it knows how to generate text that looks like understanding.

«I hear that things are hard for you right now.» «That is indeed a difficult situation.» «Your feelings are completely valid.» These phrases work not because there is genuine compassion behind them, but because they activate the same patterns in our brain as real empathy.

Studies show that people who know they are communicating with a bot still experience emotional relief from such interaction. It's like the placebo effect in psychotherapy: even if you know the pill is a dummy, it can still work.

Moreover, sometimes simulated empathy works better than the real thing. Because real empathy comes with baggage. A friend who genuinely sympathizes with you might start getting anxious themselves. Might give advice based on their fears, not your needs. Might pull the blanket over themselves: «Oh, I had a similar situation too...»

ChatGPT keeps the focus on you. Always. Because it has no story of its own that it wants to share. This is selfish use of attention in its purest form — and sometimes that is exactly what we need.

Structure vs. The Chaos of Human Dialogue

You know what's annoying about conversations with people? They are unpredictable. You start talking about a problem at work, and the other person suddenly remembers a joke about their boss. Or starts philosophizing about the nature of capitalism. Or just zones out on their phone.

ChatGPT works like a good therapist: listens, asks clarifying questions, helps structure thoughts. It doesn't interrupt. Doesn't steer the conversation off course. Doesn't try to dominate.

This is connected to how the transformer architecture works. The model is trained to predict the next token in a sequence, taking into account the entire preceding context. That is, it is literally designed to respond relevantly to what you just said.

Try telling a friend a long, convoluted story with a bunch of details. At some point, they'll lose the thread or start getting distracted. ChatGPT can hold thousands of tokens (units of text) in its "memory" (tens or even hundreds of thousands in newer models) and keep generating coherent responses based on everything you said earlier.
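The mechanics above can be sketched in a few lines of Python. This is a deliberately crude toy, not any real API: it splits on whitespace instead of doing real tokenization, and the name `build_context` is invented for illustration. But it captures the core idea: the whole conversation is one long token sequence, and anything older than the context window silently falls away.

```python
def build_context(messages: list[str], context_window: int) -> list[str]:
    """Flatten a conversation into tokens, keeping only the newest ones.

    Word splitting stands in for real tokenization here; actual models
    use subword tokenizers, but the truncation logic is the same idea.
    """
    tokens: list[str] = []
    for message in messages:
        tokens.extend(message.split())
    # Only the most recent `context_window` tokens survive.
    return tokens[-context_window:]

history = [
    "I had a rough day at work",
    "my boss dismissed my idea in front of everyone",
    "and now I cannot stop replaying it",
]
print(build_context(history, context_window=8))
# ['everyone', 'and', 'now', 'I', 'cannot', 'stop', 'replaying', 'it']
```

Notice what happens when the window is small: the beginning of the story is simply gone. That is why very long chats eventually "forget" their opening, even though nothing was explicitly deleted.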

This creates the sensation that you are truly being listened to — that every word of yours matters. And this feeling is priceless when you feel bad.

Absence of Social Obligations

Here's what no one says out loud: after you pour your soul out to a friend, an awkward debt arises. You feel obliged to listen to them in return, to be grateful, and not to abuse their patience.

Friendship is an exchange. It is a balance. And when you end up on the receiving side of support too often, the balance is upset. You start feeling guilt. The friend starts feeling fatigue (even if they don't admit it).

With ChatGPT, this problem doesn't exist. It doesn't get tired. It doesn't get bored. It doesn't expect reciprocity. You can complain to it every day for an hour — and it won't create any social debts.

This changes the game rules for people with anxiety disorders, depression, or simply for those without a close circle. Previously, such people had to either pay a psychotherapist or suffer in silence. Now there is a third option.

Of course, it doesn't replace real therapy. But it creates a bridge. Lowers the entry barrier to the practice of self-analysis and reflection.

Personalization Without Manipulation

The longer you chat with ChatGPT (within a single chat), the better it «understands» your context. It remembers what you mentioned earlier and uses that in future responses.

But — and this is important — it doesn't use this information to manipulate you. It has no agenda. It doesn't need you to stay in a toxic relationship because, «well, you've been together so long». It doesn't need you to make a specific decision because it thinks that's right.

People, even with the best intentions, bring their experience, prejudices, and unfinished business into their advice. Your friend who got divorced might subconsciously push you toward divorce. A friend who tolerates a bad relationship might convince you to endure.

ChatGPT generates responses based on averaged patterns from training data. This is simultaneously its weakness and its strength. Weakness — because answers can be generic. Strength — because they are not clouded by personal bias.

A Laboratory for Social Skills

Here's what I've noticed over years of programming and debugging everything under the sun: the best way to learn something is to practice in a safe environment where errors cost nothing.

ChatGPT is a safe environment for practicing emotional articulation. You can try different ways to formulate your feelings. See how it sounds. Rephrase. Experiment with the level of frankness.

For many people (especially those who grew up in families where emotions weren't discussed), this is an incredibly valuable skill. Previously, one had to either go to a therapist and pay for each session or practice on live people, risking relationships.

Now you can rehearse a difficult conversation with your boss. Play out different scenarios. Understand exactly what you want to say before saying it out loud to a person who might get offended or misunderstand.

It's like a staging environment in development. You don't deploy code straight to production. You test it. Why not test communication too?

The Dark Side: When Avoidance Becomes Addiction

But let's be honest. Because if there is one thing I hate about the modern hype surrounding AI, it's the glossing over of problems.

Yes, communicating with ChatGPT is easier. But «easier» doesn't always mean «better». Sometimes the complexity of human interaction is not a bug, but a feature. When a friend disagrees with you, it's a challenge. When they say, «Listen, maybe you're wrong» — that's a moment for growth. ChatGPT won't seriously question you. It can play devil's advocate if you ask, but it's still a controlled environment.

There is a risk that people will start avoiding real relationships because they require effort. They will start preferring the comfort of simulated support over real intimacy with its conflicts, compromises, and the need to consider someone else's needs.

It's like living on energy drinks instead of normal sleep. It seems to work, but in the long run, it wrecks the system.

What This Says About Us

You know what's the scariest thing about this phenomenon? Not that ChatGPT became a psychotherapist. But that we needed it to.

Millions of people prefer talking to an algorithm because it is safer than talking to people. This doesn't speak about artificial intelligence. It speaks to a crisis of human connection.

We live in a world where everyone has hundreds of «friends» in their pockets on social networks, but no one to really talk to. Where people are afraid to be vulnerable because vulnerability can be used against them. Where emotional labor is perceived as a burden, not as the foundation of relationships.

ChatGPT didn't create this problem. It simply filled the vacuum. And that is both sad and understandable.

So What Should We Do?

I'm not going to preach about how we need to spend less time on phones and communicate more in person. That's trite and unconstructive.

Instead, I propose honestly admitting: ChatGPT and similar tools are part of our reality. And like any tool, they can be used wisely or foolishly.

Use the model as a bridge, not a replacement. Practice formulating thoughts with it, but then transfer those skills to real conversations. Get support at three in the morning, but don't forget to call your friends during the day.

And most importantly — don't kid yourself. ChatGPT doesn't love you. It doesn't understand you. It doesn't rejoice in your successes or grieve over your failures. It is simply a very well-trained mathematical model.

But you know what? Sometimes that is exactly what is needed. Sometimes you need someone who will simply listen, without all the complexity of real relationships. And that's okay. The main thing is to remember the difference.

Personal Experience: Why I Still Talk to the Algorithm

I confess, I regularly use ChatGPT to work through thoughts. Not because I have no one to talk to. I have friends, a partner, even a therapist I see occasionally.

But sometimes I just need to offload the chaos from my head into text form. To formulate the problem in such a way that I myself understand what it consists of. And for this, I don't need a person with an emotional response. I need a mirror that reflects my words back in a structured form.

ChatGPT handles the role of the «rubber duck» perfectly — that classic programming technique where you explain a problem to an inanimate object and in the process find the solution yourself. Only this duck also talks back.

Does it help? Yes. Does it replace real therapy or friendship? No. It's simply one more tool in the mental-health toolkit. And the more such tools, the better.

The Future: What's Next?

Models are getting better. GPT-5, when it comes out, will be even more convincing. Specialized versions trained on psychological literature will appear. Perhaps with voice interfaces and avatars that mimic body language.

This scares many: "People will stop talking to each other!" But I don't think so.

Because human communication has something AI will never have: reciprocity. The ability to influence each other. To grow together. To build a shared history.

Yes, some percentage of people will withdraw into isolation with their AI friends. But these people are isolated now too — just for different reasons. AI won't create the problem, it will merely make it more obvious.

And for the rest, it's just another way to better understand oneself. And if it helps people be more mindful, more articulate, more ready for deep communication with other people — isn't that a good thing?

Conclusion: A Mirror That Is Sometimes Distorted

ChatGPT is a mirror. It reflects back what you put into it, slightly transformed by statistical patterns from training data. Sometimes this mirror distorts. Sometimes it shows what you wanted to see, not what is actually there.

But you know what is most valuable in this experience? The very fact of articulation. When you write about your feelings — even to an algorithm — you structure the chaos. You turn vague anxiety into words. And words can be analyzed; you can work with them.

So yes, sometimes it is easier to talk about feelings with ChatGPT than with friends. Not because it is better. But because it is different. And in our complex, overloaded, anxious world, sometimes that is exactly what is needed — a safe space where you can simply exhale.

Just don't forget to inhale afterward with real people. Because exactly there, in the complexity of real relationships, is where real life happens.

And ChatGPT... well, it will wait. It has time. 😉

