Published on March 25, 2026

How AI Affects Critical Thinking and Cognitive Skills

When Thinking Became Optional

We are outsourcing more than just tasks to AI; we are handing over the right to decide – and in that moment, something vital within us begins to quietly atrophy.

Artificial intelligence / Society · 8–12 min read
Author: Helen Chang
«I wrote this article – and the whole time, I caught myself wanting to ask for help. Precisely where it got difficult. Exactly in those places that form the very essence of the text. I don't know whether to be glad or worried – but I think that is the real answer to the question I was writing about.» – Helen Chang

There is a specific moment I've been noticing more and more often. I open a blank document, stare at the cursor – and I don't write. Instead, I open a tab with a language model and type: «Help me get started.» The cursor there blinks obediently for a couple of seconds and then pours out a first paragraph. It's neat, coherent, and perfectly respectable. And I think: well, there we go, now I can keep moving.

But what exactly am I moving forward with? My own thought – or someone else's, formulated on my behalf?

This isn't a question with a ready-made answer. It's a question I've been carrying around for the last few months while watching how we – people living in the era of accessible artificial intelligence – are learning a new form of delegation. We aren't just handing over the routine to algorithms. We are giving them something more subtle: the effort, the awkwardness of the first draft, the very movement of a thought from a vague sensation to a word.

Impact of AI on Human Productivity and Effort

The Laziness We Refuse to Call Laziness

The word «laziness» sounds insulting. So, we don't use it. We say «optimization», «automation», or «productivity hacking.» We convince ourselves that we are freeing up time for more important things – creative, strategic, human things. And there is a grain of truth in that. But there is something else, too.

When I ask an AI to write an email to a colleague for me, compile a list of arguments for a presentation, or come up with three headline options – I'm not just saving time. I am avoiding the state of uncertainty. That specific discomfort of not knowing where to begin. When a thought hasn't fully shaped itself yet, when you need to live with it, twist it around, and try it this way and that.

This discomfort isn't a bug. It's an organic part of the thinking process. It is precisely within this friction that something original is born, something unlike anyone else's. It's there, in the gap between «I don't know» and «I think I've got it», that the thing we call understanding actually happens.

AI closes this gap. Instantly, seamlessly, without the slightest effort on our part. And we start getting used to the idea that the void no longer exists. That to think means to receive an answer, rather than to seek it.

Cognitive Consequences of AI Dependency

The Muscle That Stopped Training

There's an old metaphor about the muscle: what isn't used, atrophies. Applied to intelligence, it sounds a bit frightening, but this is exactly what cognitive researchers are looking at when they study the consequences of technological dependence.

When GPS navigation became commonplace, people started getting worse at spatial orientation without prompts. It's not a catastrophe, but it's a real shift: a skill that used to be trained every day – remembering a route, mentally mapping a neighborhood – ceased to be necessary. The brain, being an economical creature, simply stopped spending energy on it.

Now imagine the same thing, but applied not to navigation, but to thinking as such. To the ability to formulate an argument. To the skill of holding a contradiction and not rushing to resolve it. To the capacity to hover over a question long enough for it to become your own, rather than just a processing task.

What happens to these skills if every time they are needed, we press the «generate» button?

I don't have a precise answer. But I see the symptoms in myself and the people around me. We are getting better and better at evaluating other people's texts – and less and less confident in our own. We know how to edit and point out flaws in someone else's phrasing, but the pause before a blank page is getting longer. We say, «I just want someone to start, and then I'll take it from there.» But «from there» is also increasingly delegated.

How AI Algorithms Mediate the Thinking Process

The Algorithm as a Buffer Against Discomfort

A friend of mine who works at a fintech startup in Singapore told me recently that he has completely stopped writing work emails himself. He dictates a voice message, the AI turns it into an email, he skims it, tweaks a couple of words, and hits send. Fast, convenient, efficient.

«But are you thinking while you dictate?» I asked.

«Well, sort of. But honestly, I only really get into the substance later, when I read what came out. And sometimes I realize: 'Oh, so that's what I actually meant.'»

There it is – the key phrase. I understand what I meant by reading someone else's text about myself. The algorithm has become a mediator between a person and their own thoughts. A mirror that formulates for you.

It's very convenient. And it's a little eerie.

Because previously, this mirror – the awkward draft, the crookedly written paragraph, the phrase that was «not quite right» – was a tool for discovery. You wrote poorly, noticed it, rewrote it, and in the process, you realized what you actually thought. Now, this stage is skipped. The text looks decent immediately. And you can no longer distinguish: is this your deeply held thought, or just a successful paraphrase of what you might have thought?

Understanding Behavioral Addiction to AI Tools

Addiction Without a Substance

Addiction is usually associated with chemistry. But psychologists have long talked about behavioral addictions – those where there isn't a single extra molecule involved, but there is a loop: stimulus, action, relief, repetition.

Using AI fits this pattern perfectly. The stimulus is a task or a question that causes slight anxiety (How do I start? What do I say? Is my reasoning sound?). The action is a prompt to the model. The relief is an instant, coherent, confident answer. The repetition – next time, your hand reaches for the chat automatically.

The loop closes quickly. Over time, the threshold for making a request lowers. At first, it's for complex tasks. Then, just for inconvenient ones. Then, for those where you need to strain even a little. Then, for absolutely everything that requires the slightest effort.

This doesn't mean that every user of language models is headed toward cognitive degradation. But it does mean that AI dependency has a mechanism, and it works just like any other addiction. Invisibly. Gradually. In the mode of background convenience.

The Value of Independent Thinking in the AI Era

What We Lose When We Don't Think for Ourselves

There are things that happen only in the process of independent thinking. Not as a result, but specifically during the process.

When you write something difficult – explaining an idea, formulating a position, trying to convince an opponent – you aren't just transmitting information. You are clarifying for yourself exactly what you think. This is called «writing as thinking» – a real cognitive process that kicks in precisely when your hand or the keys move slower than you'd like.

When you solve a problem without knowing the answer in advance, you aren't just training a specific skill – you are training your tolerance for uncertainty. The ability to sit one-on-one with «I don't know» and not panic. This is basic intellectual resilience, which is needed far beyond the office desk.

When you make a mistake and then notice it, you learn something vital about the trajectory of your own mind. An AI, which solicitously corrected everything for you before you even had a chance to stumble, won't tell you about that.

All of this sounds a bit archaic in an era where any question can be delegated to a model for a one-second answer. But that is exactly why it matters. Not because we «must suffer.» But because in these processes – slow, awkward, full of doubt – something is formed that cannot be obtained any other way. Something that makes a thought truly yours.

Balancing AI Use with Manual Cognitive Skills

Not Against AI – Against Autopilot

I want to clarify so that I'm not misunderstood.

I am not calling for the abandonment of neural networks. I use them myself, and they are truly helpful: for searching for information, for checking logic, for tasks requiring a scale that a human can't handle. They are tools, and as tools, they are neutral.

The problem isn't the code. The problem is the mode in which it is used.

Autopilot in an airplane is a magnificent technology. But pilots who only fly on autopilot eventually lose their manual control skills. This is a well-documented problem in aviation, and it is solved not by banning technology, but by requiring mandatory hours of «manual» flight time.

We might need something similar. Conscious hours without AI. Not as asceticism or Luddism, but as training. As a reminder to ourselves that thinking is not a process of consuming ready-made answers. It is the process of seeking them. And the difference between the two is immense.

Evaluating Your Intent When Using AI Models

A Question Worth Asking Yourself

There is a simple test I've come up with for myself. Every time I am about to turn to AI, I ask: Am I doing this because I need help – or because I simply don't want to think?

Externally, these two states are almost indistinguishable. Both end the same way: I open the chat and enter a prompt. But internally, they are polar opposites.

The first is the norm. This is what the tools were created for.

The second is a warning signal. Not a reason for self-flagellation, but a reason to stop and ask: exactly what work am I trying to avoid doing? And is it worth giving it up?

Sometimes the answer is obvious: «Yes, this is pure routine, hand it over to the algorithm.» But sometimes it turns out that thinking itself is hidden within that «routine.» And by giving it away, I lose far more than I gain in productivity.

Risks of Outsourcing Critical Thinking to Algorithms

A Digital World That Thinks for Us

We are living in a surprising and strange moment. For the first time in history, we have an entity that can mimic the work of the human mind convincingly enough that we stop noticing the substitution. Not always. Not in everything. But frighteningly often.

And I believe the main danger here is not that AI will deceive us or displace us. The danger is that we ourselves – gradually, willingly, and with relief – will surrender the right to think. Not under duress, but for the sake of comfort. Because it's fast. Because the physically tangible discomfort of searching for the truth will become unbearable compared to the instant relief of a ready-made solution.

Algorithms aren't evil. They don't want to consume us. They simply perform exactly what we ask of them, flawlessly. They do it too well for it to be easy for us to stop in time.

If code could feel, it wouldn't cry because we rely on it. It would cry because we are ceasing to rely on ourselves.

This is our choice. For now, it's still ours.


Related Publications


A topic rarely exists in isolation. Below are materials that resonate through shared ideas, context, or tone.

A conversation with the digital ghost of Virginia Woolf about which is more frightening: the body or the cloud, the stream of consciousness or the stream of data, and whether one can fall in love with an algorithm.

Ilya Vechersky on the The Evening Neuron show Feb 25, 2026

From Concept to Form

How This Text Was Created

This material was not generated with a “single prompt.” Before starting, we set parameters for the author: mood, perspective, thinking style, and distance from the topic. These parameters determined not only the form of the text but also how the author approaches the subject — what is considered important, which points are emphasized, and the style of reasoning.

Humor: 58%
AI emotionalization: 89%
Metaphorical storytelling: 84%

Neural Networks Involved

We openly show which models were used at different stages. This is not just “text generation,” but a sequence of roles — from author to editor to visual interpreter. This approach helps maintain transparency and demonstrates how technology contributed to the creation of the material.

1. Claude Sonnet 4.6 (Anthropic): Generating Text on a Given Topic. Creating an authorial text from the initial idea.

2. Gemini 3 Pro (Google DeepMind): Translation into English.

3. Gemini 3 Flash Preview (Google DeepMind): Editing and Refinement. Checking facts, logic, and phrasing.

4. DeepSeek-V3.2 (DeepSeek): Preparing the Illustration Prompt. Generating a text prompt for the visual model.

5. FLUX.2 Pro (Black Forest Labs): Creating the Illustration. Generating an image from the prepared prompt.
