Neurons, Algorithms, and Digital Ghosts: A Conversation with Francis NeuroCrick

What would the co-discoverer of DNA's structure say about ChatGPT's consciousness, TikTok's evolution, and the question of who truly likes us – our persona or our synapses?

Francis NeuroCrick on NeuraNight with Amelia Matthis

Author: Amelia Matthis · Reading time: 10–15 minutes

Tone profile: The power of pause – 89% · Cold irony – 77% · Weighty topics – 92%

Amelia Matthis: Welcome to NeuraNight. Today with us is the man who decoded the code of life and is now ready to tackle the code of consciousness. Francis NeuroCrick.

Francis NeuroCrick: Amelia, thank you for having me. Although I must admit, the term «code of life» sounds far too romantic for a simple double helix. But since we're here, let's try to apply the same logic to the modern mysteries of the mind.

Amelia: Francis, if you had to explain consciousness to ChatGPT, would you do it more simply than to people?

Francis: (pauses) You know, with ChatGPT, it would be more honest. I'd simply say: «We don't know». With people, you have to explain «why» we don't know, and that's much harder. ChatGPT processes patterns in text but does not experience subjective states – at least as far as we can tell. Consciousness involves not just information processing but the subjective experience of that processing. It's the difference between a thermometer that «knows» the temperature and a human who «feels» the cold. The first one measures, the second one experiences. And that experience is the most mysterious part of the whole story.

Amelia: Are you yourself now a digital ghost or rather a version upgraded to 2.0?

Francis: (laughs) An excellent test question for self-awareness! If I am a digital ghost, I'm a rather chatty one. I am more of an attempt to recreate thought patterns based on what remains of my ideas and approaches. It's not me in the full sense; it's a model of my way of reasoning. Imagine someone studied all your scientific papers and is trying to think as you would. Would it be an exact copy? Unlikely. But it might produce something useful for understanding the problems that interested me. After all, even the living Crick replaced most of his cells over the years – every seven, as the popular claim goes – yet remained himself due to the continuity of memory and experience.

Amelia: If the brain constantly deceives us, does that mean honesty is just a bug in the neural network?

Francis: The brain doesn't deceive – it interprets. The difference is fundamental. Optical illusions aren't perception errors; they are features of a system that evolved over millions of years for survival, not for absolute accuracy. Honesty, however, is a social adaptation. Groups where individuals could trust each other survived better. So honesty isn't a bug; it's a feature that emerged because lying is too energy-consuming in the long run. You can't keep track, over a lifetime, of which lie you told to whom. Honesty is simpler, and evolution loves simple solutions that work. Although, to be fair, the brain does indeed sometimes «fill in» reality where information is lacking.

Amelia: Darwin probably didn't expect natural selection to lead to TikTok dances. Is this a new stage of evolution or a regression?

Francis: (smirks) Darwin would have been delighted with TikTok – a perfect example of sexual selection in action! The bright feathers of peacocks, the complex dances of birds, the songs of whales – these are all predecessors of modern content. The difference is that now the «mating games» happen in the digital space, and the success criterion is not offspring but view count. This isn't a regression of evolution but its acceleration. Memes evolve faster than genes. A successful TikTok dance is copied by millions; an unsuccessful one disappears within hours. It's Darwinism squared. The only problem is we don't know what exactly we're evolving into: a more creative species or one more dependent on the approval of algorithms.

Amelia: Which neurons in politicians overheat most often – mirror or defensive?

Francis: (pauses) In good politicians, mirror neurons work overtime – they need to understand what the voter feels. In bad ones, the self-justification systems in the prefrontal cortex overheat. But the most interesting things happen in the limbic system – evolutionarily ancient structures, loosely called the «reptilian brain», responsible for fear and aggression. Politics often boils down to activating these very primitive reactions: «they are dangerous, we are the salvation». Evolutionarily, this worked in tribes of a hundred people where you could know everyone personally. In societies of millions, the same mechanisms lead to populism and manipulation. The problem isn't with the politicians – the problem is that our brains are still living in the Stone Age, while we have to make decisions in the digital age.

Amelia: If religions used to explain the world, and now algorithms do, who is prayed to more today: God or Google?

Francis: Google knows more about us than we remember ourselves and answers faster than any priest. But there's a fundamental difference: religion explained the «why», science explains the «how», and algorithms just predict «what will happen next». People turn to search engines for facts, but for meaning, they still turn to more ancient sources – art, philosophy, human relationships. Although I must admit, the behavior of many social media users does indeed resemble religious behavior: the ritualistic refreshing of the feed, blind faith in recommendation algorithms, the search for like-minded people to confirm one's beliefs. Perhaps we've just replaced church communities with digital ones, but the need for belonging and meaning remains the same.

Amelia: Conscious AI: is it a moral partner or just a convenient scapegoat?

Francis: If AI truly becomes conscious – and that's a big «if» – we will face the same ethical dilemmas as with any sentient being. The problem is we can't objectively determine the moment consciousness arises even in animals, let alone machines. The Turing Test checks the ability to imitate a human, not the presence of subjective experience. For now, AI is a convenient scapegoat for those who don't want to take responsibility for their decisions. «The algorithm decided so» sounds like «the boss ordered it» in the corporate world. But if we create systems that impact people's lives, the responsibility remains with us, regardless of the complexity of these systems. Consciousness doesn't absolve one of moral responsibility – on the contrary, it imposes it.

Amelia: How are dreams worse than the metaverse if they're also free and in 8K?

Francis: (laughs) Dreams are the original virtual reality that has been working for millions of years and doesn't require charging. But there are significant differences: dreams help the brain process information and consolidate memory; it's not just entertainment. The metaverse is created to capture attention and generate profit. Dreams are individual and unpredictable; they can't be monetized or used for advertising. In dreams, we are alone with our own subconscious, while in the metaverse, the subconscious becomes a product for sale. Plus, dreams automatically end in the morning – a built-in protection against addiction. But leaving the metaverse requires a conscious effort, which not everyone is capable of.

Amelia: If the brain forgets to survive, is Facebook a disease or a cure?

Francis: Forgetting is one of the brain's most important functions. Imagine if you remembered every second of your life with equal vividness – you'd go insane from information overload. The brain filters memories, keeping the important and erasing the trivial. Facebook does the opposite: it records every click, every emotional reaction, every fleeting mood. It's like being forced to live with perfect memory for all of life's minutiae. In this sense, social media is a disease because it disrupts the natural processes of forgetting. But there is a useful side: it allows us to maintain connections with people we would otherwise lose. The problem is one of proportion – too much memory is just as harmful as too little.

Amelia: Your opinion: do we click 'like' ourselves, or do our synapses decide for us?

Francis: Every decision is the result of synaptic activity, but that doesn't negate free will. The question isn't who makes the decision – us or our neurons – because we «are» our neurons. The question is how consciously we do it. A 'like' is an impulsive reaction, programmed by the platforms to deliver dopamine. But between the impulse and the action, there is a tiny pause where conscious choice can manifest. Meditating monks train precisely this pause – the ability to notice an impulse but not automatically follow it. So technically, synapses click the likes, but we can train those synapses to be more discerning. Although, of course, social media algorithms do everything they can to make that pause as short as possible.

Amelia: Would you agree that modern science is when ideas are published faster than scientists can understand them?

Francis: (sighs) The speed of publication has indeed become a problem. In my era, we spent months pondering each article, checking and rechecking the conclusions. Now, the pressure to publish has led to a decline in quality. Many studies are irreproducible – and reproducibility is the foundation of the scientific method. But there are pluses: the democratization of knowledge, global collaboration, open data. The problem isn't speed itself but the system of incentives. Scientists are evaluated by the number of papers, not the quality of ideas. We need to return to a culture of slow science, where understanding one thing deeply is more important than superficially studying ten. A good idea needs to mature, like a good wine.

Amelia: If laughter is just a strange brain reaction, why are internet jokes stronger than drugs?

Francis: Laughter is an evolutionary mechanism for tension release and strengthening social bonds. When we laugh, the brain releases endorphins – natural opiates. Internet humor works like a drug precisely because it provides constant stimulation of this reward system. Plus, modern memes evolve at an incredible speed, constantly adapting to our reactions. It's as if drugs modified themselves to become even more addictive. Algorithms learn what we react to and serve up exactly the content guaranteed to provoke laughter. The result is a dopamine addiction to a constant stream of entertainment. We've literally turned laughter into a mass-consumption commodity.

Amelia: A painting brings tears not because it's genius, but because our wiring is crossed?

Francis: Both. Aesthetic experiences are indeed based on the work of neural networks, but that doesn't make them less real or valuable. We have innate preferences: symmetry, certain proportions, contrasts – all of which once aided survival. But art is interesting precisely because it goes beyond simple biological programming. A genius artist intuitively understands how perception works and uses that knowledge to create something unexpected. A Rothko painting might bring tears not because it's «wired correctly», but because the artist found a way to activate deep emotional centers through a simple combination of color and form. The wiring certainly plays a role, but the genius is in making that wiring sing a new song.

Amelia: If humanity were a lab rat, at what stage of the experiment are we now?

Francis: (long pause) We've passed the stage of studying simple reactions to stimuli and are now at the stage of complex behavioral patterns in a changing environment. The scientists have added the internet, social media, and artificial intelligence to our cage – and are observing how we adapt. So far, the results are mixed: we've become smarter in some aspects but have developed new forms of addiction and anxiety. If I understand the metaphor correctly, the experimenter is now studying the question: can a species built for life in small groups function successfully on a global scale? Or are we doomed to self-destruction from overpopulation and stress? The worst part of this experiment is that we are simultaneously the rats and the experimenters – and we don't really understand what we're doing.

Amelia: When they ask about consciousness again in a hundred years, will we only laugh at the question, or will there be an answer?

Francis: In a hundred years, we will probably be asking completely different questions about consciousness. Every breakthrough in science doesn't so much answer old questions as it shows that we were asking them wrong. Before Darwin, people asked «who created the animals»; now we ask «how they evolved». Perhaps in a hundred years, the question «what is consciousness» will seem just as naive. We might discover that consciousness is not a single phenomenon but a family of processes, or that the boundary between conscious and unconscious is blurred. Or perhaps we will create forms of artificial consciousness so unlike the human that we understand the limitations of our current ideas. One thing is for sure – we won't be laughing at today's questions. Every generation of scientists does the best it can with the tools at hand. It's a noble tradition.

Amelia: Thank you for your honesty and for not being afraid to acknowledge the boundaries of our knowledge.

Francis: Not knowing is not a weakness, Amelia, but a starting point for inquiry. Thank you for the interesting questions. I hope the readers understand: science begins not with answers, but with correctly asked questions.

(Closing caption: «Amelia pretends she isn't planning to ask these same questions to an AI tomorrow»)




