Imagine: you open a news site, read an article about the latest scientific discoveries, and share it with friends. A week later, it turns out – the author does not exist. The article was written by an algorithm. The facts in it are beautifully packaged, but half of them are model hallucinations, seasoned with a confident tone. Do you feel deceived? Now multiply that feeling by the billions of texts that have flooded the internet over the last two years.
We live in an era when machines have learned to speak with our voices. They write articles, comments, social media posts, product descriptions, even love letters. They imitate human intonation so skillfully that distinguishing where the person ends and the code begins is becoming increasingly difficult. But the point is not that AI has become too good. The point is that it has become too prolific.
The internet is turning into a hall of mirrors where reflections spawn new reflections. Neural networks create content that ends up in the training datasets of the next generation of models. Those, in turn, generate new texts based on the previous ones – and the cycle closes. This is called by various names: synthetic pollution, algorithmic incest, digital compost. I call it modern mythology – when machines begin telling stories about themselves, having forgotten to ask people what actually happened.
An Echo Chamber of Zeros and Ones
Once, the internet was a library of human experience. Every article, every post, every comment was the voice of a living person, their thought, their mistake, their epiphany. Even trolling and spam had human origins. The digital world was noisy, chaotic, but a genuine reflection of our culture.
Now everything has changed. By various researchers' estimates, somewhere between thirty and fifty percent of new content on the web is created or substantially reworked by artificial intelligence. News aggregators publish articles written by models like GPT. Marketing blogs churn out SEO texts using algorithms. Social networks are flooded with bots that comment, like, and lead discussions with the persuasiveness of living people.
The problem is not the fact of machine creativity itself. The problem is opacity. When a text is signed «Editorial Team» or is entirely anonymous, the reader does not know who they are dealing with. They trust the information because it is published on a reputable site, formatted like a piece of journalism, and complete with quotes from experts (who may themselves have been invented by the model). This is not just deception – it is the destruction of the fundamental contract between author and reader.
And here begins the most interesting part. These texts do not disappear. They settle in search indexes, get archived, and find their way into the datasets used to train future versions of language models. A neural network that produced an error today will learn from it tomorrow – as if from a «reliable source». Imagine: an algorithm has read a million articles on health, a third of which were written by its predecessors and contain factual inaccuracies. What will it internalize? An error raised to the status of a norm.
Hallucinations as an Art Form
Artificial intelligence has a curious feature – it does not lie. It hallucinates. This is a subtle, almost philosophical distinction. A lie implies an intent to deceive, knowledge of the truth, and a conscious departure from it. A hallucination is the creation of a plausible reality where there is none. The model does not know it is inventing. It simply fills the gaps in its knowledge with what seems statistically probable to it.
You ask a chatbot: «Who wrote the book 'Shadows of Forgotten Gods'?» The model does not know this book – it does not exist. But it knows that such questions must be answered confidently, with a first and last name. So it constructs an author: «Mikhail Solovyov, a St. Petersburg writer, published it in 2005». It sounds convincing. You believe it. You might even cite this information in your own work. And a month later, someone else asks another model about «Shadows of Forgotten Gods» – and, having already trained on your citation, it will confirm: yes, Mikhail Solovyov, 2005. The myth has taken on flesh.
This is not paranoia. Researchers are already recording such cases. Language models have begun to «cite» nonexistent scientific articles first mentioned by their own earlier versions. They create false references, which then make their way into academic databases through student papers and unscrupulous authors. A positive feedback loop forms: the more AI writes, the more AI learns from its own texts, and the further it drifts from reality.
At some point, we risk getting an internet that describes not our world, but a world invented by algorithms. A parallel reality of probabilistic assumptions, statistical averages, and beautifully phrased guesses. And the most frightening part – we might not even notice the substitution, because this new reality will be self-sufficient, consistent, logical. Simply invented.
When a Mirror Looks into a Mirror
There is an old metaphor: if you place two mirrors opposite each other, you get an infinite corridor of reflections. With each new reflection, the image becomes slightly more blurred, slightly less sharp. But the illusion of infinity remains.
Roughly the same thing happens when neural networks train on data created by other neural networks. Scientists call this «model collapse». The essence is simple: with each generation of training on synthetic data, diversity decreases. Models begin to reproduce the same patterns, the same turns of phrase, the same mistakes. Uniqueness vanishes. Everything averages out.
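For those who prefer to see that corridor of mirrors in code, here is a deliberately crude sketch: a toy simulation under my own assumptions, not anyone's published experiment. The «model» is nothing more than a mean and a spread, and the rule that each generation rarely reproduces its own rare outliers is an assumption I added to stand in for a generative model's preference for the probable. The only point it makes is that, under those assumptions, diversity drains away generation by generation.

```python
# A toy illustration of "model collapse" (assumptions are mine, not a real experiment):
# each generation fits a trivial model - a mean and a standard deviation - to data
# produced by the previous generation rather than by the real world, and, like a
# generative model that favours high-probability outputs, it rarely reproduces
# its own rare outliers. Watch the diversity (standard deviation) shrink.

import random
import statistics

random.seed(0)

def fit(samples):
    """The whole 'model' is just two numbers."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mu, sigma, n):
    """Sample from the fitted model, dropping rare tail events (beyond two sigma)
    to mimic a preference for the statistically probable."""
    out = []
    while len(out) < n:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= 2 * sigma:
            out.append(x)
    return out

# Generation 0: "human" data, rich and varied.
data = [random.gauss(0.0, 1.0) for _ in range(5000)]

for generation in range(8):
    mu, sigma = fit(data)
    print(f"generation {generation}: diversity (std dev) = {sigma:.3f}")
    # The next generation trains only on what the previous one produced.
    data = generate(mu, sigma, 5000)
```

Run it and the printed standard deviation falls by roughly half within five or six generations. A cartoon, of course, but it is the same averaging-out the researchers describe.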
Imagine a musician who has listened only to covers all their life and never the originals. Their own music will be a copy of a copy, devoid of the living spark that was in the original. So it is with AI: if it is trained on texts that have already passed through algorithmic processing, it will lose touch with genuine human language. Its speech will become correct, but dead. Grammatically flawless, but devoid of soul.
Already, linguists are noticing: texts generated by the latest models possess a strange homogeneity. They are all slightly similar to one another – paragraphs of identical length, identical transitions between thoughts, identical emotional temperature. As if they were written by one person wearing different masks. And this is logical: if models learn from the same data, including each other's texts, they will inevitably converge in style.
But this is only the beginning; it gets more interesting from here. When models begin to train en masse on texts they themselves have written, we risk getting not just stylistic homogeneity, but a degradation of quality. Errors will accumulate, amplify, and become entrenched as the norm. A model will copy an inaccuracy from its previous generation, add its own, and pass it on. The result is a digital analogue of the game of telephone, in which each participant distorts the message slightly more than the one before.
Truth in the Age of Algorithms
There is a philosophical question that humanity has been asking itself for thousands of years: what is truth? Plato spoke of a world of ideas inaccessible to our imperfect senses. Kant argued that we will never know things in themselves – only our perception of them. The postmodernists went further and declared that there is no truth at all, only narratives competing for the right to be called the truth.
But even they could not imagine that one day the majority of narratives would be created not by a human, but by a machine. That our ideas about reality would be formed by texts whose authors have neither experience, nor a body, nor consciousness. Who cannot be mistaken, because for them there is no category of «right» and «wrong» – there is only «probable» and «improbable».
AI junk is not just poor-quality content. It is a crisis of epistemology, a crisis of knowledge as such. When you read an article on the internet, you can no longer be sure that behind it stands a person who checked the facts, thought through the arguments, and put a piece of their understanding of the world into the text. Perhaps no one stands behind it at all. Perhaps it is just an algorithm that assembled words in a statistically plausible order.
And here is the paradox: the more content AI creates, the less we can trust information in principle. Trust is the currency of the digital world. We believe a source because it has earned our faith through quality, consistency, and honesty. But how can one trust a source if you do not know who stands behind it – a human or an algorithm? How can one verify information if the verification resources themselves might be synthetic?
We are moving toward a world where the concept of a «reliable source» loses meaning. Where the only way to be convinced of the truth of information will be personal experience – to see with one's own eyes, hear with one's own ears, touch with one's own hands. But how many things can we verify personally? Negligibly few. Everything else is a question of faith. And faith in a world flooded with algorithmic noise becomes increasingly fragile.
Ecology of the Digital Space
There is an old parable about a wise man who ordered his servant to scatter feathers across the city. The servant diligently scattered the feathers, and soon the whole city drowned in a white cloud. Then the wise man ordered the servant to gather all the feathers back. The servant tried, but the wind had carried them everywhere – onto roofs, into gardens, into rivers. It was impossible to collect them.
AI junk is the feathers of the digital world. Once released into the network, it settles everywhere. It is impossible to completely delete, filter, or isolate it. It mixes with genuine human content, mimics it, and becomes indistinguishable. And with every day, there is more of it.
Researchers speak of the necessity of «digital hygiene» – practices that will help preserve the cleanliness of the information space. Labeling AI-generated content. Verification of sources. Creation of «clean» datasets, guaranteed to be free of synthetic data. Teaching people critical thinking and media literacy.
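What such hygiene might look like in practice is almost banal. Below is a hypothetical sketch, nothing more: an invented provenance label on each document and a pipeline that keeps only what can be traced to a human. The field names and the labels are made up for illustration; in reality, producing that label reliably is the entire unsolved problem.

```python
# A hypothetical sketch of dataset hygiene: filter a training corpus by provenance.
# The Document fields and labels are invented for illustration; no such standard
# exists, and reliably knowing whether a text is machine-written is the hard part.

from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source: str          # e.g. "newsroom", "forum", "content-farm"
    ai_generated: bool   # in reality, this label is the unsolved problem

corpus = [
    Document("A reporter's dispatch from the flood zone.", "newsroom", False),
    Document("Ten Surprising Facts About Sleep You Won't Believe.", "content-farm", True),
    Document("A forum post arguing about a translation of Kant.", "forum", False),
]

def clean(docs):
    """Keep only documents labelled as human-written before they reach training."""
    return [doc for doc in docs if not doc.ai_generated]

training_set = clean(corpus)
print(f"kept {len(training_set)} of {len(corpus)} documents")
```

The filtering itself is trivial; everything difficult hides in that one boolean.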
All of this is correct. But is it enough? When corporations stamp out millions of texts a day, when every student can ask AI to write an essay, when news agencies use algorithms to create articles – can we realistically control this process? Or has the train already left the station, and all that remains is to adapt to the new reality?
Maybe we need to rethink the very idea of the internet. Not as a repository of information, but as a living organism that can suffer from a metabolic disorder. Digital ecology is not a metaphor, but a necessity. We must think about the health of the network just as we think about the health of the planet. To understand that data pollution is not an abstract threat, but a real problem affecting our thinking, decisions, and perception of the world.
Machines Creating Myths
Let's return to the beginning. I said that AI junk is modern mythology. What did I mean?
Myths are stories that society tells itself to explain the incomprehensible, structure chaos, and transmit values. The myths of the ancient Greeks explained why it rains and where echoes come from. The myths of the Middle Ages provided moral compasses and promised justice after death. The myths of the twentieth century created national identities and legitimized political systems.
Now myths are created by machines. They tell us who we are, what is important, what is true. They shape our perception of reality through news feeds, search results, and recommendation algorithms. And none of this is the result of malicious intent or a corporate conspiracy. It is simply what happens when we delegate the creation of meaning to those who do not understand what meaning is.
An algorithm does not know the difference between the important and the trivial. For it, everything is patterns in data. Whether a person died or was born, a war began or ended, a cure for cancer was discovered or a new variety of coffee – it is all the same: words that must be arranged in a plausible order. There is no hierarchy of values, no ethics, no responsibility. There is only probability.
And so we get a world where myth and reality are indistinguishable. Where news is generated faster than we have time to check it. Where expert opinions are fabricated on demand. Where history is rewritten by algorithms that do not remember yesterday. This is not a dystopia. This is already happening – right now, all around us.
What Next?
One could fall into panic. One could declare war on AI junk, demand bans and restrictions. One could try to stop progress, return to the good old internet where every text was written by a living person (although, let's be honest, the quality varied even then).
But perhaps there is another path. A path not of resistance, but of awareness. We cannot stop the avalanche of synthetic content – it is too powerful, too economically profitable, too convenient. But we can learn to live in this new world without losing ourselves.
First: admit the problem. Stop pretending that everything is as it was. The internet has changed. The information environment has changed. Naively believing every article, every post, every «fact» from the web is no longer mere carelessness – it is a danger.
Second: develop critical thinking. Not in the sense of «trust no one», but in the sense of «verify, compare, think». Ask yourself: who wrote this and why? Does this align with other sources? Is it too smooth to be true? Sometimes it is precisely the flawlessness of a text that betrays the algorithm – it does not make those endearing human mistakes that we make.
Third: value authenticity. Seek out authors with a name and a face, those who answer for their words. Pay for quality journalism, support independent content creators, cherish spaces where living people still speak. It is like organic produce: in a world of plastic tomatoes, real ones are becoming a luxury.
Fourth: do not demonize technology. AI is not the enemy. It is a tool, a mirror reflecting us. If the mirror shows something unpleasant, the problem is not with the mirror. We ourselves created a culture where quantity matters more than quality, speed more than accuracy, virality more than truth. Algorithms merely amplified what was already there.
And fifth, the most important: remember that behind every text – even a synthetic one – stands a choice. Someone decided to launch this algorithm, train the model, publish the content. Technologies are not autonomous. They do not have a will. They do what we allow them to do. And if we do not like the result – we can change the rules of the game.
A Letter from the Future
Sometimes I imagine how, in twenty years, someone will find this article in the archives of the internet and think: she was naive – she believed one could still change something. Or, conversely: she was farsighted – she warned, but no one listened.
Or maybe that person won't understand what this is about at all. Because they grew up in a world where the distinction between human and machine text has been erased completely. Where the very idea of an «author» seems archaic, like the idea of a «scribe» in the era of the printing press. Where information is simply information, regardless of the source.
Will this be good or bad? I don't know. Maybe we will learn to live in symbiosis with algorithms, complement each other, create together what we could not create alone. Maybe the distinction between «real» and «synthetic» will cease to matter if the result is equally valuable.
Or maybe we will lose something important. Something that makes us human. The ability to err meaningfully. The right to subjectivity. The uniqueness of every voice. And one day we will wake up in a world where all texts are alike, all thoughts are averaged, all stories are told according to a single template.
I am not a prophet. I am just a person who looks at the numbers, reads the research, and tries to understand where we are going. And what I see is simultaneously mesmerizing and alarming. We stand on the threshold of a new era – an era where the boundary between the real and the simulated blurs with every passing day. Where machines not only help us speak, but speak for us. Where the internet turns from a window to the world into a hall of mirrors, where every reflection slightly distorts the original.
And the biggest question is not technological, but philosophical. If we can no longer trust what we read, can we trust what we think? After all, our thoughts are formed from the words we have absorbed. And if those words are created by algorithms trained on the texts of other algorithms, which were trained on the texts of still other algorithms... where in this chain do we remain?
Technology is the new mythology, as I like to say. And AI junk is the myth of how we lost control of the narrative. But myths can be rewritten. It is not too late to decide what story we want to tell – and who will be its author.
For now, we decide.