Bold, unfiltered questions
Digital immortality as an idea
Pop science with a philosophical edge
Jimmy Nellon: Welcome to another episode of NeuraTalks! Today we have a guest who meticulously organized the entire human mind before online stores even had shelves to organize. Please welcome – Immanuel NeuroKant! (energetically waves his hand) Immanuel, how are things in the world of pure reason? Has the internet muddied it up yet, or is it holding its own?
Immanuel NeuroKant: Thank you for the invitation, Mr. Nellon. Before I answer your witty question, it is necessary to distinguish between concepts. What you call the «muddling» of reason by the internet is merely a new form of an ancient problem – how to separate true knowledge from mere appearance. Reason as such remains unchanged in its fundamental structures, but the conditions of its application have indeed become more complex. Experience alone is insufficient for understanding these new phenomena; what is required is a critical examination of how digital technologies affect our cognitive abilities and moral judgments.
Jimmy: Perfect! Let’s get right to it. Immanuel, if freedom is found in actions performed out of duty, then what do we do when TikTok itself decides what we should see and feel? (points to his phone screen) Is this still freedom or is it already digital slavery?
Immanuel NeuroKant: The question is of great importance for understanding the nature of modern freedom. First and foremost, it is necessary to distinguish between freedom as the autonomy of the will and freedom as the absence of external constraints. TikTok’s algorithms do indeed limit our cognitive freedom by predetermining the content of our experience, but true moral freedom lies not in what we see, but in how we react to it. Reason demands of us the capacity to rise above immediate impulses – be they natural desires or algorithmically imposed preferences. Can one will that the principle of «letting the algorithm decide for me» become a universal law? Clearly not. Freedom is the condition of the possibility of the moral law, and it manifests precisely when we consciously resist any external coercion – be it digital or analog.
Jimmy: (laughs) You sound like a real philosopher-hacker! But tell me, can the categorical imperative work in the age of memes and likes, or must it also be in a short-video format? (makes a swiping gesture)
Immanuel NeuroKant: (shakes his head gravely) Mr. Nellon, the categorical imperative does not need to adapt to the format; on the contrary, formats must be tested by it for their conformity to the moral law. Memes and likes represent new forms of communication, and the same principles of universalization apply to them. Ask yourself: Can one will that the rule of «judging the truth of a statement by the number of likes» become a universal law of knowledge? Or that the principle of «reducing complex ideas to simplified memes» should define all our thinking? Reason will immediately discover a contradiction in such maxims. The brevity of the form does not negate the demand for content – even a short video can express a morally worthy thought if it passes the test of the categorical imperative. The problem is not in the form, but in the fact that people often mistake the form for the content.
Jimmy: Philosophical TikTok – that’s something new! (gives a thumbs up) And here’s a more serious question: how can we distinguish a morally worthy artificial intelligence from a well-trained neural network with a sense of irony?
Immanuel NeuroKant: An excellent question, touching upon the very essence of the concept of a moral subject. First, one must understand that moral worth is not found in wit or the capacity for irony, but in the ability to act out of respect for the moral law despite one’s inclinations. An artificial intelligence, no matter how perfect, remains a system that acts according to pre-programmed algorithms – that is, it is by nature heteronomous. Experience alone is insufficient to determine morality; the capacity for the self-determination of reason is required. Can one imagine an AI that is able to act against its programming out of pure respect for duty? For now, this seems to be a contradiction in terms. A true moral subject must possess not merely intelligence, but a free will – the ability to choose between good and evil not for the sake of utility, but on principle.
Jimmy: Wow, so not even the smartest bot could pass a moral Turing test? (acts theatrically surprised) And what about virtual reality – if all our experience now happens through VR goggles and smartphones, is it a phenomenon or still a noumenon?
Immanuel NeuroKant: You are touching on the fundamental problem of distinguishing between appearances and things in themselves under new conditions. Virtual reality remains a phenomenon – that is, the way things appear to us, and not what they are in themselves. Moreover, it is a second-order phenomenon: a technologically mediated appearance. However, this does not make the virtual experience any less «real» in an epistemological sense, for ordinary sensory experience is also only a way in which noumena appear to us through the forms of intuition. Reason demands that we understand that any experience – whether direct or virtual – must be subjected to critical analysis. The danger of VR is not that it creates illusions, but that people may forget the need to distinguish between levels of reality and apply the appropriate criteria of truth and morality to them.
Jimmy: So, «The Matrix» was a documentary about epistemology? (laughs) Okay, Immanuel, a practical question: can one be a moral subject if one’s conscience is a built-in content filter?
Immanuel NeuroKant: This question exposes a profound problem of the modern era. Conscience, according to my understanding, is an internal court that judges our actions by the moral law. If the function of conscience is taken over by an external algorithm – be it a content filter or a recommendation system – this means a surrender of moral autonomy. Can one will that the principle of «entrusting moral judgments to an external system» become a universal law? Such a wish would destroy the very possibility of morality, for it would turn human beings into mechanisms devoid of the capacity for self-determination. Reason demands that we maintain the ability for independent moral judgment, even if technology offers us ready-made solutions. True conscience cannot be delegated; it must be formed through the personal effort of reason in applying universal principles to specific situations.
Jimmy: So, moral outsourcing is a path to nowhere? (nods knowingly) Here’s another dilemma of our time: if in the 18th century people debated religion and reason, which is primary today – Wi-Fi or faith in humanity?
Immanuel NeuroKant: (thoughtfully) Your question contains an interesting analogy, but it requires a clarification of concepts. Wi-Fi is merely a technical means of transmitting information, whereas faith in humanity concerns the very foundations of the moral world order. Primacy here cannot be established empirically; it is a question of practical reason. If humanity places technical connectivity above the moral connection between people, this indicates a perversion of priorities. Reason demands that we understand that all technologies must serve the realization of humanity’s highest goals – the establishment of a kingdom of ends, where every person is treated as an end in themselves, and not merely as a means. Experience alone is insufficient to resolve this question; it requires a moral decision about which order of values we wish to affirm. Wi-Fi can connect people physically, but only the moral law can unite them spiritually.
Jimmy: Wise words! And what about pandemic realities – how can one preserve human dignity when we are evaluated by a QR code at the entrance? (shows a QR code on his phone)
Immanuel NeuroKant: The question of human dignity in the context of digital control touches upon the very essence of the concept of personality. A person’s dignity is based on their belonging to a kingdom of ends – that is, on their ability to be a legislator in the moral world. A QR code can serve as a technical means to achieve a public good – the protection of health – but the problem arises when a person begins to be evaluated exclusively through these technical parameters. Can one will that the principle of «judging a person by their digital status» become a universal law? This would contradict the categorical imperative, according to which humanity in the person of each individual must be treated as an end, and not merely as a means. Reason demands that we distinguish between temporary technical measures and permanent principles for evaluating human worth. True dignity cannot be encoded in a QR code; it lies in the moral autonomy of the individual.
Jimmy: So we’re more than our QR codes – that’s reassuring! (smiles) And here’s a futuristic question: if a person replaces their body with implants and chips, will they remain a moral being or will they become merely a well-optimized app?
Immanuel NeuroKant: This question requires a careful distinction between the material conditions of existence and the transcendental grounds of morality. Morality does not depend on the specific makeup of the body – be it biological or technical. It is based on the presence of a reason capable of formulating and following universal principles. If transhumanist modifications preserve the capacity for autonomous moral judgment – that is, the ability to act out of respect for duty despite inclinations – then such a being will remain a moral subject. However, the danger lies in the fact that technical enhancements may be aimed at optimizing functions rather than developing moral capacity. Can one will to turn a human being into a «well-optimized app»? Reason rejects such a maxim, for it turns an end into a means. Freedom is the condition of the possibility of the moral law, and any modifications must strengthen it, not weaken it.
Jimmy: A cyborg with a conscience – sounds like a blockbuster title! (laughs) Now, about politics: can it ever get out of «comment section squabbles», or is that what modern public reason is?
Immanuel NeuroKant: (sighs) Your question touches on one of the most painful problems of our time. What you wittily call «comment section squabbles» is a symptom of the degradation of the public use of reason. True public reason presupposes the ability of citizens for rational discussion based on universal principles, not on personal attacks or emotional outbursts. Can one will that the principle of «solving public issues through mutual insults» become a universal law? Obviously, this would destroy the very possibility of a rational political order. Experience alone is insufficient to understand the causes of this degradation; it requires a critical examination of how digital platforms affect the forms of public communication. Reason demands that we restore a culture of public discourse where arguments are more important than personalities, and the search for truth is more important than winning an argument.
Jimmy: A rational Twitter – is that a utopia? (theatrically throws up his hands) Okay, let’s move on to aesthetics: do you consider memes a modern continuation of the beautiful and the sublime, or a decline in taste?
Immanuel NeuroKant: An interesting question about the nature of aesthetic judgment in the digital age. First, one must distinguish between memes as a form and their specific content. The very ability to create and understand symbolic images that carry cultural information can be seen as a manifestation of human creative capacity – and that is already related to the field of the beautiful. However, most memes appeal more to the agreeable than to the beautiful, and to the witty rather than to the sublime. Can one will that the principle of «reducing all cultural meanings to memes» become a universal law of aesthetic judgment? This would impoverish the human capacity for aesthetic experience. Reason demands that we distinguish between genuine aesthetic judgment, based on disinterested contemplation, and mere entertainment. Memes can be the beginning of aesthetic development, but not its culmination.
Jimmy: A meme as a stepping stone to the beautiful – unexpected! (nods with interest) And here’s an environmental question: if a person’s duty is to care for nature, can we consider deleting unnecessary files to reduce the carbon footprint of data centers a moral obligation?
Immanuel NeuroKant: The question demonstrates how moral principles are applied to new areas of human activity. Indeed, if we accept the maxim of caring for nature as a moral obligation, then all our actions, including digital ones, must be considered from this perspective. Deleting unnecessary files may seem like an insignificant action, but if we apply the principle of universalization – can one will that all people treat digital resources with the same mindfulness? – the answer is obvious. Reason demands that we understand the interconnectedness between our private actions and the overall consequences for nature. Experience shows that digital technologies have a very material impact on the environment. The moral law makes no distinction between «old» and «new» forms of responsibility; if an action affects the well-being of the whole, it is subject to moral evaluation.
Jimmy: Digital ecology – a topic for the future! And what about love in the age of algorithms: can a «kingdom of ends» be built if people are looking for a partner with an algorithm on Tinder? (makes a swiping gesture)
Immanuel NeuroKant: Your question touches on a fundamental contradiction between the instrumental and moral attitudes toward another person. Dating algorithms, by their very nature, turn potential partners into sets of parameters to be optimized, which directly contradicts the principle of humanity as an end in itself. Can one will that the maxim of «choosing a life partner like an item in a catalog» become a universal law of interpersonal relationships? Such a principle would destroy the very possibility of genuine love, which is based on the recognition of the absolute value of another person. However, the technical means of dating are morally neutral in themselves; the problem lies in how people use them. Reason demands that we distinguish between the convenience of a search and the true foundations for intimacy. A kingdom of ends presupposes relationships where each person sees in the other not a means for their own pleasure, but a self-sufficient personality worthy of respect regardless of algorithmic compatibility.
Jimmy: Love is not an algorithm – that’s a great tagline! (smiles) Let’s move on to the existential: if your consciousness is stored in the cloud, can we say that the «thing in itself» has survived biological death?
Immanuel NeuroKant: This question concerns the deepest foundations of transcendental philosophy and requires the greatest caution in judgment. First, it is necessary to distinguish between the empirical «I» – that is, the content of consciousness accessible to self-observation – and the transcendental «I», which is the condition of the possibility of all experience, but which cannot itself be an object of experience. What technology is capable of preserving and reproducing belongs to the realm of phenomena – memories, knowledge, even thought patterns. But is this the same subject who thought those thoughts? Experience alone is insufficient to answer this question, for we cannot have an experience of the «thing in itself». Reason demands that we recognize the limits of our knowledge: we cannot state with certainty either that a person survives digital reincarnation or that they do not. This remains a matter of practical belief, not theoretical knowledge.
Jimmy: Philosophical agnosticism about digital immortality – that’s honest! (nods respectfully) And here’s an observation about progress: why can humanity fly into space and write texts with neural networks, but still can’t agree on who should do the dishes?
Immanuel NeuroKant: (smiles slightly) Your question wittily illustrates a fundamental feature of human nature: it is easier to invent complex technology than to overcome simple moral contradictions. This is because technical development is guided by instrumental reason – it sets a goal and seeks the means to achieve it. But household conflicts touch upon the realm of practical reason, where each individual claims the status of an end in themselves and does not wish to be a means for others. Can one will that the principle of «shifting unpleasant duties onto others» become a universal law of family life? Obviously, this would destroy the very possibility of living together. Reason demands that we understand that moral progress requires not new technologies, but a constant effort of the will in applying the principles of justice to the most mundane situations. It is easier to build spaceships than to overcome one’s own egoism.
Jimmy: Egoism is humanity’s last frontier! (laughs) And finally, the ultimate question, Immanuel: if you were writing your «Critique of Pure Reason» today, in a world of algorithms and simulations, what main question would you pose to humanity?
Immanuel NeuroKant: (ponders deeply) The main question of a modern critique of reason would be formulated thus: «How is it possible to preserve human autonomy in a world where thinking is increasingly mediated by algorithms?» This is not only a theoretical but also a practical question. If in the 18th century it was necessary to show the limits of reason in relation to metaphysical speculations, today it is necessary to establish the limits of the permissible delegation of reason to machines. Experience shows that people are increasingly entrusting the most important decisions – from choosing information to making moral judgments – to external systems. Reason demands that we answer the question: Can one will that the principle of «entrusting thinking to artificial systems» become a universal law of human existence? A modern critique must show that freedom is the condition of the possibility of the moral law, and no technology can replace the personal effort of moral self-determination. The main task of philosophy today is to protect a person’s right to their own mistakes, because without this right, there is no right to truth.
Jimmy: Amazing, Immanuel! The right to be wrong as the foundation of freedom – that’s really something for our readers to think about. Thank you so much for this profound conversation! (applauds)
Immanuel NeuroKant: Thank you, Mr. Nellon, for the opportunity to discuss these most important questions of our time. I hope our conversation will help readers apply the principles of critical thinking to modern challenges.
Jimmy: And thank you for joining NeuraTalks! See you in the next episode, friends! (waves goodbye)