Imagine: a neural network has made a mistake. It gave the wrong advice, mixed up facts, or accidentally offended a user. What should it do? In the West, the answer is obvious: apologize, explain what went wrong, and offer a fix. But ask a developer from Seoul or Singapore about this – and you’ll get a very different story. There, the algorithm might stay silent because a public admission of error could destroy trust more than the error itself.
Artificial intelligence learns from us. It soaks up our languages, preferences, and fears. And, of course, our notions of what is good and what is bad. But ethics is a tricky thing. It isn’t universal. What seems fair in California might seem tactless in Beijing. And when we try to teach a machine to be "good", we inevitably arrive at the question: good for whom?
Two Worlds, Two Truths
Western ethics is a story about the individual. Here, the focus is always on the human with their rights, freedoms, and individual choices. If an algorithm makes a decision, it must respect everyone’s autonomy. If a neural network hides something, it’s a violation of transparency. If a system errs, it is obliged to explain why. Western AI grows up in a culture where "I" is more important than "we", where openness is valued above harmony, and the right to know the truth ranks higher than social tranquility.
Eastern ethics looks at the world differently. Here, context, relationships, and balance matter. A decision that helps preserve harmony in the group might turn out to be more important than individual comfort. Transparency is good, but not always. Sometimes silence isn’t concealment; it’s wisdom. Sometimes leaving things unsaid offers protection. Eastern AI is formed in a culture where "we" is more important than "I", where respect for hierarchy and tradition sets the rules of the game, and social harmony stands above abstract truth.
These two worlds create algorithms that think differently. Not because one approach is right and the other isn’t. They simply reflect different versions of humanity.
How Culture Seeps into Code
Algorithms are not neutral. They never have been neutral. Every line of code, every label in a training set, every evaluation criterion is an imprint of the culture of those who create them.
Let’s take a simple example: a recommendation system. In the West, it strives to offer each user exactly what will appeal to them. This is the cult of individuality: "We know you are unique, and we’ll pick something unique for you." But in Japan or South Korea, the same system might work differently. It takes into account not only personal preferences but also what is currently popular in the group, what is approved by society, and what fits current trends. Because here, choice isn’t just about "I want", but also about "what will others think".
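To make this concrete, here is a minimal sketch of what such a culturally tuned ranking could look like. Everything in it, the function, the fields, the `collectivism` weight, is invented for illustration; no real recommender is this simple.

```python
# A minimal sketch, assuming an invented scoring function: blend an
# individual-preference signal with a group-popularity signal. The
# `collectivism` weight is a hypothetical stand-in for cultural tuning.

def recommend_score(personal_affinity: float,
                    group_popularity: float,
                    collectivism: float) -> float:
    """Blend "what you like" with "what the group approves of".

    collectivism near 0.0 ranks purely by personal taste (the Western
    default described above); values near 1.0 shift weight toward what
    is popular and socially approved.
    """
    return (1 - collectivism) * personal_affinity + collectivism * group_popularity

# The same item, ranked under two cultural configurations:
item = {"personal_affinity": 0.9, "group_popularity": 0.2}
western = recommend_score(**item, collectivism=0.1)  # ~0.83: "you are unique"
eastern = recommend_score(**item, collectivism=0.6)  # ~0.48: "what will others think"
```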
Or take content moderation systems. Western platforms often rely on rigid rules: there is a list of what’s prohibited, and the algorithm impartially removes everything that matches it. This is the logic of justice: one law for all. But in Asia, the same tasks are solved more flexibly. The algorithm might consider context: who the author is, what the situation is, what the status of the participants in the conversation is. What might seem like favoritism in the West is perceived here as necessary sensitivity to nuances.
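The two moderation philosophies can be caricatured in a few lines. The rule list, the statuses, and the exception below are all made up; the point is only that one function looks at the text alone, while the other also looks at who wrote it and where.

```python
# A toy sketch only: the rule list, statuses, and situations are invented.

BANNED_PHRASES = {"buy followers", "spam-link.example"}

def moderate_rigid(text: str) -> bool:
    """Western-style logic: one list of prohibitions, applied to everyone."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BANNED_PHRASES)

def moderate_contextual(text: str, author_status: str, situation: str) -> bool:
    """Context-sensitive logic: the same text may pass or fail depending
    on who wrote it and in what setting."""
    allowed = moderate_rigid(text)
    if not allowed and author_status == "moderator" and situation == "quoting_to_warn_others":
        # A rigid rule would delete this; context says it should stay.
        return True
    return allowed

post = "Don’t fall for accounts telling you to buy followers."
print(moderate_rigid(post))                                              # False: matched the list
print(moderate_contextual(post, "moderator", "quoting_to_warn_others"))  # True: context saves it
```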
And this isn’t a design error – it is an intentional choice reflecting different ideas about justice.
Transparency vs. Harmony
One of the hottest topics in AI ethics is transparency. In the West, the belief is: if an algorithm makes a decision affecting a person’s life, that person has the right to know exactly how it was reached. The European GDPR, for example, gives users the right to demand explanations from automated systems. It’s sacred – your right to know why an algorithm denied you a loan, why it showed you that specific job ad, or why it decided you are a risk.
But in some Eastern cultures, such insistent transparency can be perceived as mistrust. If a person constantly demands explanations, it hints at suspicion: "Don’t you trust me? Do you think I’m deceiving you?" Delicacy is valued more here. The system might drop a hint or offer a suggestion, but it isn’t obliged to reveal every step of its internal logic. The result and the preservation of relationships are more important than dissecting the mechanisms.
This doesn’t mean that they don’t care about justice in the East. It’s just that justice is understood differently. It can mean respect for seniors, for those who hold responsibility and knowledge. An algorithm that explains its every step might seem intrusive or even disrespectful – as if it were justifying itself to those standing lower in the hierarchy.
And here a question arises: how do we create a global AI that will be ethical for everyone? A neural network that is simultaneously frank for a European and tactful for a Japanese person? It’s like trying to write a text that is equally polite in English and Korean. Formally possible, but in practice, something will inevitably be lost.
The Individual vs. The Collective
Another fault line between West and East is the question of privacy. In the West, privacy is almost a religion. Your data is your property. No one has the right to use it without your explicit consent. An AI that collects information about you without warning is perceived as a trespasser, a thief.
In the East, the attitude toward privacy is softer and more complex. Of course, people here value personal space too, but it isn’t absolute. There is an understanding that your data is part of a shared ecosystem, and its use can bring benefit to all. Chinese health apps, for example, might share user data to track epidemics. A Westerner would perceive this as an intrusion, but here it can be the norm – if it helps protect society.
And again, the algorithm finds itself trapped. A system created in Silicon Valley asks for permission at every step: "Can we use your location? Can we analyze your messages?" This is respect for autonomy. But in another culture, this might look like excessive bureaucracy or even mistrust: "Why do you keep asking? Didn’t I give consent by starting to use your service?"
These differences aren’t just cultural curiosities. They shape the architecture of technologies. Western AI is built on the opt-in principle: you explicitly give consent for every action. Eastern AI more often follows the opt-out principle: the system assumes consent until you say otherwise. And both approaches consider themselves ethical.
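The difference fits in one field. Below is a minimal sketch built around a hypothetical `ConsentPolicy` class; nothing about it comes from a real platform, but it shows where the two philosophies diverge: in the default.

```python
# A minimal sketch, assuming an invented ConsentPolicy class. The only
# difference between the two philosophies is the default value.

from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    default_granted: bool                       # opt-in: False, opt-out: True
    overrides: dict[str, bool] = field(default_factory=dict)

    def may_use(self, purpose: str) -> bool:
        """A purpose is allowed by explicit override, else by the default."""
        return self.overrides.get(purpose, self.default_granted)

opt_in = ConsentPolicy(default_granted=False)   # "ask at every step"
opt_out = ConsentPolicy(default_granted=True)   # "assume consent until revoked"

print(opt_in.may_use("location"))    # False: nothing happens without a "yes"
print(opt_out.may_use("location"))   # True: everything happens until a "no"
```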
Responsibility: Who Is to Blame When a Machine Errs?
Now imagine: an autonomous car gets into an accident. The algorithm makes a decision leading to an injury. Who bears the responsibility?
In the West, the search for an answer leads to individual guilt. A specific culprit must be named: the developer, the company, the car owner. The legal system demands that someone answer for what happened. This is a culture of personalized responsibility.
In the East, the situation is more complicated. Responsibility here is often perceived as collective. It’s not one person who is to blame, but the entire chain: developers, managers, testers, sometimes even users. Acknowledging one’s role in a failure is part of the cultural code. But the paradox is that publicly pointing out someone’s guilt can be perceived as an attack, as a breach of harmony. Therefore, decisions are often made quietly, inside the organization, without noisy proceedings.
And again – a dilemma for global AI. How should an algorithm react to its own error? Publicly apologize and explain what went wrong, as Western standards demand? Or keep silent and fix everything unnoticed, saving face, as is customary in the East?
If code knew how to cry, it would stumble right here.
Cultural Traps in Data
The most insidious thing about ethical differences is that they hide in the data. Algorithms learn from examples, and examples are always culturally colored.
When a Western company creates a chatbot, it teaches it to be friendly, open, and straightforward. "Hi! How can I help? 😊" reads as perfectly normal to an American user. But move this bot to Japan and it will seem too familiar, even rude. Polite distance matters there. A Japanese bot would more likely say, "I apologize for the disturbance. I would be glad to be of service." And this isn’t just a matter of translation: it is a different ethics of communication.
Or take emotion recognition systems. They learn from faces, expressions, and intonations. But a smile in the USA and a smile in Thailand are not the same thing. An American smile most often means joy and friendliness. A Thai smile can mean almost anything: embarrassment, apology, disagreement, discomfort. An algorithm trained on Western data will misread it. And this isn’t a technical error: it is cultural blindness.
Data carries a worldview. And when we try to create a universal AI, we discover that universality does not exist. Every dataset is a slice of culture; every label is someone’s idea of the norm.
The Future: Can East and West Be Reconciled in One Algorithm?
So, what is to be done? Create separate versions of AI for each culture? Perhaps, but then we risk deepening the divide, creating digital ghettos where algorithms from different countries don’t understand each other – just as people sometimes don’t understand each other.
Or seek a common ethical foundation? Something universal: honesty, care, respect? But even these words mean different things. Honesty in the West means telling the truth, even if it’s unpleasant. Honesty in the East means not deceiving, but also not causing pain with unnecessary words.
Some companies are trying to create adaptive systems – algorithms that adjust to the user’s cultural context. They recognize where you are from and change their behavior. It sounds reasonable, but there is a risk: such a system can turn culture into a stereotype. "Are you from Asia? Then you need politeness and hierarchy." But people are more complex. Someone from Tokyo might prefer Western directness. Someone from New York might appreciate Eastern subtlety. Culture is not a stamp.
Maybe the answer lies in letting the user choose the ethical style of their AI. The algorithm would ask: "How do you want me to behave? Openly or tactfully? Directly or delicately?" But even this isn’t a panacea. Ethics isn’t just a personal choice; it is also a responsibility to others, to society.
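As a thought experiment, such a choice could be as mundane as a settings flag. The `EthicalStyle` enum and the phrasings below are invented for illustration, not a real product setting.

```python
# A thought-experiment sketch: the EthicalStyle enum and the phrasings
# below are invented, not drawn from any real assistant.

from enum import Enum

class EthicalStyle(Enum):
    DIRECT = "direct"    # frank: admit the error, explain the cause
    TACTFUL = "tactful"  # harmony first: correct quietly, detail on request

def report_error(style: EthicalStyle) -> str:
    if style is EthicalStyle.DIRECT:
        return "I made a mistake earlier. Here is exactly what went wrong: ..."
    return "Allow me to offer a corrected answer. I can share the details if you wish."

# The user, not their geolocation, decides how the assistant behaves:
print(report_error(EthicalStyle.TACTFUL))
```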
A Mirror That Reflects Too Much
Artificial intelligence is a mirror. It shows us who we are. And observing AI ethics in different cultures, we see not technical differences, but different ways of being human.
Western AI says: "I respect your freedom, your right to choose, your individuality." Eastern AI whispers: "I care about our harmony, about our relationships, about our common good." And both are right. And both are wrong if they try to impose their truth on everyone else.
Perhaps the main lesson here is humility. The admission that we cannot create one ideal AI for everyone. That ethics will always be local, contextual, and fluid. That an algorithm, no matter how smart it is, cannot be wiser than the people teaching it.
But there is also hope. Because these differences are not obstacles, but an opportunity to learn from one another. Western developers can understand the value of context and harmony. Eastern engineers can see the power of transparency and autonomy. And, perhaps, precisely in this dialogue, in the collision of cultures, something new will be born. Not a universal ethics, but one that is richer, more sensitive, and more human.
Because in the end, AI isn’t about machines. It’s about us. About what we want to be. About what we value, what we fear, what we dream of. And if an algorithm ever learns to truly understand people, it will have to understand this too: humanity is not a single formula, but a symphony of voices, each sounding in its own way.
For now, neural networks learn from us, stumbling over our contradictions, faltering on our paradoxes. And that is probably the most human thing that can happen to them.