Artificial intelligence is traditionally considered the domain of engineers, mathematicians, and programmers. However, as systems become more complex, questions frequently arise that technical expertise alone cannot answer: What values should be embedded in the model? Who bears responsibility for the algorithm's decisions? How do we define the boundaries of what is acceptable?
These questions took center stage at the sixth session of the 2025 AI Ethics Seminar, where Song In-young, a philosopher at Seoul National University, delivered a presentation titled "Finding the Future of AI in Philosophy."
Why Does AI Need Philosophy?
At first glance, the connection is not obvious. Philosophy deals with abstract concepts, while AI focuses on concrete tasks. But Song In-young pointed out that the development of intelligent systems increasingly encounters fundamental questions: what constitutes justice, how to define good, and whether morality can be formalized.
Simply put: when we train a model to make decisions affecting people, we inevitably embed in it a certain understanding of what is right and what is not. And this is no longer a purely technical task, but a philosophical one.
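A toy sketch can make this concrete. The loan-approval scenario, the `approve_loan` function, and the threshold values below are hypothetical, invented purely to show how a single numeric parameter in a model's decision rule encodes a value judgment rather than a neutral technical fact.

```python
# Hypothetical sketch: the decision threshold of a loan-approval model
# is not a technical constant but a value judgment about which kind of
# error matters more. All names and numbers here are invented.

def approve_loan(repayment_probability: float, threshold: float = 0.8) -> bool:
    """Approve a loan if the model's estimated repayment probability
    clears the threshold.

    The threshold itself encodes an ethical stance:
      - a high threshold protects the lender but rejects more
        creditworthy applicants (more false rejections);
      - a low threshold admits more applicants but raises the
        lender's risk (more false approvals).
    Neither choice is "neutral": someone decides whose error costs more.
    """
    return repayment_probability >= threshold

# The same applicant under two different value judgments:
print(approve_loan(0.75, threshold=0.8))  # False: risk-averse policy
print(approve_loan(0.75, threshold=0.7))  # True: inclusion-oriented policy
```

The code is trivial on purpose: the interesting part is not the comparison operator but the question of who chose 0.8, on what grounds, and who answers for the applicants it turns away.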
Philosophy offers tools for comprehending such situations. It helps ask the right questions even before coding or data collection begins. This doesn't slow down development; it makes it more deliberate.
What Was Discussed at the Seminar?
Song In-young examined several key areas where a philosophical lens proves necessary:
- Value questions in system design. Every model reflects someone's ideas about what is important. Philosophy helps to express these ideas explicitly and discuss them openly.
- Problems of autonomy and control. The more decisions are delegated to algorithms, the sharper the question becomes: where is the boundary between assistance and the substitution of human choice?
- Responsibility and explainability. If a system makes a decision that leads to negative consequences, who is answerable? The developer? The company? The model itself? Philosophy offers frameworks for analyzing such situations.
- Long-term consequences. Technologies develop faster than regulatory mechanisms. Philosophy allows us to look beyond immediate profit and consider what the mass adoption of AI might lead to in decades to come.
These are not abstract musings. Each of these questions already arises in real projects, from recruitment systems to algorithms that allocate medical resources.
Why Is This Important Now?
We are at a point where AI has ceased to be an experiment and has become part of everyday infrastructure. Models take part in hiring, loan approvals, medical diagnosis, and content moderation. They shape people's lives, and they do so based on data and rules that someone, at some point, established.
If these rules are not thought through from an ethical standpoint, the consequences can be serious: discrimination based on traits the model treats as neutral, reinforcement of existing biases, and erosion of accountability for the decisions made.
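As a rough illustration of the first risk, consider a toy audit of approval rates across two groups. The decision data, the group labels, and the outcome numbers below are all invented for the sketch; the "four-fifths rule" it applies is one widely used heuristic for flagging disparate impact, not a definitive test.

```python
# Hedged illustration: a model's outputs can differ sharply across groups
# even when no protected trait appears in its inputs. The data here is
# fabricated solely to show the shape of such an audit.

from collections import defaultdict

# Hypothetical model decisions: (group, approved) pairs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, ok in decisions:
    total[group] += 1
    approved[group] += ok  # bool counts as 0 or 1

rates = {g: approved[g] / total[g] for g in total}
print(f"approval rates: {rates}")  # group_a: 0.75, group_b: 0.25

# Four-fifths rule heuristic: if one group's approval rate falls below
# 80% of another's, the outcome warrants ethical review.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, well below 0.8
```

Even this crude check poses a philosophical question the code cannot answer: which disparities count as discrimination, and which reflect differences a society considers legitimate.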
Philosophy doesn't provide ready-made answers, but it asks the right questions. And this is exactly what is often missing in a rapidly developing industry.
What's Next?
The AI Ethics Seminar is part of a broader movement attempting to integrate humanities knowledge into technology development. Song In-young is one of the voices in this dialogue, and her presentation shows that philosophy and AI can interact productively.
This is not a call to stop development or introduce rigid restrictions. Rather, it is a reminder: technologies are not neutral. They carry within them the values of those who create them. And the sooner we realize this, the better the chances of building systems that will serve people, rather than simply function efficiently.