Discussions about ethics in artificial intelligence often remain abstract. Companies declare principles and publish eloquent statements about "safe" and "responsible" AI – and that's often where it ends. This makes it all the more interesting when a company decides not just to state its values but to report on what has actually been done. This is precisely what LG AI Research aimed to do by releasing its second AI ethics report, the "2025 AI Ethics Accountability Report".
Why is such a report needed?
Simply put, it's a document where the company explains what it means by ethical AI, what specific steps have been taken, and where gaps still exist. It's not a marketing piece or an academic paper – rather, it's an attempt to make internal work visible to an external observer.
This is a follow-up to the first edition, in which LG AI Research documents how its approach to the safety and trustworthiness of its AI systems has evolved as those systems develop.
Why is this important? Because today, AI systems are used in a wide variety of contexts – from business analytics to assisting with decisions that affect real people. And the broader their application, the more pressing the question becomes: can these systems be trusted? Not in the sense of "do they work technically?", but in the sense of "do they behave as we expect, without causing harm?"
The Three Pillars: Safety, Transparency, and Accountability
The report is built upon several key areas that LG AI Research paid special attention to.
Safety is about ensuring models behave predictably and do not generate harmful content. This sounds simple, but in practice, it requires serious methodological work: one must define in advance what counts as "harm", how to test for it, and what to do if the model still makes a mistake.
Transparency is about being open about how systems work. A user or company partner should understand what they are interacting with: is it an AI or a human? What data was the model trained on? What are its limitations? LG AI Research views transparency not as a formality, but as a prerequisite for trust.
Accountability is perhaps the most complex of the three. It means that someone bears real responsibility for the behavior of AI systems. This requires not just internal regulations but also mechanisms for tracking where something went wrong and who should fix it.
Ethics is a Process, Not a Document
One of the key messages of the report is that AI ethics cannot be "implemented" once and then forgotten. It is a living process that requires constant review as technologies and their contexts of use change.
LG AI Research describes how ethical principles are integrated into the development cycle itself – not as a final check before release, but as part of the work at every stage: from defining the task to evaluating the results. In short: ethics begins not when the model is already complete, but when the team is just defining what it wants to build.
This approach requires teams to have not only technical skills but also the ability to ask uncomfortable questions: "Who might be harmed by this decision?", "What groups of people are underrepresented in our data?", "What will happen if the system makes a mistake at a critical moment?"
Honesty About Limitations
Particularly noteworthy is that the report not only lists achievements but also acknowledges limitations. This in itself is not trivial – most corporate publications on the topic of responsible AI focus on successes.
LG AI Research admits that a number of challenges remain open. For example, assessing how "honest" a model is in its responses remains a methodologically complex task. Another is ensuring the consistent application of ethical principles across different products and teams within a large organization.
This is an important detail: acknowledging incompleteness is not a weakness, but a sign of a mature approach. A company that pretends to have everything figured out inspires less trust than one that honestly says, "Here's what we've done, and here's what isn't working yet."
Context: Why Now?
The publication of such reports is part of a broader industry trend. As AI systems become more complex and influential, the demand for accountability from users, regulators, and society as a whole is also growing.
Regulatory frameworks for AI are being actively developed in various parts of the world. And companies that are already building internal practices for responsible development and know how to communicate them are in a more advantageous position – both in terms of reputation and readiness for future requirements.
But it's not just about regulators. Trust in AI systems among ordinary people – those who use them for work, study, or daily life – is largely determined by how transparently the companies that create these systems behave. And in this regard, public reporting plays a role that is hard to overstate.
What This Means for the Industry
LG AI Research's report is not the only one of its kind, but it fits into a small but growing practice of corporate transparency in AI. When major players begin to publicly document not only their intentions but also their actual practices – including what isn't working yet – it gradually establishes a new norm for the entire industry.
In short: this isn't just a corporate document. It's a signal that AI safety and trust are not optional add-ons, but part of the fundamental engineering and organizational culture. And the more companies move in this direction, the higher the chance that the AI systems of the future will be not only smarter but also more trustworthy. 🤝