Published on March 20, 2026


Tracy: A New Library for Understanding the Inner Workings of AI Applications

JetBrains has introduced Tracy, an open-source library for Kotlin developers that helps monitor the behavior of AI applications under real-world operating conditions, offering insights into their performance and issues.

Source: JetBrains AI · 4–5 min read

When an AI application starts acting strangely – responding slowly, giving unexpected results, or breaking down at the worst possible moment – a developer needs a way to figure out what exactly went wrong. Not to guess, not to assume, but to see. This is precisely why JetBrains has released Tracy – an open-source library for the Kotlin programming language.


What Is This “Observability” Beast?

In the development world, the word “observability” means a system's ability to report on itself: what it did, how much time it spent, where it slowed down, and where it stumbled.

For regular applications, such tools have existed for a long time. But with AI applications, things are a bit more complicated. They involve calls to language models, invocations of external tools, and multi-step action chains – and all of this needs to be tracked together, not separately.

Simply put: if your AI application is a kitchen, then observability is the ability to see what's happening at each workstation, how long each cooking step took, and exactly where the dish got burned.
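To make the kitchen analogy concrete, here is a minimal, self-contained Kotlin sketch of the idea behind observability: time each stage separately instead of only the whole, so the bottleneck is visible in the data. This is a generic illustration, not Tracy's API; the step names and sleeps are stand-ins for real work.

```kotlin
import kotlin.system.measureTimeMillis

// A toy "kitchen": each step reports how long it took, so a slow
// stage shows up in the data instead of hiding inside one opaque total.
fun prepareDish(): Map<String, Long> {
    val timings = mutableMapOf<String, Long>()
    timings["chop"] = measureTimeMillis { Thread.sleep(5) }   // stand-in for real work
    timings["cook"] = measureTimeMillis { Thread.sleep(10) }
    timings["plate"] = measureTimeMillis { Thread.sleep(2) }
    return timings
}

fun main() {
    val timings = prepareDish()
    timings.forEach { (step, ms) -> println("$step took $ms ms") }
    val slowest = timings.maxByOrNull { it.value }
    println("bottleneck: ${slowest?.key}")
}
```

With per-step timings like these, "why is it so slow?" becomes a lookup rather than a guess, which is the same question Tracy answers at the level of model and tool calls.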


Who Needs It and Why?

Tracy is aimed at developers building AI-powered applications in Kotlin. In short, the library helps answer three basic questions:

  • Why did it break? Tracy records what exactly was happening at the moment of the error – which steps were executed, what was passed to the model, and what it returned.
  • Why is it so slow? The library measures execution time – both for the entire process and for its individual parts. This helps identify bottlenecks.
  • How much does it cost? Tracy tracks the usage of language models: how many requests were sent and how much data was processed. This is especially important for those who pay for each model call.

Moreover, Tracy can work not only with AI model calls but also with “tools” – external functions that the model can use during its operation – as well as with the application's own arbitrary logic. This means you get a complete picture, not a fragmented one.
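The article doesn't show Tracy's actual API, but the three questions above map onto a well-known pattern: wrap each model call, tool call, or custom step in a "span" that records its input, output, duration, and any error. The sketch below is hypothetical Kotlin illustrating that pattern; every name in it (`Span`, `Trace`, `step`) is invented for this example.

```kotlin
// Hypothetical sketch of span-based instrumentation -- NOT Tracy's real API.
// Each step records what went in, what came out, how long it took,
// and whether it failed, so an error can be reconstructed from data.
data class Span(
    val name: String,
    val input: String,
    var output: String? = null,
    var durationMs: Long = 0,
    var error: String? = null,
)

class Trace {
    val spans = mutableListOf<Span>()

    // Runs a step, records its outcome, and rethrows any failure.
    fun <T> step(name: String, input: String, block: () -> T): T {
        val span = Span(name, input)
        val start = System.nanoTime()
        try {
            val result = block()
            span.output = result.toString()
            return result
        } catch (e: Exception) {
            span.error = e.message
            throw e
        } finally {
            span.durationMs = (System.nanoTime() - start) / 1_000_000
            spans += span
        }
    }
}

fun main() {
    val trace = Trace()
    val answer = trace.step("llm.call", input = "What is 2 + 2?") {
        "4" // stand-in for a real model response
    }
    println("answer: $answer")
    trace.spans.forEach { println("${it.name}: ${it.durationMs} ms, error=${it.error}") }
}
```

Because model calls, tool calls, and arbitrary application logic all go through the same wrapper, the resulting trace is one connected picture rather than fragments, which is the "complete picture" the article describes.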


First Results in Minutes

One of the key points emphasized in Tracy's description is its ease of integration. According to the authors, you can add the library to a project and get your first data within minutes. This is important: the less effort it takes to start observing the system's behavior, the more likely a developer is to actually do it instead of putting it off “for later.”

The library is designed for use in real, live applications – not just in a test environment, but also in production, where the stakes and the cost of an error are higher.


Open Source – More Than Just Words

Tracy is distributed as an open-source project. This means that anyone can look at how it's built, suggest improvements, or adapt it to their own needs. For a tool that integrates into a critical part of an application – the AI logic – this transparency is significant.

Openness also lowers the barrier to trust: a developer can verify that the library does exactly what it claims to do, and nothing more.


Why Now?

AI applications are no longer a novelty. They are being launched into production, relied upon by real users, and held accountable for results – just like any other service. Meanwhile, the observability infrastructure for AI components has long lagged behind the capabilities of AI itself.

As long as an application is an experiment, you can get by with logs and intuition. When it's operating in real-world conditions and needs to be reliable, you need more serious tools. Tracy is an attempt to fill this gap for the Kotlin ecosystem.

Kotlin is widely used in Android and server-side development, and the presence of such a tool in its ecosystem is a logical step as AI functionality becomes a part of an ever-increasing number of applications.


The Takeaway

Tracy doesn't change how language models work, nor does it make AI “smarter.” It does something else: it gives developers the ability to see what's happening – in real time and with the necessary level of detail.

For those who build AI applications in Kotlin and want to understand their behavior based on data, not guesswork, this could prove to be a useful addition to their toolkit.

Original Title: Introducing Tracy: The AI Observability Library for Kotlin
Publication Date: Mar 11, 2026
Source: the JetBrains AI blog (blog.jetbrains.com). JetBrains is a Czech company developing AI tools for software developers, integrated into JetBrains IDEs.


From Source to Analysis

How This Text Was Created

This material is not a direct retelling of the original publication. First, the news item itself was selected as an event important for understanding AI development. Then a processing framework was set: what needs clarification, what context to add, and where to place emphasis. This allowed us to turn a single announcement or update into a coherent and meaningful analysis.

Neural Networks Involved in the Process

We openly show which models were used at different stages of processing. Each performed its own role — analyzing the source, rewriting, fact-checking, and visual interpretation. This approach maintains transparency and clearly demonstrates how technologies participated in creating the material.

1. Claude Sonnet 4.6 (Anthropic) – Analyzing the original publication and writing the text: the neural network studies the original material and generates a coherent text.

2. Gemini 2.5 Pro (Google DeepMind) – Translation into English.

3. Gemini 2.5 Flash (Google DeepMind) – Text review and editing: correction of errors, inaccuracies, and ambiguous phrasing.

4. DeepSeek-V3.2 (DeepSeek) – Preparing the illustration description: generating a textual prompt for the visual model.

5. FLUX.2 Pro (Black Forest Labs) – Creating the illustration: generating an image based on the prepared prompt.
