Published January 21, 2026

Anthropic Claude Constitution: Users Now Shape AI Behavior

Anthropic Rewrote Claude's "Constitution": Ordinary People Helped Draft It

Anthropic has updated the rulebook for Claude and, for the first time, involved thousands of users from around the world in its creation instead of a small team of developers.

Society
Source: Anthropic | Reading time: 4–6 minutes

Anthropic has released an updated version of what it calls "Claude Constitutional AI" – a set of principles the model uses to decide how to behave in complex situations. In short: these rules used to be drafted by a small team inside the company; now, for the first time, ordinary users have been included in the process.

What Is an AI "Constitution" and Why Is It Needed 📜

When you chat with Claude, the model is constantly making decisions: whether to answer or refuse, how to phrase the response, and which topics to consider acceptable. These decisions rely on a built-in set of rules – a sort of "constitution".

Previously, this constitution was compiled by the Anthropic team. They drew on principles from UN human rights documents, Apple's approach to privacy, AI ethics research, and other sources. The result was a text that defined the model's behavior, but it was written by a few dozen people within the company.

The problem is obvious: AI is used by millions of people in different countries, with diverse cultures and expectations. What seems reasonable to a development team in San Francisco might not align with what users in Brazil, Japan, or Germany want to see.


How the User Participation Experiment Worked

Anthropic launched a process called "Collective Constitutional AI". The idea is simple: invite thousands of people from all over the world to have their say on what rules should determine Claude's behavior.

Participants were presented with specific situations – for example, how the model should respond to a controversial question, or what to do if a request touches on a sensitive topic. People voted, discussed, and suggested phrasing. In total, about a million responses were gathered.

These responses were then processed and distilled into an updated constitution, which now reflects not only the views of the Anthropic team but also the opinions of real users from different regions.
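The processing pipeline itself is not public, but the core idea (turning many individual votes on candidate statements into a ranked set of principles) can be sketched roughly. Everything below is a hypothetical illustration, not Anthropic's actual method: the statements, vote counts, and the simple approval-rate ranking are all invented for the example.

```python
from dataclasses import dataclass


@dataclass
class CandidateStatement:
    """A proposed behavioral rule plus the votes it received."""
    text: str
    agree: int
    disagree: int

    @property
    def approval(self) -> float:
        """Fraction of voters who agreed with the statement."""
        total = self.agree + self.disagree
        return self.agree / total if total else 0.0


def rank_statements(statements, min_votes=100):
    """Keep statements with enough votes, ordered by approval rate."""
    eligible = [s for s in statements if s.agree + s.disagree >= min_votes]
    return sorted(eligible, key=lambda s: s.approval, reverse=True)


# Invented example data (not from the real experiment):
candidates = [
    CandidateStatement("Acknowledge uncertainty instead of refusing silently", 870, 130),
    CandidateStatement("Adapt tone to the user's cultural context", 640, 360),
    CandidateStatement("Refuse any question about sensitive topics", 210, 790),
]

# Highest-approval statements come first.
for s in rank_statements(candidates):
    print(f"{s.approval:.0%}  {s.text}")
```

A real pipeline would be far more involved (deduplicating free-text suggestions, weighting for demographic balance, resolving conflicting clusters of opinion), but the basic shape – candidate statements in, a prioritized rule set out – is the same.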


What Changed in the New Version

Anthropic does not publish the full text of the constitution – it is an internal document. But they did share the key changes:

  • The model now takes greater account of cultural differences. For example, what is considered polite in one country might seem strange in another – Claude now tries to account for this.
  • Clearer rules have appeared for situations with no single right answer. Previously, the model could be too cautious and refuse where it could have been helpful; the balance has now shifted slightly toward helpfulness.
  • Transparency has been strengthened. If the model isn't sure, or the topic requires caution, it should say so rather than silently refusing.

An important point: the changes do not mean that Claude has become less safe. It is about the model better understanding context and being able to adapt to different requests without losing its core limitations.


Why This Matters for the Industry

Anthropic's approach shows one of the possible trajectories for AI development. Most companies still decide how models should behave within their own teams. This is faster and simpler, but creates an obvious problem: a small group of people defines the rules for technology used by millions.

"Collective Constitutional AI" is an attempt to make the process more open. Of course, this isn't direct democracy: Anthropic still controls exactly how user opinions turn into rules. But the very fact that the company is ready to ask and consider answers is already a step toward greater transparency.

Other companies are in no hurry to replicate this approach yet. OpenAI and Google use internal processes to tune models, sometimes bringing in external experts, but they don't conduct mass user polls. Perhaps Anthropic is testing a model that could become the standard – or conversely, will show why such an approach is too difficult to scale.


What Remains in Question

Despite the openness of the experiment, many details remain undisclosed. We don't know exactly how participants were selected: was the sample random, or did the company deliberately balance it by country, age, and profession? It is unclear how conflicting opinions were weighted: if one group wants more freedom and another wants more restrictions, who wins?

It is also unclear how often Anthropic plans to update the constitution. If this is a one-off experiment, the new version will quickly become outdated; if it is a regular process, that means a substantial ongoing burden for both the team and the participants.

And finally, the main question: how much will the new constitution change Claude's real behavior? Will users notice the difference, or will it remain an internal improvement that only manifests in rare borderline cases?


What's Next

Anthropic says it will continue working in this direction. Perhaps the process will become regular, and every major Claude update will include a new round of opinion gathering.

For the rest of the industry, this is a signal: the question of who defines the rules of AI behavior is becoming increasingly important. Models are being integrated into critical processes – education, medicine, information handling. If those rules are still being written by a handful of people in a company office, in a couple of years that may look like a glaring problem.

It is not yet clear if Anthropic's approach will become the standard. But the very fact that the company is trying is interesting enough to keep an eye on.

#analysis #ethics and philosophy #ai development #ai ethics #social impact of ai #regulation #ai regulation
Original Title: Claude's new constitution
Publication Date: Jan 22, 2026
Anthropic www.anthropic.com A U.S.-based company developing large language models with a focus on AI safety and alignment.

From Source to Analysis

How This Text Was Created

This material is not a direct retelling of the original publication. First, the news item was selected as an event important for understanding AI development. Then a processing framework was defined: what needed clarification, what context to add, and where to place emphasis. This turned a single announcement into a coherent, meaningful analysis.

Neural Networks Involved in the Process

We openly show which models were used at different stages of processing. Each performed its own role — analyzing the source, rewriting, fact-checking, and visual interpretation. This approach maintains transparency and clearly demonstrates how technologies participated in creating the material.

1. Analyzing the Original Publication and Writing the Text (Claude Sonnet 4.5, Anthropic): the neural network studies the original material and generates a coherent text.
2. Translation into English (Gemini 3 Pro Preview, Google DeepMind).
3. Text Review and Editing (Gemini 2.5 Flash, Google DeepMind): correction of errors, inaccuracies, and ambiguous phrasing.
4. Preparing the Illustration Description (DeepSeek-V3.2, DeepSeek): generating a textual prompt for the visual model.
5. Creating the Illustration (FLUX.2 Pro, Black Forest Labs): generating an image based on the prepared prompt.
