Published February 7, 2026

SenseTime Unveils SenseNova-SI-1.3: A Model Featuring Advanced Spatial Intelligence

The Chinese firm has open-sourced an AI model that ranked first across eight spatial intelligence benchmarks.

Source: SenseTime

Chinese company SenseTime has open-sourced its SenseNova-SI-1.3 model. The system is built around what is known as "spatial intelligence": the ability to perceive a three-dimensional environment and interact with it effectively.

What Is Spatial Intelligence and Why Does It Matter?

Simply put, spatial intelligence is a model's ability to grasp where objects are located in space, how they relate to one another, and how they can be interacted with. It's not just about image recognition; it's about a deep understanding of distances, shapes, and the physical principles of the real world.

For humans, this is a natural skill: we instantly gauge distances, tell if a wardrobe will fit into a room, and navigate terrain with ease. For AI, such tasks have long been incredibly difficult because they require more than just visual analysis – they demand an understanding of context, geometry, and the laws of physics.
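The wardrobe example above can be made concrete. At its simplest, the judgment reduces to comparing box dimensions across possible orientations, a tiny, explicit fragment of the geometric reasoning a spatially intelligent model must perform implicitly from raw pixels. The dimensions below are purely illustrative:

```python
from itertools import permutations

def fits(item, space):
    """Check whether a box with dimensions `item` (w, d, h) fits inside
    `space` in some axis-aligned orientation (ignoring tilting tricks)."""
    return any(
        all(i <= s for i, s in zip(orientation, space))
        for orientation in permutations(item)
    )

# Illustrative dimensions in metres: a wardrobe vs. an alcove in a room.
wardrobe = (0.6, 1.2, 2.1)
alcove = (1.3, 0.7, 2.4)

print(fits(wardrobe, alcove))        # True: the 1.2 m side goes along the 1.3 m wall
print(fits((0.6, 1.2, 2.5), alcove)) # False: too tall for the 2.4 m alcove
```

A real spatially intelligent system never receives these numbers directly; it has to estimate them from images, which is exactly what makes the task hard for AI.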

Systems equipped with spatial intelligence are critical for the advancement of robotics, autonomous vehicles, augmented reality, architectural design, and other fields requiring precise perception of a 3D environment.

SenseNova-SI-1.3 Performance

SenseTime claims the model took first place in eight benchmarks at once, all evaluating spatial intelligence capabilities. Benchmarks are standardized sets of tasks that allow for an objective comparison between different models.

While the company hasn't disclosed detailed results or the full list of the eight tests, leading several independent metrics simultaneously points to stable performance across varied scenarios rather than simple "overfitting" to a specific test.

Availability and Open Source

SenseTime has made the model open-access, allowing developers to freely use it in their projects. This is a common practice in the AI community: companies release their solutions to accelerate global research, gather valuable feedback from professionals, and showcase their technological prowess.

The open nature of SenseNova-SI-1.3 means it can be studied in detail, adapted for niche tasks, or integrated into existing applications. This is particularly valuable for creators of robots, computer vision systems, and any projects involving spatial analysis.

Limitations and Real-World Considerations


Despite the impressive claims, it isn't yet fully clear how effectively the model will perform in the real world, outside of laboratory settings. Benchmarks are a great tool for measuring progress, but they don't always account for the complexities of "live" use: noisy data, edge cases, and limited computing resources.

Questions regarding hardware requirements also remain unanswered. If the model requires ultra-powerful servers to run, its scope of application will be significantly narrowed. Conversely, the ability to run it on standard equipment would open up massive opportunities for the technology.

Finally, we shouldn't overlook the fierce competition. Spatial intelligence is one of the hottest areas of research, with many tech giants working on similar systems. SenseNova-SI-1.3's current leadership is no guarantee it will hold its ground just a few months from now.

Significance for the Industry

The emergence of a powerful open model for spatial intelligence is a major milestone for the entire 3D data industry. If SenseNova-SI-1.3 is indeed as effective as its developer claims, it could serve as the foundation for a new generation of applications in navigation, design, and robotics.

However, for now, this is more about raw potential than a finished solution. To gauge the model's true utility, we'll need to wait for the results of its implementation in practical projects. Time will tell whether SenseTime's creation becomes an industry standard or simply remains "just another successful experiment".

Original Title: Scaling Spatial Intelligence: SenseTime Open-Sources SenseNova-SI-1.3, Ranked No.1 Overall Across Eight Spatial Intelligence Benchmarks
Publication Date: Feb 6, 2026
SenseTime (www.sensetime.com): a major Chinese AI company specializing in computer vision and intelligent systems.

From Source to Analysis

How This Text Was Created

This material is not a direct retelling of the original publication. First, the news item itself was selected as an event important for understanding AI development. Then a processing framework was set: what needs clarification, what context to add, and where to place emphasis. This allowed us to turn a single announcement or update into a coherent and meaningful analysis.

Neural Networks Involved in the Process

We openly show which models were used at different stages of processing. Each performed its own role — analyzing the source, rewriting, fact-checking, and visual interpretation. This approach maintains transparency and clearly demonstrates how technologies participated in creating the material.

1. Claude Sonnet 4.5 (Anthropic): Analyzing the original publication and writing the text. The neural network studies the original material and generates a coherent text.

2. Gemini 3 Pro (Google DeepMind): Translating the text into English.

3. Gemini 3 Flash Preview (Google DeepMind): Text review and editing. Correction of errors, inaccuracies, and ambiguous phrasing.

4. DeepSeek-V3.2 (DeepSeek): Preparing the illustration description. Generating a textual prompt for the visual model.

5. FLUX.2 Pro (Black Forest Labs): Creating the illustration. Generating an image based on the prepared prompt.
