Published February 25, 2026

Liquid AI Releases LFM2-24B, Its Largest Language Model – And It Runs on a Regular Laptop

Liquid AI has introduced LFM2-24B – its largest language model, featuring an unconventional architecture and the ability to run both in the cloud and on local devices.


Most of today's powerful language models are hosted in the cloud: they run on servers, consume vast resources, and require a constant internet connection. Liquid AI chose a different route: to create a model that is both powerful and efficient enough to run directly on a user's device.

The result of this effort is LFM2-24B, a new language model with 24 billion parameters. It is the largest model in Liquid AI's LFM2 lineup and is already available through the company's cloud service partners, with a local version for what are known as "AI PCs": personal computers with hardware support for artificial intelligence tasks.

What is LFM2 and Why It's Not Just Another Transformer

If you follow AI news even a little, you've probably heard the word "transformer" – the architecture behind most modern language models, including GPT, Claude, and many others. Liquid AI has taken a different path: their LFM2 family of models is built on a proprietary architecture that processes information differently.

Simply put, when processing long texts, a classic transformer "keeps in mind" everything it has seen before – and this requires more and more memory as the text grows. Liquid's architecture is designed differently: it stores context more compactly, allowing it to work faster and with fewer resources – without a significant loss in quality.

This is precisely why a model with 24 billion parameters can run not only in data centers but also on modern consumer devices.
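
To make the difference tangible, here is a rough back-of-the-envelope sketch in Python. The layer count, head dimensions, and state size below are illustrative assumptions, not published LFM2 specifications; the point is only the shape of the curves: a transformer's key-value cache grows linearly with context length, while a fixed-size state does not.

```python
# Back-of-the-envelope comparison: attention KV cache vs. a fixed-size
# state. All dimensions are illustrative assumptions, NOT published
# LFM2 specifications.

BYTES_PER_VALUE = 2   # fp16
N_LAYERS = 40         # hypothetical layer count
N_KV_HEADS = 8        # hypothetical number of key/value heads
HEAD_DIM = 128        # hypothetical per-head dimension

def transformer_kv_cache_bytes(context_len: int) -> int:
    """A transformer stores K and V vectors for every past token."""
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * context_len * BYTES_PER_VALUE

def fixed_state_bytes(state_size: int = 4096) -> int:
    """A recurrent-style layer keeps one fixed-size state per layer,
    no matter how long the input gets."""
    return N_LAYERS * state_size * BYTES_PER_VALUE

for ctx in (1_000, 32_000, 128_000):
    kv_mib = transformer_kv_cache_bytes(ctx) / 2**20
    st_mib = fixed_state_bytes() / 2**20
    print(f"context {ctx:>7}: KV cache ~{kv_mib:8.1f} MiB | fixed state ~{st_mib:.2f} MiB")
```

The absolute numbers are made up; what matters is that the left column grows with the input while the right one stays flat.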

Its Little Sister: LFM2-A2B

Along with LFM2-24B, the company introduced another model – LFM2-A2B. This is a so-called Mixture-of-Experts model with 2 billion active parameters. It sounds complex, but the idea is simple: instead of using the model's full "power" for every request, it activates only the necessary part. This makes the model very fast and lightweight, while remaining quite competitive in quality with heavier counterparts.
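
For readers who want to see what "activating only the necessary part" means mechanically, below is a minimal sketch of top-k expert routing, the standard mechanism behind Mixture-of-Experts models in general. This is not Liquid AI's implementation; the expert count, dimensions, and routing details are invented for illustration.

```python
import numpy as np

# Minimal top-k Mixture-of-Experts routing sketch. The expert count,
# k, and dimensions are invented for illustration and are NOT
# LFM2-A2B's actual configuration.
rng = np.random.default_rng(0)

N_EXPERTS, TOP_K, DIM = 8, 2, 16
experts = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(N_EXPERTS)]
router = rng.standard_normal((DIM, N_EXPERTS)) * 0.1

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector through only TOP_K of N_EXPERTS experts."""
    logits = x @ router                     # score every expert
    chosen = np.argsort(logits)[-TOP_K:]    # keep the k best-scoring ones
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                # softmax over the chosen experts
    # Only TOP_K matrix multiplies actually run; the other experts sit
    # idle, which is why "active" parameters << total parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

print(moe_layer(rng.standard_normal(DIM)).shape)  # -> (16,)
```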

LFM2-A2B is primarily aimed at scenarios where speed and low resource consumption are critical: built-in assistants, mobile applications, and tools that run directly on the device without contacting a server.

From the Cloud to the Laptop – Literally

One of Liquid AI's key claims for the LFM2-24B launch is the model's readiness to run on AI PCs. This refers to a new class of personal computers equipped with specialized chips for accelerating AI computations, devices that are now being actively promoted by Intel, AMD, and Qualcomm.

The idea is that a user can run a sufficiently powerful language model locally – without sending data to the cloud, without depending on the internet, and without paying API subscription fees. For certain tasks, especially those involving sensitive data or offline work, this can be critically important.

LFM2-24B has become the first Liquid AI model certified for the Intel AI PC platform. This isn't just a marketing label: certification means the model has been tested and optimized to run on specific hardware.
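
To give a sense of what "local" means in practice, here is a minimal sketch using llama-cpp-python, a popular tool for running quantized models on consumer hardware. The file name is a placeholder: the announcement does not say in what formats LFM2-24B weights are distributed, so treat this as a pattern rather than a recipe.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder file name: the announcement does not specify how (or
# whether) quantized LFM2-24B weights are distributed. A 24B model at
# 4-bit quantization needs roughly 12-14 GB of memory for weights alone.
llm = Llama(
    model_path="lfm2-24b-q4_k_m.gguf",  # hypothetical local file
    n_ctx=4096,                         # context window to allocate
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is an AI PC?"}]
)
print(out["choices"][0]["message"]["content"])
```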

Partners and Ecosystem

Liquid AI is betting not only on its technology but also on its partner network. LFM2-24B is available through several cloud platforms, including Cloudflare Workers AI and OctoAI. This means developers can connect to the model using familiar tools without deploying their own infrastructure.
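
As an illustration of the "familiar tools" point, calling a model hosted on Cloudflare Workers AI is a single authenticated HTTP request. The model identifier below is hypothetical; check the Workers AI catalog for the name under which LFM2-24B is actually listed.

```python
import os
import requests  # pip install requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]

# Hypothetical model identifier; the real catalog name for LFM2-24B
# may differ.
MODEL = "@cf/liquid/lfm2-24b"

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"messages": [{"role": "user", "content": "Hello from Workers AI"}]},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["result"]["response"])
```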

In parallel, the company is working with hardware manufacturers – particularly Intel – to ensure its models run correctly and efficiently on consumer devices. This «cloud plus local device» approach is becoming increasingly common in the industry, as it allows for flexibility in choosing where to run the model – depending on the task, privacy requirements, and available resources.

How Well Does It Work?

According to Liquid AI itself, LFM2-24B shows results comparable to significantly larger models – specifically, the company compares it to models in the 70–72 billion parameter range on a number of tasks. This is a rather bold claim, and such comparisons should always be taken with caution: benchmarks are not a universal tool, and a model's real-world usefulness depends on the specific use case.

Nevertheless, the very idea of creating a model that "weighs" three times less but is still on par with its heavier counterparts is precisely the direction the industry is heading. Efficiency is becoming a metric that is just as important as absolute quality.
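
A quick weight-only arithmetic check shows what "three times less" buys in practice. The precision levels below are common community choices, not figures from Liquid AI:

```python
# Weight-only memory estimate: parameter count x bytes per parameter.
# The precision levels are common community choices, not official figures.
for name, params in [("LFM2-24B", 24e9), ("a 72B model", 72e9)]:
    for label, bytes_per_param in [("fp16", 2.0), ("4-bit", 0.5)]:
        print(f"{name:>12} @ {label}: ~{params * bytes_per_param / 1e9:.0f} GB of weights")
```

At 4-bit quantization the gap is roughly 12 GB versus 36 GB of weights, which is the difference between fitting on a well-equipped laptop and needing workstation-class hardware.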

What This Means in Practice

Setting aside the technical details, the essence of what's happening is quite simple: Liquid AI is offering a language model that is powerful enough for serious tasks and lightweight enough to run directly on your computer – without the cloud and without huge computational costs.

For the average user, this is still more of a prospect than a daily reality: AI PCs are just beginning to enter the mass market, and running a 24-billion-parameter model on a laptop is still not a task for just any hardware. But the direction is clear: AI tools are gradually moving closer to the user – onto the device, rather than into the data center.

For developers and companies already working with language models, the arrival of LFM2-24B is another option in their arsenal: an efficient, well-documented model with options for both cloud and local deployment.

Original Title: From Cloud to AI PC: Launching Liquid's Largest LFM2 Model Alongside Our Growing Partner Ecosystem
Publication Date: Feb 24, 2026
Source: Liquid (www.liquid.ai), a U.S.-based AI company researching alternative neural architectures and adaptive models.

From Source to Analysis

How This Text Was Created

This material is not a direct retelling of the original publication. First, the news item was selected as an event important for understanding AI development. Then a processing framework was set: what needed clarification, what context to add, and where to place emphasis. This allowed a single announcement to be turned into a coherent and meaningful analysis.

Neural Networks Involved in the Process

We openly show which models were used at different stages of processing. Each performed its own role — analyzing the source, rewriting, fact-checking, and visual interpretation. This approach maintains transparency and clearly demonstrates how technologies participated in creating the material.

1. Analyzing the Original Publication and Writing the Text
Claude Sonnet 4.6 (Anthropic). The neural network studies the original material and generates a coherent text.

2. Translation into English
Gemini 2.5 Pro (Google DeepMind). The draft is translated into English.

3. Text Review and Editing
Gemini 2.5 Flash (Google DeepMind). Correction of errors, inaccuracies, and ambiguous phrasing.

4. Preparing the Illustration Description
DeepSeek-V3.2 (DeepSeek). Generating a textual prompt for the visual model.

5. Creating the Illustration
FLUX.2 Pro (Black Forest Labs). Generating an image based on the prepared prompt.
