Most powerful language models are hosted in the cloud: they run on servers, consume vast resources, and require a constant internet connection. But Liquid AI decided to take a slightly different path – to create a model that is both powerful and efficient enough to run directly on a user's device.
The result of this effort is LFM2-24B, a new language model with 24 billion parameters. It is the largest model in Liquid AI's LFM2 lineup and is already available through the company's cloud service partners, with a local version aimed at "AI PCs" – personal computers with hardware support for artificial intelligence tasks.
What is LFM2 and Why It's Not Just Another Transformer
If you follow AI news even a little, you've probably heard the word "transformer" – the architecture behind most modern language models, including GPT, Claude, and many others. Liquid AI has taken a different path: their LFM2 family of models is built on a proprietary architecture that processes information differently.
Simply put, when processing long texts, a classic transformer "keeps in mind" everything it has seen before – and this requires more and more memory as the text grows. Liquid's architecture is designed differently: it stores context more compactly, allowing it to work faster and with fewer resources – without a significant loss in quality.
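A back-of-the-envelope sketch illustrates the contrast. The numbers below are illustrative placeholders, not Liquid AI's actual internals: a transformer's key/value cache grows linearly with context length, while a recurrent- or convolution-style block keeps a state of constant size.

```python
def kv_cache_bytes(seq_len, n_layers=40, n_heads=32, head_dim=128, bytes_per_value=2):
    # Transformer: keys and values for every past token, in every layer
    return seq_len * n_layers * 2 * n_heads * head_dim * bytes_per_value

def fixed_state_bytes(n_layers=40, state_dim=8192, bytes_per_value=2):
    # Recurrent/convolution-style block: the state stays the same size
    # no matter how long the context gets
    return n_layers * state_dim * bytes_per_value

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens: cache {kv_cache_bytes(n):>14,} B "
          f"vs fixed state {fixed_state_bytes():,} B")
```

Doubling the context doubles the transformer's cache, while the fixed state does not move – which is exactly the property that makes long contexts cheap on memory-constrained consumer hardware.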
This is precisely why a model with 24 billion parameters can run not only in data centers but also on modern consumer devices.
Its Little Sister: LFM2-A2B
Along with LFM2-24B, the company introduced another model, LFM2-A2B. This is a so-called Mixture-of-Experts model with 2 billion active parameters. It sounds complex, but the idea is simple: instead of engaging the model's full capacity for every request, it activates only the experts needed for that request. This makes the model very fast and lightweight while remaining competitive in quality with heavier counterparts.
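The routing idea can be sketched in a few lines. This is a generic top-k Mixture-of-Experts toy, not LFM2-A2B's actual architecture, and all sizes below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS, D_MODEL, TOP_K = 8, 64, 2   # toy sizes, not LFM2-A2B's real config

gate_w = rng.standard_normal((D_MODEL, N_EXPERTS))
experts = rng.standard_normal((N_EXPERTS, D_MODEL, D_MODEL))

def moe_forward(x):
    """Route one token vector through only TOP_K of the N_EXPERTS experts."""
    scores = x @ gate_w                      # one gating score per expert
    chosen = np.argsort(scores)[-TOP_K:]     # indices of the best-scoring experts
    w = np.exp(scores[chosen] - scores[chosen].max())
    w /= w.sum()                             # softmax over the chosen experts only
    # Only the chosen experts' weight matrices are ever touched:
    # the other experts cost nothing for this token
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, chosen)), chosen

y, used = moe_forward(rng.standard_normal(D_MODEL))
print(f"{len(used)} of {N_EXPERTS} experts used for this token")
```

The model's total parameter count covers all experts, but each token pays only for the few that the gate selects – which is why "2 billion active parameters" can be much cheaper at inference time than the headline size suggests.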
LFM2-A2B is primarily aimed at scenarios where speed and low resource consumption are critical: built-in assistants, mobile applications, and tools that run directly on the device without contacting a server.
From the Cloud to the Laptop – Literally
One of Liquid AI's key selling points for LFM2-24B is its readiness to run on AI PCs – a new class of personal computers equipped with specialized chips for accelerating AI computations, now being actively promoted by Intel, AMD, and Qualcomm.
The idea is that a user can run a sufficiently powerful language model locally – without sending data to the cloud, without depending on the internet, and without paying API subscription fees. For certain tasks, especially those involving sensitive data or offline work, this can be critically important.
LFM2-24B has become the first Liquid AI model certified for the Intel AI PC platform. This isn't just a marketing label: certification means the model has been tested and optimized to run on specific hardware.
Partners and Ecosystem
Liquid AI is betting not only on its technology but also on its partner network. LFM2-24B is available through several cloud platforms, including Cloudflare Workers AI and OctoAI. This means developers can connect to the model using familiar tools without deploying their own infrastructure.
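As an illustration of what "familiar tools" means, a request to such a model over Cloudflare Workers AI's REST API follows the pattern below. The model slug `@cf/liquidai/lfm2-24b` is a guess for illustration only (check Cloudflare's model catalog for the real identifier), and the account ID and token are placeholders; the sketch builds the request without sending it.

```python
import json
import urllib.request

ACCOUNT_ID = "your-account-id"       # placeholder
API_TOKEN = "your-api-token"         # placeholder
MODEL = "@cf/liquidai/lfm2-24b"      # assumed slug – verify in Cloudflare's catalog

def build_request(prompt):
    """Build (but do not send) a Workers AI chat-completion request."""
    url = (f"https://api.cloudflare.com/client/v4/accounts/"
           f"{ACCOUNT_ID}/ai/run/{MODEL}")
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]})
    return urllib.request.Request(
        url,
        data=body.encode(),
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
    )

req = build_request("Summarize what an AI PC is in one sentence.")
print(req.full_url)
```

Swapping the slug is all it takes to move between models on the same platform, which is the practical upside of distribution through cloud partners.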
In parallel, the company is working with hardware manufacturers – particularly Intel – to ensure its models run correctly and efficiently on consumer devices. This "cloud plus local device" approach is becoming increasingly common in the industry, as it allows for flexibility in choosing where to run the model – depending on the task, privacy requirements, and available resources.
How Well Does It Work?
According to Liquid AI itself, LFM2-24B shows results comparable to significantly larger models – specifically, the company compares it to models in the 70–72 billion parameter range on a number of tasks. This is a rather bold claim, and such comparisons should always be taken with caution: benchmarks are not a universal tool, and a model's real-world usefulness depends on the specific use case.
Nevertheless, the very idea of a model roughly a third the size that still keeps pace with its heavier counterparts is precisely the direction the industry is heading. Efficiency is becoming a metric just as important as absolute quality.
What This Means in Practice
Setting aside the technical details, the essence of what's happening is quite simple: Liquid AI is offering a language model that is powerful enough for serious tasks and lightweight enough to run directly on your computer – without the cloud and without huge computational costs.
For the average user, this is still more of a prospect than a daily reality: AI PCs are just beginning to enter the mass market, and running a 24-billion-parameter model on a laptop is still not a task for just any hardware. But the direction is clear: AI tools are gradually moving closer to the user – onto the device, rather than into the data center.
For developers and companies already working with language models, the arrival of LFM2-24B is another option in their arsenal: an efficient, well-documented model with options for both cloud and local deployment.