Liquid AI Launches New Language Model Series

2024-10-01

Liquid AI has introduced a new suite of language models called Liquid Foundation Models (LFMs), positioning them as an alternative to the dominant Transformer paradigm. According to the company, LFMs deliver top-tier performance while using less memory and running inference more efficiently than conventional Transformer-based architectures.

The lineup comprises three variants: LFM-1B, LFM-3B, and LFM-40B, each targeting a different deployment scenario. LFM-1B, with 1.3 billion parameters, sets a new standard in its size class; Liquid AI says it is the first non-GPT architecture to significantly outperform Transformer-based models of similar size across several public benchmarks.

LFM-3B, with 3.1 billion parameters, is designed for edge deployment and is particularly suited to mobile applications. According to Liquid AI, it not only outperforms other 3B-parameter models but also surpasses certain 7B and 13B models on some benchmarks. Its performance rivals Microsoft's Phi-3.5-mini while being 18.4% smaller.

In the high-end segment, LFM-40B uses a Mixture of Experts (MoE) architecture: of its 40.3 billion total parameters, only about 12 billion are activated for any given input. Liquid AI states that this design delivers performance comparable to larger dense models while achieving higher throughput on more cost-effective hardware.
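Liquid AI has not published the internals of its MoE layers, but the general idea behind activating only a fraction of a model's parameters can be sketched with a toy gated mixture-of-experts layer. Everything here (dimensions, gating function, `top_k` routing) is a generic illustration, not LFM-40B's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_layer(x, experts, gate_w, top_k=2):
    """Route input x to the top_k highest-scoring experts and combine
    their outputs, weighted by renormalized gate scores. Only top_k
    expert weight matrices are touched per token, which is why an MoE
    model activates far fewer parameters than it stores."""
    scores = softmax(gate_w @ x)               # one score per expert
    top = np.argsort(scores)[-top_k:]          # indices of chosen experts
    weights = scores[top] / scores[top].sum()  # renormalize over chosen
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

d = 8                                          # toy hidden size
n_experts = 4
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate_w = rng.standard_normal((n_experts, d))

y = moe_layer(rng.standard_normal(d), experts, gate_w)
print(y.shape)  # (8,)
```

With 4 experts and `top_k=2`, only half the expert parameters participate in each forward pass; the stored-versus-activated gap widens as more experts are added.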

A key distinction of LFMs lies in how they handle long inputs. In Transformer models, memory usage grows linearly with input length as the key-value cache accumulates; LFMs, by contrast, maintain a far more stable memory footprint. This lets them process longer sequences on the same hardware, and the company claims the models support an optimized context length of 32,000 tokens.
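The memory contrast can be made concrete with a back-of-envelope calculation: a Transformer's key-value cache stores one K and one V vector per token, per layer, per KV head, while a fixed-size recurrent state does not grow with the sequence at all. The dimensions below are illustrative assumptions for a ~3B-parameter model, not LFM-3B's published configuration:

```python
def kv_cache_bytes(seq_len, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    """Transformer key-value cache size: 2 vectors (K and V) per token,
    per layer, per KV head. Grows linearly with seq_len."""
    return 2 * seq_len * n_layers * n_kv_heads * head_dim * bytes_per_elem

def fixed_state_bytes(n_layers, state_dim, bytes_per_elem=2):
    """Size of a constant recurrent/state-space hidden state.
    Independent of sequence length."""
    return n_layers * state_dim * bytes_per_elem

# Assumed toy configuration: 32 layers, 8 KV heads, head_dim 128, fp16.
for seq_len in (1_000, 32_000):
    kv = kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8, head_dim=128)
    print(f"{seq_len:>6} tokens: KV cache ~ {kv / 2**20:.0f} MiB")

print(f"fixed state ~ {fixed_state_bytes(32, state_dim=4096) / 2**10:.0f} KiB")
```

At 32,000 tokens the cache is 32 times larger than at 1,000 tokens (roughly 4 GiB versus 125 MiB under these assumptions), while the fixed state stays a few hundred KiB regardless. That gap is the efficiency argument the article describes.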

Liquid AI's approach diverges from mainstream Transformer architectures, instead building upon principles from dynamical systems, signal processing, and numerical linear algebra. The company asserts that this foundation enables LFMs to leverage decades of theoretical advancements in these fields.
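The company has not disclosed its exact formulation, but the dynamical-systems flavor it describes is commonly expressed as a linear state-space recurrence, h_t = A·h_{t-1} + B·u_t with output y_t = C·h_t. The minimal sketch below is a generic example of that family, chosen to show the fixed-size hidden state; the matrices and dimensions are arbitrary toy values, not anything from LFMs:

```python
import numpy as np

rng = np.random.default_rng(1)

def ssm_scan(A, B, C, inputs):
    """Run a linear state-space recurrence over a sequence:
        h_t = A @ h_{t-1} + B @ u_t
        y_t = C @ h_t
    The hidden state h has a fixed size, so memory does not grow
    with sequence length."""
    h = np.zeros(A.shape[0])
    outputs = []
    for u in inputs:
        h = A @ h + B @ u
        outputs.append(C @ h)
    return np.array(outputs)

state, d_in, d_out = 16, 4, 4
A = 0.9 * np.eye(state)                         # stable toy dynamics
B = rng.standard_normal((state, d_in)) * 0.1
C = rng.standard_normal((d_out, state)) * 0.1

ys = ssm_scan(A, B, C, rng.standard_normal((100, d_in)))
print(ys.shape)  # (100, 4)
```

Because A, B, and C come straight from linear systems theory, tools like stability analysis and frequency-domain reasoning apply directly, which is the "decades of theoretical advancements" the company alludes to.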

Despite these capabilities, Liquid AI acknowledges the current limitations of their models, including challenges in handling zero-shot code tasks, performing precise numerical computations, and managing time-sensitive information.

Users interested in experimenting with LFMs can access them through platforms such as Liquid Playground, Lambda (including chat interfaces and APIs), and Perplexity Labs. Additionally, Cerebras Inference will soon support these models. Liquid AI is also optimizing the LFM stack for hardware from companies like NVIDIA, AMD, Qualcomm, Cerebras, and Apple, potentially expanding their availability across diverse computing environments.

Still, despite strong benchmark results and genuine architectural innovation, the technology remains in its early stages. The models show significant potential on paper, but their practical effectiveness and scalability have yet to be thoroughly validated. As the field continues to explore alternatives to the Transformer, LFMs are undoubtedly a noteworthy development, but their true impact on the AI landscape will take time and real-world use to assess.