Products

LFM2-24B-A2B: Scaling Up the LFM2 Architecture

Liquid AI releases LFM2-24B-A2B, a 24-billion-parameter sparse Mixture of Experts model with only 2B active parameters per token, enabling efficient deployment across consumer to cloud hardware.

Saturday, May 2, 2026, 12:00 PM UTC · 2 min read · Source: Hacker News · By sys://pipeline

Liquid AI has released LFM2-24B-A2B, a 24-billion-parameter sparse Mixture of Experts model that activates only 2 billion parameters per token. The model demonstrates effective scaling of the LFM2 hybrid architecture and fits in 32GB of RAM, enabling deployment across cloud, edge, and consumer hardware. The weights are openly available on Hugging Face.
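To see why the 24B-total / 2B-active split matters, here is a back-of-the-envelope sketch of the weight-memory footprint at a few storage precisions. The precision choices (fp16, int8, 4-bit) and the 32GB budget used here are illustrative assumptions for this sketch, not figures stated by Liquid AI; real usage also depends on the runtime, KV cache, and activation memory.

```python
# Rough weight-memory math for a sparse MoE model.
# Assumptions (not from the announcement): precision options below,
# and that only raw weight storage is counted.

def model_memory_gb(total_params_b: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for `total_params_b` billion
    parameters stored at `bytes_per_param` bytes each."""
    return total_params_b * 1e9 * bytes_per_param / 1024**3

TOTAL_PARAMS_B = 24.0   # total parameters (billions)
ACTIVE_PARAMS_B = 2.0   # parameters activated per token (billions)

for label, bytes_pp in [("fp16/bf16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    gb = model_memory_gb(TOTAL_PARAMS_B, bytes_pp)
    fits = "fits" if gb <= 32 else "does not fit"
    print(f"{label:9s}: ~{gb:5.1f} GiB of weights ({fits} in 32 GiB RAM)")

# Compute per token scales with the active parameters only:
print(f"active fraction per token: {ACTIVE_PARAMS_B / TOTAL_PARAMS_B:.1%}")
```

The sketch suggests that fitting all 24B parameters in 32GB implies some form of reduced-precision storage, while per-token compute tracks the much smaller 2B active set, which is the usual appeal of sparse MoE designs.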

Tags
products