Liquid AI releases LFM2.5-350M -> Agentic loops at 350M parameters
Reddit r/LocalLLaMA / 4/1/2026
📰 News · Signals & Early Trends · Tools & Practical Usage · Models & Research

LFM2.5-350M by Liquid AI was trained for reliable data extraction and tool use. At under 500MB when quantized, it is built for environments where compute, memory, and latency are particularly constrained. Trained on 28T tokens with scaled RL, it outperforms larger models such as Qwen3.5-0.8B on most benchmarks while being significantly faster and more memory-efficient.

Read more: http://www.liquid.ai/blog/lfm2-5-350m-no-size-left-behind
Key Points
- Liquid AI released the LFM2.5-350M model, positioned for reliable data extraction and tool use with “agentic loops” behavior at ~350M parameters.
- The model is designed to be usable in constrained environments, claiming a footprint under 500MB when quantized and better speed and memory efficiency than larger baselines.
- Liquid AI reports training on 28T tokens with scaled RL, and claims it outperforms models like Qwen3.5-0.8B on most benchmarks.
- The release emphasizes cross-platform deployment (CPUs, GPUs, and mobile hardware) and reliability for function calling, agent workflows, and structured outputs.
- The article points readers to a Liquid AI blog post and an available Hugging Face checkpoint for downloading and experimenting with the model (see the sketch after this list).
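For readers who want to experiment, below is a minimal sketch of loading the checkpoint and prompting it for structured extraction with the Hugging Face transformers library. The repo id LiquidAI/LFM2.5-350M is an assumption (the post does not give the exact path), as is compatibility with the standard causal-LM API; check the Liquid AI blog post for actual usage instructions.

```python
# Minimal sketch, not official usage: assumes the checkpoint is published
# under a repo id like "LiquidAI/LFM2.5-350M" (not confirmed in the post)
# and works with the standard transformers causal-LM API.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LiquidAI/LFM2.5-350M"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# A data-extraction style prompt: ask the model for structured JSON output,
# the kind of task the release notes emphasize.
prompt = (
    "Extract the product and price from this sentence as JSON with keys "
    '"product" and "price".\n'
    "Sentence: The new headphones cost $79.\n"
    "JSON:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=False,  # greedy decoding: determinism suits extraction tasks
)

# Decode only the newly generated tokens, not the prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

As a sanity check on the size claim, 350M parameters at 8-bit quantization work out to roughly 350MB of weights, consistent with the under-500MB figure; 4-bit variants would be smaller still.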
Related Articles
- Black Hat USA (AI Business)
- Black Hat Asia (AI Business)
- Anthropic's Accidental Release of Claude Code's Source Code: Irretrievable and Publicly Accessible (Dev.to)
- Salesforce announces an AI-heavy makeover for Slack, with 30 new features (TechCrunch)
- Oracle’s Impersonal Mass Layoffs: Thousands Impacted in AI-Driven Cost Cuts (Dev.to)