Liquid AI Released LFM2.5-350M: A Compact 350M Parameter Model Trained on 28T Tokens with Scaled Reinforcement Learning

MarkTechPost / 4/1/2026


Key Points

  • Liquid AI has released LFM2.5-350M, a compact language model with 350M parameters aimed at demonstrating higher “intelligence density” rather than relying solely on larger parameter counts.
  • The model underwent additional pre-training, increasing training data from 10T to 28T tokens, to boost capability despite the smaller parameter budget.
  • Liquid AI also reports using large-scale reinforcement learning, including a “scaled reinforcement learning” approach, as part of the model’s overall training recipe.
  • The release is positioned as a technical case study that challenges conventional generative AI scaling assumptions linking parameter size directly to intelligence.

In the current landscape of generative AI, scaling laws have generally dictated that more parameters equal more intelligence. Liquid AI is challenging this convention with the release of LFM2.5-350M. The model serves as a technical case study in intelligence density, combining additional pre-training (expanding the corpus from 10T to 28T tokens) with large-scale reinforcement learning. The […]
