MiniCPM-o 4.5: Towards Real-Time Full-Duplex Omni-Modal Interaction

arXiv cs.CL / 5/1/2026


Key Points

  • MiniCPM-o 4.5 is presented as a new multimodal LLM aiming for more human-like interaction by enabling real-time full-duplex, omni-modal communication rather than alternating turn-based phases.
  • The paper identifies two main bottlenecks in current multimodal systems, a lack of timely input integration during generation and largely reactive behavior, and positions the new model as addressing both.
  • The core technical contribution is Omni-Flow, a streaming framework that aligns multimodal inputs and outputs on a shared temporal axis to support simultaneous perception and response.
  • The 9B-parameter model is reported to be competitive with larger systems in vision-language performance (approaching Gemini 2.5 Flash), to surpass Qwen3-Omni-30B-A3B in omni-modal understanding, and to improve speech generation with significantly higher computational efficiency.
  • The model is claimed to perform real-time full-duplex omni-modal interaction on edge devices using less than 12 GB of RAM, enabled by efficient architecture design and inference optimization (a back-of-the-envelope memory estimate follows this list).
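
For intuition on the memory claim, here is a rough, illustrative estimate of what 9B parameters weigh at common quantization levels. This is our own arithmetic, not from the paper, and it covers weights only; activations, the KV cache, and audio/vision codecs consume additional memory on top.

```python
# Back-of-the-envelope weight memory for a 9B-parameter model
# (illustrative arithmetic, not from the MiniCPM-o 4.5 paper).
PARAMS = 9e9
for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gb = PARAMS * bytes_per_param / 2**30
    print(f"{name}: ~{gb:.1f} GB of weights")
# fp16: ~16.8 GB  -> exceeds a 12 GB budget on its own
# int8: ~8.4 GB   -> leaves ~3.6 GB for activations and KV cache
# int4: ~4.2 GB
```

Under these assumptions, the sub-12 GB figure is plausible only with quantized weights, which is consistent with the paper's emphasis on efficient architecture design and inference optimization.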

Abstract

Recent progress in multimodal large language models (MLLMs) has brought AI capabilities from static offline data processing to real-time streaming interaction, yet current models remain far from human-level multimodal interaction. The key bottlenecks are no longer modality coverage or latency alone, but the interaction paradigm itself. First, perception and response are still separated into alternating phases, preventing models from incorporating new inputs for timely adjustment during generation. Second, most current models remain reactive, responding only to explicit user requests instead of acting proactively in an evolving multimodal environment. We present MiniCPM-o 4.5, our latest effort towards human-like multimodal interaction, which mitigates these gaps through real-time full-duplex omni-modal interaction. It can see, listen, and speak simultaneously in real time, while also exhibiting proactive behaviors such as issuing reminders or comments based on its continuous understanding of the live scene. The key technique behind MiniCPM-o 4.5 is Omni-Flow, a unified streaming framework that aligns omni-modal inputs and outputs along a shared temporal axis. This formulation converts conventional turn-based interaction into a full-duplex, time-aligned process, enabling simultaneous perception and response and allowing proactive behavior to arise within the same framework. With a total of 9B parameters, MiniCPM-o 4.5 approaches Gemini 2.5 Flash in vision-language capabilities, delivering state-of-the-art open-source performance at its scale. It also surpasses Qwen3-Omni-30B-A3B in omni-modal understanding and delivers better speech generation, with significantly higher computational efficiency. Driven by its efficient architecture design and inference optimization, the model can perform real-time full-duplex omni-modal interaction on edge devices using less than 12 GB of RAM.
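
To make the shared-temporal-axis idea concrete, the sketch below shows one plausible shape of a full-duplex loop in which perception and generation run on a single clock instead of alternating turns. It is a conceptual illustration only: the types and device interfaces (OmniModel, TimedChunk, mic, camera, speaker) are hypothetical stand-ins, not MiniCPM-o 4.5's actual API.

```python
# Conceptual sketch of full-duplex, time-aligned omni-modal streaming.
# All names here are hypothetical illustrations, not the released model code.
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class TimedChunk:
    t: float          # timestamp on the shared temporal axis
    audio: bytes      # latest audio frame from the microphone
    frame: bytes      # latest video frame from the camera

class OmniModel:
    """Stand-in for a streaming omni-modal LM with incremental state."""
    def ingest(self, chunk: TimedChunk) -> None:
        ...  # fold the new audio/video evidence into the model state

    def step(self, t: float) -> Optional[str]:
        ...  # emit the next output token, or None to stay silent and listen

def full_duplex_loop(model: OmniModel, mic, camera, speaker, tick: float = 0.08):
    """Perception and generation share one clock instead of alternating turns."""
    t0 = time.monotonic()
    while True:
        t = time.monotonic() - t0
        # Ingest fresh input on every tick, even mid-utterance, so new
        # evidence can redirect a response that is already being generated.
        model.ingest(TimedChunk(t=t, audio=mic.read(), frame=camera.read()))
        token = model.step(t)          # speech/text token, or None
        if token is not None:
            speaker.write(token)       # respond while still perceiving
        time.sleep(tick)               # fixed cadence keeps streams aligned
```

In this framing, proactive behavior needs no separate mechanism: because step() is called on every tick rather than only after a user request, the model can choose to start speaking (a reminder, a comment) whenever its continuously updated state warrants it.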