feat: Add Mimo v2.5 model support by AesSedai · Pull Request #22493 · ggml-org/llama.cpp
Reddit r/LocalLLaMA / 5/7/2026
Model summary: https://huggingface.co/XiaomiMiMo/MiMo-V2.5
📰 News · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research
Key Points
- AesSedai has contributed a pull request to ggml-org/llama.cpp adding support for the XiaomiMiMo MiMo v2.5 model.
- MiMo v2.5 is a sparse Mixture-of-Experts (MoE) architecture with 310B total parameters and 15B activated per token (see the sketch after this list).
- The model supports context lengths of up to 1 million tokens and is multimodal, handling text, image, video, and audio.
- The architecture includes a 729M-parameter ViT vision encoder, a 261M-parameter audio transformer encoder, and a Multi-Token Prediction (MTP) component with 329M parameters.
- This update broadens llama.cpp's local multimodal inference capabilities by enabling deployment of the MiMo v2.5 family.
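
For a back-of-the-envelope view of the sparsity figures above, here is a minimal sketch using only the parameter counts quoted in the key points. The per-token compute remark rests on the usual MoE approximation that FLOPs scale roughly with activated parameters; nothing here is taken from the PR itself.

```cpp
#include <cstdio>

// Parameter counts as quoted in the key points (billions / millions).
int main() {
    const double total_params_b  = 310.0;  // total MoE parameters
    const double active_params_b = 15.0;   // parameters activated per token
    const double vit_encoder_m   = 729.0;  // ViT vision encoder
    const double audio_encoder_m = 261.0;  // audio transformer encoder
    const double mtp_head_m      = 329.0;  // Multi-Token Prediction component

    // The defining MoE trade-off: all 310B parameters must be resident
    // (RAM/VRAM/mmap), but per-token compute scales with only the ~15B
    // that the router activates.
    const double active_fraction = active_params_b / total_params_b;
    printf("activated fraction per token: %.1f%% (%.0fB of %.0fB)\n",
           100.0 * active_fraction, active_params_b, total_params_b);

    // The multimodal encoders and MTP head are tiny next to the MoE trunk.
    const double extras_b = (vit_encoder_m + audio_encoder_m + mtp_head_m) / 1000.0;
    printf("encoders + MTP head: %.2fB (%.2f%% of total)\n",
           extras_b, 100.0 * extras_b / total_params_b);
    return 0;
}
```

Nothing above depends on PR-specific details: once merged, the model would presumably be run through the standard llama.cpp tooling (llama-cli or llama-server) against a converted GGUF.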