Mistral-Small-4-119B-2603-GGUF is here!
Reddit r/LocalLLaMA / 3/17/2026
Submitted by /u/KvAk_AKPlaysYT
📰 News · Models & Research
Key Points
- A new model named Mistral-Small-4-119B-2603-GGUF has been released, adding a 119B-parameter variant to the Mistral-Small family.
- The model is hosted on HuggingFace under the user AaryanK at Mistral-Small-4-119B-2603-GGUF, per the post's link.
- The release is being discussed in the r/LocalLLaMA subreddit, where the post links to the comment thread.
- The GGUF suffix indicates the weights are packaged in the GGUF format used by llama.cpp-compatible runtimes, so the release targets local-inference workflows.
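Since the release targets local GGUF inference, a quick sanity check after downloading is to verify the file's header before pointing a runtime at it. Below is a minimal sketch that reads the GGUF magic bytes, format version, and tensor count; the local filename in the usage comment is hypothetical, not taken from the post.

```python
import struct

GGUF_MAGIC = b"GGUF"  # every GGUF file begins with these four bytes


def inspect_gguf_header(path):
    """Return (version, tensor_count) if `path` looks like a GGUF file, else None."""
    with open(path, "rb") as f:
        # Header layout: 4-byte magic, uint32 version, uint64 tensor count (little-endian).
        header = f.read(4 + 4 + 8)
    if len(header) < 16 or header[:4] != GGUF_MAGIC:
        return None
    (version,) = struct.unpack("<I", header[4:8])
    (tensor_count,) = struct.unpack("<Q", header[8:16])
    return version, tensor_count


# Usage (hypothetical local filename):
# info = inspect_gguf_header("Mistral-Small-4-119B-2603-Q4_K_M.gguf")
# if info is None:
#     print("not a GGUF file")
```

This only checks the fixed-size prefix of the header; a full parse would also walk the metadata key-value section, but the magic and version are usually enough to catch a truncated or mislabeled download.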
Related Articles

PearlOS. We gave swarm intelligence a local desktop environment and code control to self-evolve. Has been pretty incredible to see so far. Open source and free if you want your own.
Reddit r/LocalLLaMA
QwenDean-4B | fine-tuned SLM for UIGen; our first attempt, looking for feedback!
Reddit r/LocalLLaMA
acestep.cpp: portable C++17 implementation of ACE-Step 1.5 music generation using GGML. Runs on CPU, CUDA, ROCm, Metal, Vulkan
Reddit r/LocalLLaMA
Introducing SPEED-Bench: A Unified and Diverse Benchmark for Speculative Decoding
Hugging Face Blog
Newest GPU server in the lab! 72 GB Ampere VRAM!
Reddit r/LocalLLaMA