MiniMax-M2.7 GGUF Quants — Full Set (Q2_K to Q8_0 + BF16)

Reddit r/LocalLLaMA / 4/12/2026


Key Points

  • MiniMax-M2.7 has been quantized into the GGUF format, and the post claims a complete set of standard quantization variants is now available.

Just finished quantizing MiniMax-M2.7 to GGUF. All standard quant levels available:

- BF16 (~427 GB)
- Q8_0 (~243 GB)
- Q6_K (~188 GB)
- Q5_K_M (~162 GB)
- Q4_K_M (~138 GB)
- Q3_K_M (~109 GB)
- Q2_K (~83 GB)

https://huggingface.co/dennny123/MiniMax-M2.7-GGUF
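The size table above can be turned into a quick fit check: given a memory budget, pick the largest quant whose file fits with some headroom left for the KV cache and runtime overhead. A minimal sketch — the sizes come from the post, but the 10% headroom factor is an illustrative assumption, not a measured value:

```python
# Sizes (GB) as listed in the post for MiniMax-M2.7 GGUF quants.
QUANT_SIZES_GB = {
    "BF16": 427, "Q8_0": 243, "Q6_K": 188, "Q5_K_M": 162,
    "Q4_K_M": 138, "Q3_K_M": 109, "Q2_K": 83,
}

def largest_fitting_quant(budget_gb: float, headroom: float = 0.10):
    """Return the largest quant whose file size, plus headroom for
    KV cache and overhead, fits in budget_gb; None if none fit."""
    usable = budget_gb / (1.0 + headroom)  # reserve headroom fraction
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size <= usable]
    return max(fitting)[1] if fitting else None

print(largest_fitting_quant(192))  # → Q5_K_M (162 GB fits in ~174 GB usable)
print(largest_fitting_quant(64))   # → None (even Q2_K at 83 GB won't fit)
```

Note this only accounts for model weights; actual memory use also scales with context length, so treat the headroom fraction as a knob, not a guarantee.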

submitted by /u/Asleep_Training3543