Just finished quantizing MiniMax-M2.7 to GGUF. All standard quant levels available:
- BF16 (~427 GB)
- Q8_0 (~243 GB)
- Q6_K (~188 GB)
- Q5_K_M (~162 GB)
- Q4_K_M (~138 GB)
- Q3_K_M (~109 GB)
- Q2_K (~83 GB)
Reddit r/LocalLLaMA / 4/12/2026
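The file sizes above imply the effective bits per weight of each quant. A quick sketch of that arithmetic, using BF16 (16 bits per weight) as the baseline and the approximate sizes from the post (the exact figures will vary slightly with metadata and tensor-type mixing):

```python
# Effective bits per weight implied by the listed GGUF sizes.
# Sizes are the approximate figures from the post, in GB;
# BF16 stores 16 bits per weight, so bits scale linearly with file size.
sizes_gb = {
    "BF16": 427,
    "Q8_0": 243,
    "Q6_K": 188,
    "Q5_K_M": 162,
    "Q4_K_M": 138,
    "Q3_K_M": 109,
    "Q2_K": 83,
}

def bits_per_weight(size_gb: float, bf16_gb: float = 427.0) -> float:
    """Scale 16 bits/weight by the size ratio against the BF16 file."""
    return 16 * size_gb / bf16_gb

for name, gb in sizes_gb.items():
    print(f"{name:7s} ~{bits_per_weight(gb):.1f} bits/weight")
```

For example, Q4_K_M works out to roughly 5.2 bits per weight rather than a flat 4, since the K-quant "_M" mixes keep some tensors at higher precision.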