Unsloth MiniMax M2.7 quants just finished uploading to HF

Reddit r/LocalLLaMA / 4/12/2026


Key Points

  • Unsloth has uploaded multiple quantized variants of the MiniMax M2.7 model to Hugging Face under the “unsloth/MiniMax-M2.7-GGUF” repository.
  • The release spans quantization levels from 1-bit through 16-bit (BF16), including several intermediate settings such as Q4/Q5/Q6 and Q8.
  • Each quantized checkpoint is provided with a corresponding GGUF size, giving users a clear sense of disk/storage requirements for different quality/performance tradeoffs.
  • The post credits u/danielhanchen and presents a complete current inventory of the available quantization labels and their sizes.
  • Users looking to run MiniMax M2.7 locally can choose among these files to balance model quality against hardware constraints.

They range from 1-bit (IQ1_M) all the way up to full-precision BF16.

Grab them while they're still hot over at

https://huggingface.co/unsloth/MiniMax-M2.7-GGUF
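At these sizes you probably only want one quant, not the whole repo. A minimal sketch of fetching a single variant with `huggingface_hub` — the glob pattern assumes the usual Unsloth convention of putting the quantization label in the file/folder names, so check the repo's file listing before relying on it:

```python
def include_pattern(label: str) -> str:
    """Glob matching only the files for one quantization label (assumed naming)."""
    return f"*{label}*"

def download_quant(label: str, dest: str = "MiniMax-M2.7-GGUF") -> None:
    # Deferred import so the pattern helper works without huggingface_hub installed.
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="unsloth/MiniMax-M2.7-GGUF",
        allow_patterns=[include_pattern(label)],  # skip every other quant
        local_dir=dest,
    )

# e.g. download_quant("UD-Q4_K_XL") to grab just the ~141 GB 4-bit variant
```

`allow_patterns` makes `snapshot_download` skip everything that doesn't match, which is the difference between a ~141 GB download and a ~2.5 TB one.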

Thanks to u/danielhanchen!

Here's the current list:

| Bits | Quantization Label | Size |
|------|--------------------|------|
| 1-bit | UD-IQ1_M | 60.7 GB |
| 2-bit | UD-IQ2_XXS | 65.4 GB |
| | UD-IQ2_M | 70.1 GB |
| | UD-Q2_K_XL | 75.3 GB |
| 3-bit | UD-IQ3_XXS | 80.1 GB |
| | UD-IQ3_S | 83.6 GB |
| | UD-Q3_K_S | 93.6 GB |
| | UD-Q3_K_M | 101 GB |
| | UD-Q3_K_XL | 102 GB |
| 4-bit | UD-IQ4_XS | 108 GB |
| | UD-IQ4_NL | 111 GB |
| | UD-Q4_K_S | 131 GB |
| | MXFP4_MOE | 136 GB |
| | UD-Q4_K_M | 140 GB |
| | UD-Q4_K_XL | 141 GB |
| 5-bit | UD-Q5_K_S | 159 GB |
| | UD-Q5_K_M | 169 GB |
| | UD-Q5_K_XL | 169 GB |
| 6-bit | UD-Q6_K | 188 GB |
| | UD-Q6_K_XL | 207 GB |
| 8-bit | Q8_0 | 243 GB |
| | UD-Q8_K_XL | 247 GB |
| 16-bit | BF16 | 457 GB |
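If you're unsure which file fits your box, the table above is enough to script the choice. A rough sketch, using the sizes from the post — the "file size plus some headroom must fit in RAM+VRAM" rule of thumb is an assumption (context length and KV cache change the real footprint), not a guarantee:

```python
# Sizes (GB) copied from the table in the post.
QUANTS = [
    ("UD-IQ1_M", 60.7), ("UD-IQ2_XXS", 65.4), ("UD-IQ2_M", 70.1),
    ("UD-Q2_K_XL", 75.3), ("UD-IQ3_XXS", 80.1), ("UD-IQ3_S", 83.6),
    ("UD-Q3_K_S", 93.6), ("UD-Q3_K_M", 101), ("UD-Q3_K_XL", 102),
    ("UD-IQ4_XS", 108), ("UD-IQ4_NL", 111), ("UD-Q4_K_S", 131),
    ("MXFP4_MOE", 136), ("UD-Q4_K_M", 140), ("UD-Q4_K_XL", 141),
    ("UD-Q5_K_S", 159), ("UD-Q5_K_M", 169), ("UD-Q5_K_XL", 169),
    ("UD-Q6_K", 188), ("UD-Q6_K_XL", 207), ("Q8_0", 243),
    ("UD-Q8_K_XL", 247), ("BF16", 457),
]

def pick_quant(budget_gb: float, headroom_gb: float = 8.0):
    """Largest quant whose file size leaves `headroom_gb` free under the budget."""
    fitting = [(name, size) for name, size in QUANTS
               if size + headroom_gb <= budget_gb]
    return max(fitting, key=lambda q: q[1]) if fitting else None

# e.g. a 128 GB RAM+VRAM workstation:
print(pick_quant(128.0))  # -> ('UD-IQ4_NL', 111)
```

Bigger quants generally mean less quality loss, so "largest that fits" is a sensible default starting point before benchmarking.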
submitted by /u/Zyj