MiniMax M2.7 GGUF Investigation, Fixes, Benchmarks

Reddit r/LocalLLaMA / 4/15/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • An investigation found MiniMax-M2.7 GGUF files can produce NaNs during perplexity evaluation, affecting an estimated 21%–38% of GGUF uploads on Hugging Face (a reproduction sketch follows this list).
  • The issue was traced to overflow behavior in llama.cpp, with NaNs showing up at specific evaluation chunks (notably chunk 32, and sometimes chunk 311).
  • The root trigger was identified as the tensor `blk.61.ffn_down_exps`, whose Q4_K- and Q5_K-family quantization variants produce NaNs starting at chunk 32 during PPL evals.
  • The authors updated the M2.7 GGUF quant sets on Hugging Face (unsloth/MiniMax-M2.7-GGUF) to alleviate the NaN problem, though they still cannot confirm the exact underlying cause of the perplexity NaNs.
  • Benchmarks using tail metrics such as 99.9th-percentile KLD indicate that quality remains fine for the affected quant types, even though perplexity evaluation can fail.
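
For anyone who wants to check a quant themselves, here is a minimal sketch of this kind of NaN check. It assumes a `llama-perplexity` build from llama.cpp on PATH; both file paths are placeholders, not the authors' actual test setup:

```python
# Minimal sketch: stream llama.cpp's perplexity eval and stop at the first
# NaN chunk. Assumes a llama-perplexity binary on PATH; the model and
# dataset paths below are placeholders.
import subprocess

proc = subprocess.Popen(
    ["llama-perplexity", "-m", "MiniMax-M2.7-Q4_K_XL.gguf", "-f", "wiki.test.raw"],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
)
for line in proc.stdout:
    print(line, end="")
    if "nan" in line.lower():  # per-chunk PPL output; NaNs reportedly start at chunk 32
        print(">>> NaN detected, stopping eval early")
        proc.terminate()
        break
```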

Hey r/LocalLLaMA, we did an investigation into MiniMax-M2.7 GGUFs causing NaNs during perplexity evaluation. Our findings show the issue affects 21%-38% of the model's GGUF uploads on Hugging Face (not just ours).

  • One popular community uploader has NaNs in 38% (10/26) of their quants, another deleted theirs (1/4), and 22% (5/23) of ours had NaNs - we have fixed ours.
  • When running 99.9% KLD and other metrics, all quants come out fine.
  • We found overflowing in llama.cpp to be the culprit.
  • We ran PPL and 99.9% KLD benchmarks as well - lower left is better in the plot below; a toy sketch of the KLD metric follows it.

https://preview.redd.it/46i7z9e1m7vg1.png?width=1600&format=png&auto=webp&s=bbfe77263d210211c1fc0d7a6a973d7027ce18af
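
As a rough illustration of what a "99.9% KLD" number measures, here is a toy numpy sketch: per-token KL divergence between a reference model's token distribution and a quant's, summarized at the 99.9th percentile. All data here is synthetic; real evals use llama.cpp's KLD tooling on actual logits:

```python
# Toy sketch of the 99.9% KLD metric: per-token KL(reference || quant)
# over the vocab, then the 99.9th percentile across tokens. Shapes and
# noise are made up; real runs compare fp16-model logits vs quant logits.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
ref_logits = rng.normal(size=(2000, 512))  # [tokens, vocab], toy sizes
quant_logits = ref_logits + rng.normal(scale=0.05, size=ref_logits.shape)

p, q = softmax(ref_logits), softmax(quant_logits)
kld = np.sum(p * (np.log(p) - np.log(q)), axis=-1)  # per-token KL divergence
print(f"mean KLD: {kld.mean():.6f}  99.9% KLD: {np.quantile(kld, 0.999):.6f}")
```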

  • Perplexity NaNs appear at chunk 32 - this was also found by the community and other quant uploaders. We also found chunk 311 to cause issues.
  • We found the tensor `blk.61.ffn_down_exps` to be the culprit - Q5_K and Q4_K variants of it produce NaNs starting at chunk 32 during PPL evals. Interestingly, IQ4_XS, IQ3_XXS and smaller I-quant types do not NaN.
  • This was quite confusing, since lower-bit quants (e.g. Q2_K_XL) did NOT NaN, but medium-sized quants (e.g. Q4_K_XL) did!
  • We’ve now updated the M2.7 quants at https://huggingface.co/unsloth/MiniMax-M2.7-GGUF to alleviate the issue, though we still do not know the exact cause of the NaN perplexities - it could be a fluke, but most likely large multiplies are causing overflows (see the sketch below).
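
Here is a sketch of how one could inspect this locally, assuming the `gguf` Python package that ships with llama.cpp (`pip install gguf`). The filename is a placeholder, and the small float16 demo at the end just illustrates the suspected overflow-to-NaN mechanism, not the confirmed cause:

```python
# Sketch: report which quant type a GGUF uses for the suspect tensor.
# Assumes the `gguf` package from the llama.cpp repo; filename is a placeholder.
import numpy as np
from gguf import GGUFReader

reader = GGUFReader("MiniMax-M2.7-Q4_K_XL.gguf")
for t in reader.tensors:
    if t.name.startswith("blk.61.ffn_down_exps"):
        # tensor_type is a GGMLQuantizationType enum, e.g. Q4_K or Q5_K
        print(t.name, t.tensor_type.name, list(t.shape))

# Why an overflow can surface as NaN: float16 saturates past 65504, and
# arithmetic on the resulting inf (e.g. inf - inf in a reduction) is NaN.
a = np.float16(300.0) * np.float16(300.0)  # 90000 overflows float16 -> inf
print(a, a - a)                            # inf nan
```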

Which quants did we test?

Also, CUDA 13.2 is still definitely an issue: it causes some low-bit quants on all models to produce gibberish. Some people have dismissed it as not being an issue, but from what we’ve seen, more than 50 people have now confirmed that using CUDA 13.1 or lower fixes it. You can also see some of the public comments in our Hugging Face discussions, Reddit posts, etc. NVIDIA has acknowledged that they are investigating the issue - see Unsloth Issue 4849, llama.cpp issues 21255 and 21371.
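
If you want a quick local check, here is a minimal sketch that parses `nvcc --version` and flags the affected toolkit versions. It assumes nvcc is on PATH, and the version thresholds come from the reports above:

```python
# Sketch: flag CUDA toolkits at or above 13.2, which reportedly cause
# gibberish with some low-bit quants; 13.1 and lower is reported fine.
import re
import subprocess

out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
m = re.search(r"release (\d+)\.(\d+)", out)
if m:
    major, minor = map(int, m.groups())
    if (major, minor) >= (13, 2):
        print(f"CUDA {major}.{minor}: affected - consider downgrading to 13.1 or lower")
    else:
        print(f"CUDA {major}.{minor}: not in the reported-bad range")
else:
    print("Could not parse nvcc output")
```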

If you have any questions please do ask and thank you again for all the support as always. Appreciate it and hope you have a lovely week.

submitted by /u/danielhanchen