You guys seen this? 1-bit model with an MMLU-R of 65.7, 8B params

Reddit r/LocalLLaMA / 4/1/2026

💬 Opinion · Signals & Early Trends · Models & Research

Key Points

  • A 1-bit quantized model called prism-ml/Bonsai-8B-gguf is being shared via Hugging Face, with claims of an MMLU-R score of 65.7 and 8B parameters.
  • The post is asking others to test the model, suggesting that community validation and real-world evaluation are not yet established.
  • The result is framed as surprising, since quantization this aggressive usually degrades benchmark scores sharply; an MMLU-R of 65.7 at 1-bit would suggest extreme quantization can go further than expected without collapsing performance.
  • The main takeaway so far is an early signal about the feasibility of very low-memory model deployment, pending independent benchmarks.
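To see why a 1-bit 8B model is notable for deployment, a back-of-envelope memory estimate helps. The sketch below compares approximate weight storage at a few precisions; the figures are illustrative only and ignore GGUF metadata, tensors kept at higher precision, and the KV cache (the 1.58-bit row assumes ternary weights in the style of BitNet b1.58, which is not confirmed for this model).

```python
# Back-of-envelope weight-storage estimate for an 8B-parameter model
# at different precisions. Real GGUF files will differ: some tensors
# (e.g. embeddings) are usually stored at higher precision.

def weight_bytes_gb(n_params: int, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes (decimal GB)."""
    return n_params * bits_per_weight / 8 / 1e9

N = 8_000_000_000  # 8B parameters, as claimed in the post
for label, bits in [("fp16", 16), ("4-bit", 4), ("1.58-bit ternary", 1.58), ("1-bit", 1)]:
    print(f"{label:>17}: ~{weight_bytes_gb(N, bits):.2f} GB")
```

At fp16 the weights alone are ~16 GB, while a true 1-bit packing would be ~1 GB, which is the kind of gap that makes the claimed score worth independent verification.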

This is nuts.

prism-ml/Bonsai-8B-gguf · Hugging Face

has anyone tested this thing?

submitted by /u/OmarBessa