Bartowski vs Unsloth for Gemma 4

Reddit r/LocalLLaMA / 4/6/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage

Key Points

  • The post discusses which quantization approach performs better for Gemma 4 variants (26B A4B and 31B) and notes that the community has not yet consolidated benchmark data for these sizes.
  • The author reports strong results with a 26B A4B Q4_K_M quant from Bartowski, compared with the “full version” available via OpenRouter and AI Studio.
  • The goal is to gather additional user testing and insights on the relative quality of Bartowski versus Unsloth quant packs for the same model family.
  • The discussion is framed as an anecdotal comparison rather than a standardized benchmark, inviting others to share reproducible results.

Hello everyone,

I have noticed there is no data yet on which quants are better for the 26B A4B and 31B variants. Personally, after testing the 26B A4B Q4_K_M from Bartowski against the full version on OpenRouter and AI Studio, I have found this quant to perform exceptionally well. But I'm curious about your insights.

submitted by /u/dampflokfreund