Gemma 4 - 31b abliterated quants

Reddit r/LocalLLaMA / 4/3/2026


Key Points

  • Results of applying an "abliteration" technique (with a script for modification/use) to the 31B-scale Gemma 4 are shared, covering FP16 as well as the Q8_0 and Q4_K_M quantized models.

Got inspired to try and crack this egg without using heretic.

FP16, Q8_0 and Q4_K_M quants, plus the abliteration script for modification/use, are here:
https://huggingface.co/paperscarecrow/Gemma-4-31B-it-abliterated-gguf

Based on mlabonne's Orthogonalized Representation Intervention method, because I loved his abliterations of Gemma 3 so much.
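For anyone curious what that kind of intervention looks like, here is a minimal NumPy sketch of the core idea behind directional ablation: estimate a "refusal" direction as the difference of mean activations between two prompt sets, then orthogonalize a weight matrix against it so its outputs carry no component along that direction. This is a toy illustration with random data, not the author's actual script, and the function names are my own.

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    # Difference-of-means direction between the two activation sets, unit-normalized.
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def orthogonalize(W, d):
    # Project the direction d out of the output side of W:
    # W' = W - d (d^T W), so d^T (W' x) = 0 for every input x.
    return W - np.outer(d, d) @ W

rng = np.random.default_rng(0)
d_model = 8
harmful = rng.normal(size=(16, d_model)) + 2.0   # toy "refusal-triggering" activations
harmless = rng.normal(size=(16, d_model))        # toy baseline activations

d = refusal_direction(harmful, harmless)
W = rng.normal(size=(d_model, d_model))          # stand-in for a model weight matrix
W_abl = orthogonalize(W, d)

x = rng.normal(size=d_model)
print(abs(d @ (W_abl @ x)))  # ~0: output has no component along the ablated direction
```

In a real abliteration run you would collect the activations from actual harmful/harmless prompt pairs at a chosen layer, then apply the orthogonalization to the relevant projection matrices before re-quantizing.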

Edit:
Overestimated my internet speeds, still uploading the models.

submitted by /u/Polymorphic-X