Abliterated (decensored) MiniMax model. AWQ: https://huggingface.co/vpyn/MiniMax-M2.5-CARVE-v1-AWQ-W4A16 · MLX: https://huggingface.co/mlx-community/MiniMax-M2.5-Uncensored-4bit
MiniMax-M2.5-CARVE-v1-BF16
Reddit r/LocalLLaMA / 3/13/2026
📰 News · Models & Research
Key Points
- The post introduces MiniMax-M2.5-CARVE-v1-BF16, an abliterated (decensored) variant of the MiniMax model.
- It links to two decensored quantized variants (AWQ-W4A16 and MLX-Uncensored-4bit) on Hugging Face, indicating multiple community forks.
- The submission is by user /u/vpyn on r/LocalLLaMA and points to a Hugging Face page for the CARVE-v1-BF16 release.
- This reflects ongoing community experimentation with decensoring and alternate quantization for MiniMax models, making such variants easier to deploy locally.
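For readers unfamiliar with the "W4A16" label on the AWQ variant above: it means weights are stored as 4-bit integers (with a per-group scale factor) while activations stay in 16-bit floats. The toy sketch below illustrates that idea only; it is not AWQ itself (AWQ additionally chooses scales to protect activation-salient weights), and all function names here are illustrative.

```python
# Toy illustration of 4-bit weight quantization (the "W4" in W4A16).
# Not AWQ's actual algorithm -- just the basic scale-and-round scheme
# that 4-bit formats build on.

def quantize_w4(weights, group_size=4):
    """Symmetric 4-bit quantization: map each group of floats
    to integers in [-8, 7] plus one float scale per group."""
    packed = []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        # Scale so the largest magnitude in the group maps to ~7.
        scale = max(abs(w) for w in group) / 7 or 1.0
        q = [max(-8, min(7, round(w / scale))) for w in group]
        packed.append((scale, q))
    return packed

def dequantize_w4(packed):
    """Recover approximate float weights from (scale, int4-list) groups."""
    return [scale * q for scale, qs in packed for q in qs]

weights = [0.12, -0.5, 0.33, 0.07, 1.4, -0.9, 0.0, 0.25]
packed = quantize_w4(weights)
approx = dequantize_w4(packed)
```

The round-trip error per weight is bounded by half the group scale, which is why 4-bit formats use small groups: a single outlier only degrades precision within its own group.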
Related Articles

Jeff Bezos reportedly wants $100 billion to buy and transform old manufacturing firms with AI
TechCrunch
[R] Weekly digest: arXiv AI security papers translated for practitioners -- Cascade (cross-stack CVE+Rowhammer attacks on compound AI), LAMLAD (dual-LLM adversarial ML, 97% evasion), OpenClaw (4 vuln classes in agent frameworks)
Reddit r/MachineLearning
My Experience with Qwen 3.5 35B
Reddit r/LocalLLaMA

Cursor’s new coding model Composer 2 is here: It beats Claude Opus 4.6 but still trails GPT-5.4
VentureBeat
Qwen 3.5 122B completely falls apart at ~ 100K context
Reddit r/LocalLLaMA