Abliterated (decensored) MiniMax model.
AWQ: https://huggingface.co/vpyn/MiniMax-M2.5-CARVE-v1-AWQ-W4A16
MLX: https://huggingface.co/mlx-community/MiniMax-M2.5-Uncensored-4bit
MiniMax-M2.5-CARVE-v1-BF16
Reddit r/LocalLLaMA / 3/13/2026
📰 News · Models & Research
Key Points
- The post introduces MiniMax-M2.5-CARVE-v1-BF16, an abliterated (decensored) variant of the MiniMax model.
- It provides links to decensored variants (AWQ-W4A16 and MLX-Uncensored-4bit) on HuggingFace, indicating multiple community forks.
- The submission is by user /u/vpyno on r/LocalLLaMA and points to a HuggingFace page for the CARVE-v1-BF16 release.
- The release reflects ongoing community experimentation with decensoring and alternate quantizations of MiniMax models, making such variants easier to deploy locally.
Related Articles

I made a 'benchmark' where LLMs write code controlling units in a 1v1 RTS game.
Dev.to

My AI Does Not Have a Clock
Dev.to
How to settle on a coding LLM? What parameters to watch out for?
Reddit r/LocalLLaMA

Andrej Karpathy's autonomous AI research agent ran 700 experiments in 2 days and gave a glimpse of where AI is heading
Reddit r/artificial

So cursor admits that Kimi K2.5 is the best open source model
Reddit r/LocalLLaMA