> "Had to redo the model, I wanted this to be abso-fucking-lutely perfect. Only 43GB, and with reasoning on does an insane 95%. Uncensored fully." https://huggingface.co/dealignai/Nemotron-3-Super-120B-A12B-JANG_2L-CRACK
Nemotron-3-Super Uncensored: Only 43GB (Mac only), Scores 95.7% on MMLU
Reddit r/LocalLLaMA / 3/21/2026
📰 News · Signals & Early Trends · Models & Research
Key Points
- Nemotron-3-Super is a 43GB Mac-only AI model that reportedly scores 95.7% on the MMLU benchmark, indicating strong multitask reasoning for a compact footprint.
- The post states that the author redid the model and that, with reasoning enabled, it reportedly reaches around 95% while remaining fully uncensored.
- It links to a HuggingFace repository (dealignai/Nemotron-3-Super-120B-A12B-JANG_2L-CRACK), suggesting a 120B-parameter base heavily compressed to reach the small download size; "A12B" in the name likely denotes a mixture-of-experts model with ~12B active parameters.
- If the claims hold up, efficient local LLMs of this size running on Mac hardware could influence how developers and organizations deploy models, with knock-on effects for product strategy, engineering workflows, and marketing around AI tools.
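The claimed sizes allow a rough sanity check: a nominal 120B-parameter model fitting in a 43GB download averages under 3 bits per weight, which points to aggressive quantization rather than full-precision weights. A minimal sketch of the arithmetic (assuming decimal units, 1 GB = 10⁹ bytes, and taking the 120B figure from the repository name at face value):

```python
def bits_per_param(size_gb: float, n_params_billions: float) -> float:
    """Average bits per weight implied by a model's download size.

    Assumes decimal units: 1 GB = 1e9 bytes, parameter count in billions.
    """
    total_bits = size_gb * 1e9 * 8          # download size in bits
    total_params = n_params_billions * 1e9  # parameter count
    return total_bits / total_params

# 43 GB download for a nominal 120B-parameter model
avg_bits = bits_per_param(43, 120)
print(f"{avg_bits:.2f} bits/param")  # ≈ 2.87, consistent with ~3-bit quantization
```

This ignores any per-tensor quantization metadata and file-format overhead, so the true average precision of the weights would be slightly lower still.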
Related Articles
Is AI becoming a bubble, and could it end like the dot-com crash?
Reddit r/artificial

I made a 'benchmark' where LLMs write code controlling units in a 1v1 RTS game.
Dev.to

My AI Does Not Have a Clock
Dev.to

From Early Adopter to AI Instructor: Teaching 500 Engineers to Build with LLMs
Dev.to

How to settle on a coding LLM? What parameters to watch out for?
Reddit r/LocalLLaMA