"Absolutely amazing. M5 Max should be like 50 token/s and 400 pp, we're getting closer to being 'Sonnet 4.5 at home' levels."
MiniMax M2.7 (Mac only) 63GB: 88% and 89GB: 95%, MMLU 200q
Reddit r/LocalLLaMA / 4/12/2026
💬 Opinion · Signals & Early Trends · Tools & Practical Usage · Models & Research
Key Points
- The post shares claimed results for "MiniMax M2.7 (Mac only)" at two model sizes, reporting 88% accuracy for the 63GB variant and 95% for the 89GB variant on a 200-question MMLU sample ("MMLU 200q").
- It links to two Hugging Face model entries (JANGQ-AI/MiniMax-M2.7-JANG_2L for 63GB and JANGQ-AI/MiniMax-M2.7-JANG_3L for 89GB), indicating availability for local use.
- The author extrapolates throughput on upcoming Apple silicon (projecting roughly 50 tokens/s generation and 400 tokens/s prompt processing on an M5 Max), suggesting such local setups are approaching "Sonnet 4.5 at home" capability.
- The content is community signal from r/LocalLLaMA rather than an official benchmark release, so the methodology and reproducibility are unverified; readers should inspect the linked models and rerun the evaluation themselves.
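For readers who want to reproduce the claimed numbers, the core of an MMLU-style check is simple: collect the model's chosen answer letter for each multiple-choice question and compute the fraction that match the gold labels. The sketch below shows only that scoring step, assuming you have already gathered predictions from a local endpoint (the sample data here is illustrative, not from the post):

```python
# Minimal sketch of scoring an MMLU-style multiple-choice run
# (e.g. a 200-question sample, as in the post's "MMLU 200q" framing).
# Predictions would come from querying the local model; here they are
# hypothetical placeholder values.

def score(predictions, answers):
    """Return the fraction of questions where the predicted letter
    (A/B/C/D) matches the gold answer, ignoring case and whitespace."""
    assert len(predictions) == len(answers), "one prediction per question"
    correct = sum(p.strip().upper() == a.strip().upper()
                  for p, a in zip(predictions, answers))
    return correct / len(answers)

# Tiny worked example: 4 questions, 3 answered correctly.
gold = ["A", "C", "B", "D"]
preds = ["A", "C", "B", "A"]
print(f"accuracy: {score(preds, gold):.0%}")  # -> accuracy: 75%
```

A claimed 88% on 200 questions corresponds to 176 correct answers; re-running with a different question sample can shift the score by a few points, which is worth keeping in mind when comparing the two model sizes.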
Related Articles
- Black Hat USA (AI Business)
- Black Hat Asia (AI Business)
- Your developers are already running AI locally: Why on-device inference is the CISO's new blind spot (VentureBeat)
- ChatGPT Prompt Engineering for Freelancers: A Step-by-Step Guide to Unlocking AI-Powered Client Acquisition (Dev.to)
- From Batch to Bot: AI for Specialty Food Label Compliance (Dev.to)