My first impressions of Minimax M2.7 (Q5_K_M) vs Qwen 3.5 27b (Q8_0)

Reddit r/LocalLLaMA / 4/15/2026


Key Points

  • The author compares Minimax M2.7 (Q5_K_M) and Qwen 3.5 27b by generating AGENTS.md documentation for a Python/FastAPI/LangGraph project with similar recommended settings.
  • Minimax is reported as extremely slow on their machine and produces shallow, sometimes incorrect documentation, including wrong assumptions about core project components.
  • Qwen 3.5 is reported to generate more thorough, well-structured docs and to ask clarifying questions when it cannot infer details.
  • The post ends by inviting others to share their experiences, questioning whether Minimax’s issue is down to quantization (“lobotomized”) or overall model quality.
  • Overall, the perceived result is that Qwen 3.5 better handles complex codebase documentation tasks under the author’s test conditions.

I'm not sure if AesSedai's Q5_K_M quant of Minimax M2.7 is too lobotomized or if the model itself is just weak.

I ran a simple experiment with both models using their recommended parameters. The task was simply to generate some AGENTS.md files for a Python/FastAPI/LangGraph project of mine (via Roo Code's /init command), which has some degree of complexity.

Minimax runs painfully slowly on my setup, so I was expecting it to demolish Qwen 3.5... but it ended up generating shallow, useless documentation, and it even made wrong assumptions about some core components.

Qwen 3.5, on the other hand, dug deep into the codebase, created nicely organized docs, and even asked me about aspects it could not initially infer from the context.

So... I'm curious to hear about your experiences with the latest version of Minimax. Is it a disappointing model, or has Qwen 3.5 just set the bar too high?

submitted by /u/Septerium