AI Navigate

Qwen3.5 35B is surely one of the best local models (pulling above its weight)

Reddit r/LocalLLaMA / 3/15/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • The Reddit post argues that Qwen3.5-35B MOE is one of the best local models, capable of pulling above its weight compared with smaller fine-tuned models.
  • The author shares a concrete test setup (llama-server with reasoning disabled and --fit on, Qwen3.5-35B-A3B-GGUF, CLI Qwen-code, RTX 5080 Mobile, context 70K, PP 373, TG 53.57) to demonstrate its performance.
  • The tester used the model to design a visual app with interactive visualizations for a research paper and to generate a web app inspired by another large React app, referencing an arXiv paper (2601.00063v1).
  • The post links a Reddit gallery and comments, highlighting practical, real-world usage of local LLMs rather than just benchmarks.
  • Overall, the entry showcases the practical viability of local LLMs for building interactive applications and demos.
Qwen3.5 35B is surely one of the best local models (pulling above its weight)

I am hearing a lot about smaller fine-tuned models that are pulling above their weight, with people claiming those models perform much better than Qwen3.5 35B. I agree that some smaller fine-tuned models, and certainly larger models, are great.

But I want to share my experience where Qwen3.5 35B MOE has really surprised me. Here are some details I have attached that explain more:

Model: Qwen3.5-35B-A3B-GGUF\Qwen3.5-35B-A3B-UD-Q4_K_L.gguf
Server: llama-server with reasoning disabled and --fit on
CLI: Qwen-code
GPU: Nvidia RTX 5080 Mobile
Context used: 70K
PP (prompt processing): 373 t/s
TG (token generation): 53.57 t/s
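For readers who want to try a similar setup, the server side can be sketched as a single launch command. This is a hedged approximation, not the author's exact invocation: the model path and --fit come from the post, but the context value and the reasoning-disable flag are assumptions that vary across llama.cpp builds, so check `llama-server --help` on your version.

```shell
# Hypothetical llama-server launch approximating the setup above.
llama-server \
  -m "Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-UD-Q4_K_L.gguf" \
  -c 71680 \
  --fit \
  --reasoning-budget 0

# -c 71680             : roughly the 70K context the post reports (assumed value)
# --fit                : flag named in the post; assumed to fit the model to available VRAM
# --reasoning-budget 0 : assumed flag for "reasoning disabled"; verify against your build
```

Once the server is up, a CLI agent such as Qwen-code can be pointed at its local OpenAI-compatible endpoint.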

What was tested
I provided a research paper and asked it to create a nice visual app with interactive visualizations. I also provided a reference to another app—which itself is a large React app—and asked it to generate a web app for the new paper.

Research paper I used: https://arxiv.org/html/2601.00063v1

submitted by /u/dreamai87