AI Navigate

qwen3.5-35b-a3b is a gem

Reddit r/LocalLLaMA / 3/13/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • qwen3.5-35b-a3b is described as a fast and capable model for generating or updating code summaries and docstrings, with output that the author subjectively rates as slightly better than the 122b model.
  • In the tested setup, mlx-community/qwen3.5-35b-a3b (6-bit) on an M4 Max 128GB rewrote the file in about 12 seconds, running at 80–90 tokens per second.
  • The author used llmaid with the code-documenter.yaml profile to process files in the repository by sending them to the LLM and replacing them locally.
  • The post hides more details in a spoiler and provides concrete commands to reproduce the workflow, illustrating a practical usage pattern for automated code documentation.

I am using this model to generate or update code summaries (docstrings). It seems to hit the sweet spot for this task: it's super fast and produces great output. To my surprise, it even generated slightly better docs than the 122b model. Highly subjective, of course.

My current setup is mlx-community/qwen3.5-35b-a3b (6-bit) on an M4 Max with 128 GB, which took just 12 seconds to rewrite this file (reasoning included). The model runs at 80–90 tokens per second.
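For rough context, those figures imply the whole rewrite (reasoning tokens included) was on the order of a thousand generated tokens. A quick back-of-the-envelope check, using only the numbers reported above:

```python
# Sanity check on the reported throughput (values taken from the post).
seconds = 12               # time to rewrite the file
rate_low, rate_high = 80, 90  # reported tokens per second

# Implied total generated tokens for the run.
tokens_low = seconds * rate_low    # 960
tokens_high = seconds * rate_high  # 1080
print(f"~{tokens_low}-{tokens_high} tokens generated")
```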

Some might ask for more details; others might cry "self-promotion". So I decided to hide the details in a spoiler.

I used my own tool, llmaid (GitHub), to go through all the files in my code repository, send each one to the LLM with the instruction to rewrite its contents accordingly, and then replace it locally. llmaid uses profiles that specify what to do and how; the one I used here is code-documenter.yaml. The command I used looks like this:

llmaid --profile ./profiles/code-documenter.yaml --targetPath ~/testfiles --provider lmstudio --uri http://localhost:1234/v1 --model qwen3.5:35b-a3b --verbose
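For readers who want the gist without the tool, the loop described above can be sketched in a few lines of Python against any OpenAI-compatible endpoint such as LM Studio's. This is a minimal illustration, not llmaid's actual implementation: the names `ENDPOINT`, `SYSTEM_PROMPT`, `build_payload`, and `document_file` are all hypothetical, and a real profile like code-documenter.yaml would carry a far more careful prompt.

```python
import json
from pathlib import Path
from urllib import request

# Assumed LM Studio endpoint and the model named in the post.
ENDPOINT = "http://localhost:1234/v1/chat/completions"
MODEL = "qwen3.5:35b-a3b"

# Hypothetical stand-in for what a documenter profile might instruct.
SYSTEM_PROMPT = (
    "Rewrite the file, adding or updating code summaries and docstrings. "
    "Return only the complete rewritten file."
)

def build_payload(source: str) -> dict:
    """Build an OpenAI-style chat-completion request for one file."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": source},
        ],
        "temperature": 0.2,
    }

def document_file(path: Path) -> None:
    """Send one file to the local endpoint and replace it with the rewrite."""
    payload = build_payload(path.read_text())
    req = request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        answer = json.load(resp)
    path.write_text(answer["choices"][0]["message"]["content"])

if __name__ == "__main__":
    target = Path("testfiles")  # illustrative target directory
    if target.is_dir():
        for source_file in target.rglob("*.py"):
            document_file(source_file)
```

llmaid itself adds the parts that matter in practice (profiles, provider selection, verbose logging), but the core of the workflow is just this request-and-replace loop per file.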

submitted by /u/waescher