Qwen 3.5 4b versus Qwen 2.5 7b for Home Assistant

Reddit r/LocalLLaMA / 3/29/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage

Key Points

  • A Reddit user asks whether Qwen 3.5 4B performs better than the commonly used Qwen 2.5 7B when integrated with Home Assistant for local use.
  • They note prior experience where Qwen 3 was disappointing and they reverted to Qwen 2.5 7B, and they are now testing Qwen 3.5 4B.
  • The user specifically expects improvements from Qwen 3.5 4B’s multimodal capabilities and its smaller/faster footprint, and wonders if it will be better at using Home Assistant’s tool set.
  • They run the model on a GTX 3060 12GB and plan to report back findings as part of the ongoing discussion.
  • The post is framed as community troubleshooting and benchmarking rather than an announcement of a new product or release.

Just curious if anyone here has tested out Qwen 3.5 4b with Home Assistant. Qwen 2.5 7b has been my go-to for a long time, and Qwen 3 was so disappointing that I reverted back. Really curious to see how I can leverage its multimodal functionality, plus it's smaller/faster. Can I assume it's better at using the Home Assistant tool set?

For reference, I'm running the model on a GTX 3060 12GB.

Curious to hear back from anyone; keeping my fingers crossed that it's going to be a big upgrade. Just starting the download now. I will of course report back with my findings as well.

submitted by /u/EvolveOrDie1