AI Navigate

Qwen3.5-9B GGUF tuned for reasoning + function-calling, now on Hugging Face

Reddit r/LocalLLaMA / 3/18/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • A Qwen3.5-9B model fine-tuned on reasoning data and FunctionGemma function-calling data has been uploaded in GGUF format for llama.cpp-compatible runtimes.
  • The tuning emphasizes structured responses, tool-use style behavior, and action-oriented prompting.
  • The author invites feedback on performance across general chat, reasoning tasks, structured outputs, and function-calling prompts when run with local runtimes.
  • The release links to the Hugging Face repo: slyfox1186/qwen3.5-9b-opus-4.6-functiongemma.gguf.

I just uploaded a Qwen3.5-9B GGUF that I fine-tuned on a mix of reasoning data and FunctionGemma-related function-calling data, then converted for llama.cpp/GGUF runtimes.

It’s still a Qwen-family model, but the tuning pushes it more toward structured responses, tool-use style behavior, and action-oriented prompting.

If you run local models with llama.cpp, LM Studio, Ollama, or similar, I’d be interested in hearing how it performs for:

  • general chat
  • reasoning tasks
  • structured outputs
  • function-calling style prompts
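One common way to exercise the function-calling behavior locally is through llama.cpp's OpenAI-compatible server (`llama-server`) or LM Studio's server mode. Below is a minimal sketch of a tool-call request payload in that shape; the tool name, schema, and model identifier are illustrative assumptions, not part of this release:

```python
import json

# Hypothetical function-calling request in the OpenAI-compatible shape
# exposed by llama.cpp's llama-server and LM Studio. The "get_weather"
# tool and its schema are made up for illustration.
payload = {
    "model": "qwen3.5-9b-opus-4.6-functiongemma",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant with tool access."},
        {"role": "user", "content": "What's the weather in Berlin right now?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

# Serialize the request body; POST this to the server's
# /v1/chat/completions endpoint to see whether the model emits a tool call.
body = json.dumps(payload, indent=2)
print(body)
```

A model tuned for tool use should respond to a request like this with a structured `tool_calls` entry naming `get_weather` rather than free-text prose, which makes it easy to compare against the base Qwen3.5-9B.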

Repo link: slyfox1186/qwen3.5-9b-opus-4.6-functiongemma.gguf on Hugging Face

submitted by /u/RiverRatt