Sorry if this isn't the best place to ask: of the models in the image, which is best for problem solving/coding, and which is best for studying (asking about LLM concepts)? My PC build is RX 9060 XT 16 GB + i3-12100F + 16 GB DDR4 + llama.cpp with the Vulkan backend + Linux Mint.

Reddit r/LocalLLaMA / 4/29/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage

Key Points

  • The post compares several locally run LLMs (Qwen 3.5 27B, Qwen 3.6 27B, MoE models, Qwen3-Coder-30B, and GPT-OSS 20B) on a user’s Linux Mint setup using llama.cpp with a Vulkan backend.
  • Qwen 3.5 27B and Qwen 3.6 27B are described as very accurate for math/problem solving, but they are also reported as slow and power-hungry (about 5 minutes at ~120W per problem).
  • The MoE models are noted to respond faster, but the answers are perceived as more generic, making them less suitable for rigorous problem solving.
  • For studying and learning LLM concepts offline, the user suggests using the faster MoE models as a “Wikipedia-like” resource when internet access is limited.
  • Qwen3-Coder-30B is highlighted as the model the user most likes for coding, while noting that it is an older model.

I gave some math problems to Qwen 3.5 27B and Qwen 3.6 27B and they got all of them right; pretty smart models, I would say, but very slow and power-hungry: they took about 5 minutes, with my GPU at 120 W, to solve one problem.
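The slowness on this hardware is easy to see with a back-of-envelope size check: a dense ~27B model at a typical 4-bit quant barely fits (or doesn't fit) in 16 GB of VRAM, so layers spill to system RAM, while an MoE like Qwen3-Coder-30B only activates a small fraction of its weights per token. A minimal sketch, assuming rough community bits-per-weight figures (not exact per-file GGUF sizes):

```python
# Back-of-envelope quantized model size estimate.
# Assumption: ~4.85 bits/weight for Q4_K_M and ~4.25 for a generic 4-bit
# quant are rough averages, not exact values for any specific GGUF file.
def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate quantized weight size in decimal GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, params, bpw in [
    ("dense 27B @ ~Q4_K_M", 27, 4.85),       # vs. a 16 GB GPU: tight/overflows
    ("Qwen3-Coder-30B (MoE) @ ~Q4_K_M", 30, 4.85),  # big on disk, but few active params per token
    ("GPT-OSS 20B @ ~4-bit", 20, 4.25),
]:
    print(f"{name}: ~{gguf_size_gb(params, bpw):.1f} GB of weights")
```

Note that an MoE's total weights still take full space in memory; its speed comes from activating only a few billion parameters per token, which is why it answers fast even when the file is larger than a dense model's.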

The MoE models answer quite fast, but their answers feel generic, so I wouldn't use them for problem solving. For studying or learning something new, though, they can work like a Wikipedia when I'm without Internet access.
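For reference, a typical llama.cpp Vulkan build and run on a setup like this might look as follows (the model filename and parameter values are illustrative, not taken from the post):

```shell
# Build llama.cpp with the Vulkan backend.
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j

# Run a quantized GGUF. -ngl sets how many layers to offload to the GPU
# (a large value offloads as many as possible; lower it if the model
# spills past 16 GB of VRAM), -c sets the context size.
./build/bin/llama-cli \
    -m Qwen3-Coder-30B-A3B-Instruct-Q4_K_M.gguf \
    -ngl 99 -c 8192 \
    -p "Explain attention in transformers."
```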

Of those, the one I used most is Qwen3-Coder-30B. I really like this one, but it's an older model.

At the beginning of the year I also used GPT-OSS 20B a lot.

submitted by /u/Badhunter31415