MacBook Pro M5 MAX 64GB. Tested coding primitives. The 27B model thinks more, but the result is more precise and correct. The 35B model handled the task worse, but did it faster. What's your experience?

Prompt: Write a single HTML file with a full-page canvas and no libraries. Simulate a realistic side-view of a moving car as the main subject. Keep the car visible in the foreground while the background landscape scrolls continuously to create the feeling that the car is driving forward. Use layered scenery for depth: nearby ground, roadside elements, trees, poles, and distant hills or mountains should move at different speeds for a natural parallax effect. Animate the wheels spinning realistically and add subtle body motion so the car feels connected to the road. Let the environment pass smoothly behind it, with repeating but varied scenery that makes the movement feel believable. Use cinematic lighting and a cohesive sky, such as sunset, dusk, or daylight, to enhance atmosphere. The overall motion should feel calm, immersive, and realistic, with a seamless looping animation.
Compared QWEN 3.6 35B with QWEN 3.6 27B for coding primitives
Reddit r/LocalLLaMA / 4/24/2026
💬 Opinion · Signals & Early Trends · Models & Research
Key Points
- A Reddit comparison tested Qwen 3.6 35B against Qwen 3.6 27B on coding "primitives," reporting roughly 72 TPS throughput for the 35B model versus 18 TPS for the 27B.
- The 27B model was observed to “think more,” producing results that the tester considered more precise and correct for the coding task.
- The 35B model completed the task faster but was reported to handle it worse, producing less correct results in the same test.
- The post invites other users to share their own experiences with these models’ coding behavior and tradeoffs between speed and accuracy.
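The prompt's core technique is layered parallax: each scenery layer scrolls at a speed proportional to its depth, and the wheel spin is tied to the same ground speed. A minimal sketch of that math (layer names, speeds, and depth factors are illustrative assumptions, not taken from the post or from either model's output):

```javascript
// Horizontal scroll offset for one parallax layer at a given time.
// Distant layers use smaller depth factors, so they move slower.
// Wrapping with modulo makes the scenery loop seamlessly.
function layerOffset(elapsedMs, groundSpeedPxPerMs, depthFactor, tileWidthPx) {
  const raw = elapsedMs * groundSpeedPxPerMs * depthFactor;
  return ((raw % tileWidthPx) + tileWidthPx) % tileWidthPx;
}

// Wheel rotation linked to ground speed so the spin looks physical:
// angle (radians) = distance travelled / wheel radius.
function wheelAngle(elapsedMs, groundSpeedPxPerMs, wheelRadiusPx) {
  return (elapsedMs * groundSpeedPxPerMs) / wheelRadiusPx;
}

// In the browser, a draw loop would paint each layer tile twice
// (at -offset and at tileWidth - offset) so the seam never shows:
//   const layers = [
//     { depth: 0.2, draw: drawHills },  // far: slowest
//     { depth: 0.6, draw: drawTrees },  // mid
//     { depth: 1.0, draw: drawRoad },   // near: full speed
//   ];
//   function frame(t) {
//     for (const l of layers) {
//       const off = layerOffset(t, 0.3, l.depth, canvas.width);
//       l.draw(-off);
//       l.draw(canvas.width - off);
//     }
//     requestAnimationFrame(frame);
//   }
```

Driving every layer and the wheels off one shared ground speed is what makes the motion feel coherent; mismatched speeds are the usual giveaway in generated attempts at this prompt.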
Related Articles
I’m working on an AGI and human council system that could make the world better and keep checks and balances in place to prevent catastrophes. It could change the world. Really. I’m trying to get ahead of the game before an AGI is developed by someone who only has their best interest in mind.
Reddit r/artificial
Deepseek V4 Flash and Non-Flash Out on HuggingFace
Reddit r/LocalLLaMA

DeepSeek V4 Flash & Pro Now out on API
Reddit r/LocalLLaMA

I’m building a post-SaaS app catalog on Base, and here’s what that actually means
Dev.to

r/LocalLLaMa Rule Updates
Reddit r/LocalLLaMA