AI Navigate

Senior engineer: are local LLMs worth it yet for real coding work?

Reddit r/LocalLLaMA / 3/16/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • The author is a senior software engineer and independent contractor who cannot use cloud LLMs due to client restrictions and is evaluating whether local models can meet professional coding needs.
  • They compare several local models (GPT-oss-120B, Qwen 3.5 122B, 27B) and contrast them with cloud options (Opus 4.6, GPT-5/Codex) to understand real-world usefulness for serious coding and agentic workflows.
  • Hardware considerations center on whether a Mac M5 with 128GB RAM is sufficient or if waiting for an M5 Studio would be preferable.
  • The post seeks practical, real-world experiences from people using local models for actual software development, not benchmarks or hobby use.

I know this comes up a lot, and I’ve gone through a bunch of the older threads, but I’m still having a hard time figuring out what actually makes sense for my situation.

I’m a senior software engineer working as an independent contractor, and a lot of my clients don’t allow cloud LLMs anywhere near their codebases.

Because of that, I’ve been following local LLMs for a while, but I still can’t tell whether they’re actually good enough for serious coding / agentic workflows in a professional setting.

I keep seeing GPT-oss-120B recommended, but my experience with it hasn’t been great. I’ve also seen a lot of praise for Qwen 3.5 122B and 27B.

On other projects I can use cloud models, so I know how good Opus 4.6 and GPT-5/Codex are. I’m not expecting local to match that, but I’d love to know whether local is now good enough to be genuinely useful day to day.

I’m also thinking about hardware. The new Mac M5 with 128GB RAM looks interesting, but I’m not sure whether 128GB is enough in practice or still too limiting. Part of me thinks it may make more sense to wait for an M5 Studio.
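For anyone weighing the 128GB question, a back-of-the-envelope sketch can help. This is a rough estimate only, assuming model weights dominate RAM use and ignoring KV cache, context length, and OS overhead; the model sizes and quantization levels are illustrative, not measurements of any specific model:

```python
# Rough memory-footprint estimate for holding a local LLM's weights in RAM.
# Assumes weights dominate; real usage adds KV cache and system overhead.

def model_ram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate RAM needed for the weights alone, in GB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# Illustrative sizes: a 120B-class model and a ~30B-class model,
# at FP16, 8-bit, and a typical ~4.5-bit GGUF-style quantization.
for name, params in [("120B-class", 120), ("~30B-class", 30)]:
    for label, bits in [("FP16", 16), ("8-bit", 8), ("~4.5-bit", 4.5)]:
        print(f"{name} @ {label}: ~{model_ram_gb(params, bits):.0f} GB")
```

By this crude math, a 120B-class model at a ~4.5-bit quant needs on the order of 68 GB for weights, which fits in 128GB with room for context, while FP16 (~240 GB) does not; smaller ~30B models fit comfortably at any common quantization.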

TL;DR:
I know there are already similar posts, but I’m still struggling to map the advice to my situation. I need local LLMs because cloud isn’t allowed for a lot of client work. Are they actually good enough now for professional coding, and is an M5 with 128GB enough to make it worth it?

Would love to hear from people using local models for actual software work, not just benchmarks or hobby use.

submitted by /u/Appropriate-Text2843