MacBook M4 Pro for coding LLMs

Reddit r/LocalLLaMA / 3/29/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • The post asks whether an M4 Pro MacBook with 48GB RAM is a worthwhile setup for experimenting with local LLMs for coding.
  • The author considers specific code-focused models (e.g., Qwen3-Coder 30B, Qwen3.5 27B, and Qwen2.5-Coder 7B) and wonders what local use would realistically enable.
  • It discusses using the continuous.dev extension and seeks practical benefits beyond concerns about sending proprietary work to public LLMs.
  • The author compares the value of local inference versus $20 subscription services and questions whether paid cloud access is likely to be superior.
  • Overall, the thread is centered on cost/benefit and workflow implications for developers evaluating local LLM coding support.

Hello,

I haven’t worked with local LLMs in a long time.

Currently I have an M4 Pro with 48GB of memory.

Is it really worth trying local LLMs? All I can probably run is qwen3-coder:30b or qwen3.5:27b (without thinking), plus qwen2.5-coder-7b for autosuggestions.
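
For a rough sanity check on whether those fit in 48GB, here's a back-of-envelope memory estimate (a sketch: the 4.5 bits/parameter figure assumes a Q4_K_M-style quant, and the overhead allowance for the KV cache and runtime buffers is a guess; real file sizes vary by quant):

```python
# Back-of-envelope: which quantized models fit in 48GB of unified
# memory, leaving headroom for macOS and other apps.
BITS_PER_PARAM = 4.5   # assumed Q4_K_M-style quantization
OVERHEAD_GB = 4.0      # assumed KV cache + runtime buffers

models = {
    "qwen3-coder:30b": 30e9,
    "qwen3.5:27b": 27e9,
    "qwen2.5-coder:7b": 7e9,
}

for name, params in models.items():
    weights_gb = params * BITS_PER_PARAM / 8 / 1e9
    total_gb = weights_gb + OVERHEAD_GB
    print(f"{name}: ~{weights_gb:.1f} GB weights, ~{total_gb:.1f} GB total")
```

At Q4-ish quants the 30B model lands around 21GB all-in, so even the largest of the three leaves room for an editor, a browser, and a 7B autocomplete model alongside it.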

Do you think it is worth playing with using the continuous.dev extension? Any benefits besides “my super innovative application that will never be published can’t be sent to a public LLM”?
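
The model tags above follow Ollama's naming, so assuming that's the backend, a minimal way to poke at one of these models outside the editor is Ollama's local HTTP API on its default port (the model tag and prompt here are just illustrations):

```python
import json
import urllib.request

# Minimal sketch: ask a locally served model a coding question via
# Ollama's /api/generate endpoint (default port 11434). Assumes the
# model has already been pulled; swap in whatever tag you run.
payload = {
    "model": "qwen2.5-coder:7b",
    "prompt": "Write a Python function that reverses a linked list.",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```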

Wouldn’t a $20 subscription be better than local?
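
One way to frame the $20 question is raw running cost. A quick comparison sketch (every input is an assumption: the package power under inference load, the daily usage, and the electricity tariff; it also ignores the hardware already paid for and any quality gap versus cloud models):

```python
# Rough monthly running cost of local inference vs. a $20 subscription.
SUBSCRIPTION_USD = 20.0
POWER_DRAW_W = 60.0    # assumed draw under inference load
HOURS_PER_DAY = 4.0    # assumed active inference time
PRICE_PER_KWH = 0.30   # assumed electricity price, USD

kwh_per_month = POWER_DRAW_W / 1000 * HOURS_PER_DAY * 30
local_cost = kwh_per_month * PRICE_PER_KWH
print(f"Local electricity: ~${local_cost:.2f}/month "
      f"vs ${SUBSCRIPTION_USD:.2f}/month subscription")
```

Under these assumptions local inference costs a couple of dollars a month in electricity, so the real trade-off is model quality and convenience, not running cost.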

submitted by /u/TheRandomDividendGuy