MB Pro M5, 24GB/32GB difference?

Reddit r/LocalLLaMA / 4/17/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage

Key Points

  • The post asks whether upgrading a MacBook Pro M5 from 24GB to 32GB of RAM would meaningfully improve local LLM use in a coding-assistant workflow built on the VS Code GitHub Copilot extension.
  • The user tested Gemma 4 26B via Ollama with a 16k context window and found performance acceptable, but memory usage was very high and memory pressure frequently reached yellow.
  • The core question is whether the step from 24GB to 32GB will reduce memory pressure and improve stability and performance when running models of this size locally.
  • The situation highlights practical capacity planning for local LLMs, where context length and model size can drive memory demands beyond expectations (see the back-of-envelope estimate below).

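For a rough sense of why a 26B model at 16k context strains 24GB, here is a minimal Python sketch of the usual capacity-planning arithmetic: quantized weights plus the KV cache. The layer count, KV-head count, and head dimension are placeholder assumptions, not published Gemma 4 26B specs, and the ~4.8 bits/param figure assumes a Q4_K_M-style quantization.

```python
# Back-of-envelope memory estimate for a local LLM under Ollama.
# All model hyperparameters below are illustrative assumptions;
# the real Gemma 4 26B configuration may differ.

def weights_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate size of quantized weights in GB."""
    return n_params * bits_per_param / 8 / 1e9

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                ctx_len: int, bytes_per_elem: int = 2) -> float:
    """KV cache: 2 tensors (K and V) x layers x KV heads x head dim x context."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 1e9

w = weights_gb(26e9, 4.8)             # ~4.8 bits/param for a Q4_K_M-style quant
kv = kv_cache_gb(48, 8, 128, 16_384)  # assumed: 48 layers, 8 KV heads, fp16 cache
print(f"weights ~{w:.1f} GB, 16k KV cache ~{kv:.1f} GB, total ~{w + kv:.1f} GB")
# -> weights ~15.6 GB, 16k KV cache ~3.2 GB, total ~18.8 GB
```

On those assumed numbers the model alone occupies roughly 19GB before macOS, VS Code, and the browser claim their share, which is consistent with a 24GB machine sitting in the yellow.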
Hi, I got a new MB Pro, 24GB/1TB. I've tested Gemma 4 26B with Ollama at 16k context. I'm using it as a coding assistant via the VS Code GitHub Copilot extension.

It works better than I expected, but it consumes most of my memory and memory pressure always goes to yellow.

Should I return the 24GB model and get 32GB for this combination? Or is there no real difference between these memory sizes?
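One hedged way to frame the 24GB vs 32GB question: macOS reserves only part of unified memory for GPU use (commonly cited as roughly 75% by default, though the exact fraction varies by machine and OS version), so the effective headroom difference is larger than the raw 8GB suggests. A sketch under those assumptions, reusing the ~18.8GB total from the estimate above:

```python
MODEL_GB = 18.8        # weights + 16k KV cache from the sketch above (assumed)
WIRED_FRACTION = 0.75  # assumed default GPU-wired limit; varies by machine/OS

for ram_gb in (24, 32):
    gpu_budget = ram_gb * WIRED_FRACTION
    print(f"{ram_gb} GB RAM: ~{gpu_budget:.0f} GB GPU budget, "
          f"headroom {gpu_budget - MODEL_GB:+.1f} GB")
# 24 GB RAM: ~18 GB GPU budget, headroom -0.8 GB
# 32 GB RAM: ~24 GB GPU budget, headroom +5.2 GB
```

On these (assumed) figures the 24GB machine sits right at its GPU budget, which matches the observed yellow memory pressure, while 32GB would leave several GB of slack for the editor and the OS.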

submitted by /u/dit6118