Are there actually people here who get real productivity out of models that fit in 32–64GB of RAM, or is that just playing around with little genuine usefulness?

Reddit r/LocalLLaMA / 4/24/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage

Key Points

  • The post questions whether anyone can derive real, practical productivity benefits from running LLMs on machines with only 32–64GB of RAM rather than much larger memory setups.
  • The author is weighing a new MacBook purchase and wants to understand how much RAM is genuinely necessary for useful local model work (a rough footprint sketch follows this list).
  • The discussion asks respondents what concrete tasks they perform with models at smaller memory footprints and what they would use additional RAM (e.g., 128GB) for.
  • Overall, it’s a community prompt focused on feasibility, real-world utility, and hardware planning for local LLM workflows.
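
Not from the post itself, but since the underlying question is largely arithmetic about model footprints versus available RAM, here is a minimal back-of-the-envelope sketch in Python. The quantization labels, bits-per-weight figures, and the ~10% overhead allowance for KV cache and runtime buffers are rough assumptions for illustration, not measurements:

```python
# Rough RAM estimate for a quantized local model.
# Assumption: bits-per-weight values approximate common GGUF
# quantization levels; the 10% overhead factor is a crude
# allowance for KV cache and runtime buffers.

QUANT_BITS = {"Q8_0": 8.5, "Q6_K": 6.6, "Q4_K_M": 4.8}  # approx bits per weight

def est_ram_gb(params_b: float, quant: str, overhead: float = 1.10) -> float:
    """Estimate resident memory in GB for a model with params_b billion weights."""
    bytes_per_weight = QUANT_BITS[quant] / 8
    # params_b (billions) * bytes/weight gives GB directly (1e9 params * bytes ≈ 1 GB)
    return params_b * bytes_per_weight * overhead

for size in (8, 14, 32, 70):
    for q in ("Q4_K_M", "Q8_0"):
        print(f"{size}B @ {q}: ~{est_ram_gb(size, q):.0f} GB")
```

On those rough numbers, 32GB comfortably holds ~14B-class models at 4-bit, 64GB reaches ~70B at 4-bit with modest context, and 128GB buys headroom for higher-precision quants or longer contexts.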

And if you do think it genuinely helps you (professionally or otherwise), what do you use it for? I'd also be interested in hearing about 128GB. The reason I ask is that I need a new MacBook and I'm deciding how much RAM to get.

Thank you

submitted by /u/ceo_of_banana