What kind of device is suitable for running local LLM?

Reddit r/LocalLLaMA / 5/2/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • The author considers running local LLMs due to rising costs after Copilot changed its billing model and asks what hardware is suitable for that use case.
  • They compare options including a Mac with large unified memory (e.g., 128GB) and a Windows PC with a high-end RTX GPU (5070/5080/5090), worrying that GPU memory limits could become a serious problem.
  • They also mention mini supercomputers such as the NVIDIA DGX Spark, noting they have heard it may be slower than the other options.
  • The post requests community experience and advice on how to choose a device for running local LLMs effectively.

Since Copilot changed its billing model and became super expensive, I'm starting to consider running a local LLM myself. But I'm not sure what kind of device is suitable for this kind of usage:

  1. A Mac with a large amount of RAM, such as 128GB

  2. A Windows PC with an RTX 5070/5080/5090, but will the VRAM limit become a serious problem? (see the rough sketch after this list)

  3. A mini supercomputer such as the NVIDIA DGX Spark, though I've heard it's relatively slow compared to the others?
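To put option 2's memory worry in numbers, here is a rough back-of-the-envelope sketch in Python. The sizing rule (weights ≈ parameters × bits-per-weight ÷ 8, plus overhead for the KV cache and runtime buffers) and the 1.2× overhead factor are simplifying assumptions for illustration, not vendor figures:

```python
# Back-of-the-envelope memory estimate for hosting LLM weights.
# The 1.2x overhead factor (KV cache, runtime buffers) is a rough
# assumption for illustration, not a measured number.

def estimate_memory_gb(params_billions: float,
                       bits_per_weight: int,
                       overhead: float = 1.2) -> float:
    weights_gb = params_billions * bits_per_weight / 8  # weights alone
    return weights_gb * overhead

for params, bits in [(8, 4), (32, 4), (70, 4), (70, 8)]:
    print(f"{params}B model @ {bits}-bit: ~{estimate_memory_gb(params, bits):.0f} GB")
```

By that arithmetic, a 4-bit 70B model needs on the order of 40GB, which exceeds the VRAM of a single 5070 (12GB), 5080 (16GB), or even 5090 (32GB), while it fits easily in 128GB of unified memory on a Mac; the usual trade-off is that the Mac's GPU throughput is lower, so tokens per second suffer.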

Can you share your experience of picking a device for running local LLMs? Thanks for the advice!

submitted by /u/attic0218