Anyone else running local LLMs on older hardware?

Reddit r/LocalLLaMA / 4/15/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage

Key Points

  • A Reddit user asks the community about running local LLMs on older or unusual hardware, noting that an aging Xeon workstation with ample RAM is surprisingly usable.
  • The post collects anecdotes and informal hardware benchmarks rather than introducing new technology or a specific model release.
  • Responses center on identifying the oldest and weirdest setups that can still run a model locally.
  • The discussion implicitly highlights the practical constraints and optimizations involved in running LLMs on limited compute; see the sketch after this list.
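
For context, the usual path on CPU-only hardware like an older Xeon is a quantized GGUF model served through llama.cpp bindings. Below is a minimal sketch using llama-cpp-python; the model filename, context size, and thread count are assumptions to adapt to the machine at hand, not values from the thread:

```python
# Minimal CPU-only inference sketch using llama-cpp-python.
# The model file, context size, and thread count below are
# assumptions; substitute any quantized GGUF model you have.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,     # modest context keeps RAM usage predictable
    n_threads=8,    # tune to the Xeon's physical core count
)

out = llm("Why is CPU inference usually memory-bound?", max_tokens=128)
print(out["choices"][0]["text"])
```

On older CPUs, throughput tends to be limited by memory bandwidth rather than core count, so raising n_threads past the number of physical cores rarely helps.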

I'm using an old Xeon workstation with a decent amount of RAM and it's surprisingly usable. What's the oldest/weirdest hardware you've successfully run a model on?
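
Whether a given box qualifies mostly comes down to RAM. A back-of-the-envelope estimate, assuming weights cost roughly parameters × bits-per-weight / 8 bytes plus runtime overhead (the 1.2× overhead factor and the ~4.5 effective bits for Q4_K_M quantization are rough assumptions, not exact figures):

```python
# Rule-of-thumb RAM estimate for running a quantized model on CPU.
# The 1.2x overhead factor (KV cache, buffers) and the effective
# bits-per-weight value are rough assumptions.
def est_ram_gb(params_billion: float, bits_per_weight: float,
               overhead: float = 1.2) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 1B params @ 8 bits ~ 1 GB
    return weights_gb * overhead

for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
    print(f"{name} @ Q4_K_M: ~{est_ram_gb(params, 4.5):.1f} GB")
```

By this estimate a 7B model at 4-bit quantization fits comfortably in under 8 GB, which is why workstations with "ample RAM" but no GPU remain usable.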

submitted by /u/lewd_peaches