What are you doing with your 60-128gb vram?

Reddit r/LocalLLaMA / 3/24/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage

Key Points

  • A Reddit user asks what others are using 60–128GB VRAM GPUs for, noting they recently bought an Evo X2 128GB to improve beyond 24B Q4 models for roleplay and experimentation.
  • They’re curious about use cases beyond image/video generation, including training models and coding small projects or building websites.
  • The user wonders how a ~120B local model compares in practice to hosted assistants like GPT or Claude Sonnet.
  • They plan to run the system on Linux headless and access it via API, describing themselves as technically inclined but new to the specific setup.
  • Overall, the post solicits community ideas and inspiration for what high-VRAM local LLM setups can realistically do.

I just bought an Evo X2 with 128GB, as I love roleplay and want to up my game from 24B Q4 models. Obviously, image and video generation are a thing. But what else? Training models? Coding fun small projects or websites? I really have no clue how a 120B model compares to GPT or Claude Sonnet.

I plan to run it headless on Linux and access it via API. Though I'm a tech guy, I have no clue what I'm doing (yet). Just playing around with things and hopefully getting inspired by you guys.
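For the headless-plus-API part, here is a minimal sketch of what that access pattern could look like, assuming something like llama.cpp's llama-server (or any OpenAI-compatible server) is already running on the box; the LAN address, port, and model path are placeholders, not the OP's actual setup:

```python
# Minimal sketch: query a local OpenAI-compatible endpoint from another machine.
# Assumes a server such as llama.cpp's llama-server is running on the headless box, e.g.:
#   llama-server -m /path/to/model.gguf --host 0.0.0.0 --port 8080
# Host, port, and model path below are placeholders.
import json
import urllib.request

payload = {
    "model": "local",  # with a single loaded model, llama-server accepts any name here
    "messages": [{"role": "user", "content": "Say hello from my headless box."}],
    "max_tokens": 64,
}

req = urllib.request.Request(
    "http://192.168.1.50:8080/v1/chat/completions",  # placeholder LAN address
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Send the chat-completion request and print the assistant's reply.
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["choices"][0]["message"]["content"])
```

Because the endpoint speaks the OpenAI chat-completions format, the same box can also be pointed at by most existing chat frontends and coding tools just by changing their base URL.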

submitted by /u/Panthau