Setup advice: new RTX 5090 (32 GB VRAM) + 96 GB DDR5 RAM.

Reddit r/LocalLLaMA / 3/29/2026

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • The post asks for recommended local AI hardware and model setup to run “Kimi 2.5” for coding workflows similar to Opus, aiming to replace a local “CodeX” tool.
  • The user reports prior difficulties with other models, including tool-related issues (e.g., “tools using Goose”) and unreliable project-folder review by smaller models.
  • They also want their device to handle ComfyUI prompts and workflows locally, with an option to add another 96GB DDR5 RAM (two RAM slots still available).
  • They seek guidance on whether a workstation with more expandable RAM slots would be better than incremental RAM purchases, noting limited availability of 64GB DDR5 sticks locally/Amazon.
  • Overall, the thread is a practical troubleshooting and requirements-gathering discussion for an on-device LLM + UI/workflow use case, given their planned RTX 5090 and DDR5 configuration.

I was playing with different models, but they're not quite what I'm after. I want to be able to run Kimi 2.5 for coding locally, similarly to Opus. Specifically, I want to replace CodeX on my device. Running other models, I had issues with tools using Goose. Even asking a smaller model to review projects in a folder wasn't working like I wanted.

In addition, I wanted something to handle ComfyUI prompts and workflows on the device.

I can buy another 96 GB of RAM if needed. I still have two slots open.

Any ideas on what the best model/setup would be? Should I get a workstation and just start buying more RAM with more slots? I can't seem to find 64 GB DDR5 sticks here in my country, and everything on Amazon seems limited.
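Before buying more RAM, a back-of-envelope memory estimate helps answer the "will it fit?" question. A minimal sketch, assuming the common rule of thumb that quantized weights take roughly `params × bits / 8` bytes plus ~10% overhead for KV cache and runtime buffers, and that a llama.cpp-style runner can split weights across VRAM and system RAM. The parameter counts, quantization levels, and overhead factor below are illustrative assumptions, not specs for any particular Kimi release; check the real model card before spending money:

```python
def model_footprint_gb(params_billion: float, bits_per_weight: float,
                       overhead: float = 1.1) -> float:
    """Rough memory estimate: weight bytes (params * bits / 8),
    scaled by an assumed ~10% overhead for KV cache and buffers."""
    return params_billion * bits_per_weight / 8 * overhead

def fits(params_billion: float, bits_per_weight: float,
         vram_gb: float, ram_gb: float) -> bool:
    """Whether the quantized model fits across GPU VRAM + system RAM
    (i.e. with CPU offload, as llama.cpp-style runners allow)."""
    return model_footprint_gb(params_billion, bits_per_weight) <= vram_gb + ram_gb

# Illustrative sizes only -- verify against real model cards.
VRAM, RAM = 32, 96  # RTX 5090 + the current DDR5 configuration
print(fits(70, 4, VRAM, RAM))    # a 70B model at 4-bit: ~38.5 GB -> True
print(fits(1000, 4, VRAM, RAM))  # a ~1T-param model at 4-bit: ~550 GB -> False
```

Doubling system RAM to 192 GB moves the ceiling to ~224 GB total, which is why the "more slots vs. bigger sticks" question matters: the model class you can run is bounded by the sum, and anything spilling into system RAM runs much slower than VRAM-resident layers.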

submitted by /u/Wa1ker1