My New AI build - please be kind!

Reddit r/LocalLLaMA / 4/24/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • A user describes a budget-focused local AI PC build for running models, detailing the hardware, cooling setup, and upgrade choices made across CPU, RAM, SSDs, and GPUs.
  • The system uses an AMD AM4 desktop platform with a 5800X CPU, 128GB DDR4 (running at 2666MT/s), and two AMD GPUs (Radeon Pro W6800 and Radeon Pro 9700AI) plus a high-power PSU.
  • The builder optimized power for the 9700AI by under-volting it (260W vs 300W) due to a 12V connector issue while monitoring temperatures and reporting minimal performance loss.
  • For software, they run Linux with ROCm and leverage tools like llama.cpp and Ollama, mentioning that a specific llama.cpp build was needed to get it working reliably.
  • Performance is reported as very fast for certain model sizes (up to about minimax M2.7 Q4), with one remaining issue related to gated delta net attention causing full reprocessing each turn.

This is my new AI machine!

Lian Li Lancool 217 case with two large (170 x 30mm) front intake fans, three 120mm bottom intake fans, one 120mm rear exhaust fan (plus the two GPU exhausts out the back), and three 120mm ceiling exhaust fans. Three of those fans I added to what came in the case as standard; those were Arctic P12 Pro fans.

Thermalright Assassin CPU cooler.

ASUS ROG Strix B550-A mobo, which is somehow negotiating two x16 PCIe links simultaneously. That isn't in the spec sheet, but it is definitely happening.

5800X processor. Not the 3D version, but that isn't hugely consequential for my use case.

128GB DDR4-3200 running at 2666MT/s CL18 (snappy enough for model weight overflow).

32GB Radeon Pro W6800

32GB Radeon Pro 9700AI

One old 2TB mechanical spinning disk drive.

Main boot drive is a basic 2TB SSD. Snappy enough.

Another 1TB SSD mounted.

Corsair RM850e PSU

------

This was for local AI on a budget. I also needed to upgrade several existing pieces of hardware (adding RAM and SSDs), so I opted for an AM4 build for the desktop. My laptops are AM5, AM4, and an old Intel notebook upgraded with 32GB DDR4 for CPU inference. So when I want to game, I use the AM5 lappy. Won't discuss such heresy any further in this sacred sub.

I have under-volted the 9700AI to 260W, down from its standard 300W, because of that 12V connector issue. I have been monitoring temps carefully and it seems fine, with little to no performance reduction. Even when I allowed it the full 300W, it rarely drew that much.
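For anyone wanting to do the same on ROCm, a power cap can be set with rocm-smi. A sketch, not necessarily how OP did it, and assuming the 9700AI enumerates as device 1 (check with a bare `rocm-smi` first):

```shell
# List GPUs and confirm which index is the 9700AI (device 1 is an assumption here)
rocm-smi

# Cap the card's power draw at 260 W (needs root)
sudo rocm-smi -d 1 --setpoweroverdrive 260

# Keep an eye on temperature and power while a model is running
watch -n 2 'rocm-smi -d 1 --showtemp --showpower'
```

Note the cap does not survive a reboot, so it's worth putting the rocm-smi line in a startup script or systemd unit if you rely on it.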

I apologise to the PC Master Race overlords for my poor cable management.

Lastly, this is not its final home. I'm moving apartments soon and will then have it all set up on a desk in a space with proper airflow.

Ok, fingers crossed this goes nicely and you guys don't sh*t all over my lovely build. I am not a pro, so it was tough! And financially stressful!

Thanks :)

Edit: typos. And below:

Performance-wise it is blisteringly fast up to MiniMax M2.7 Q4. I haven't tried larger models than that yet.

As both GPUs are AMD, the OS is Linux, and I am using ROCm with llama.cpp, Ollama, opencode, Claude Code/cowork for cloud tasks, etc. I have had a few problems and needed to use a specific llama.cpp build, but now it works beautifully, with the exception of difficulty with gated delta net attention, which causes full reprocessing each turn. Otherwise, it works like a charm.

Single-GPU tasks go to the 9700 while the W6800 handles display and system requirements. For larger models, I do layer split. Other approaches resulted in VERY slow responses, as every query took multiple trips across PCIe.
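A sketch of how that routing can be done with llama.cpp's own flags (device index and model filenames here are placeholders, not OP's exact setup):

```shell
# Small model: pin it entirely to one GPU, assuming the 9700 enumerates as device 1
HIP_VISIBLE_DEVICES=1 ~/llama.cpp/build/bin/llama-server \
  -m small-model.gguf --split-mode none

# Large model: distribute whole layers across both cards
~/llama.cpp/build/bin/llama-server \
  -m big-model.gguf --split-mode layer
```

Layer split keeps each layer resident on one card, so cross-GPU traffic stays modest; `--split-mode row` splits individual tensors and exchanges data between GPUs far more often, which would plausibly produce the very slow multi-hop behaviour described above.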

Here is an example of my llama.cpp settings:

~/llama.cpp/build/bin/llama-server \
  -m /home/ell/models/Mistral-Small-4/Mistral-Small-4-119B-2603-merged.gguf \
  --alias mistral-small-4-119b \
  --split-mode layer \
  --parallel 1 \
  --no-warmup \
  --ctx-size 32768 \
  --fit on \
  --fit-target 4096 \
  --cache-ram 0 \
  -fa auto \
  --no-mmap \
  --host 0.0.0.0 --port 3000
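Once the server is up, it answers OpenAI-style chat requests; for example (port and alias taken from the command above, prompt is just a placeholder):

```shell
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistral-small-4-119b",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```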

submitted by /u/Ell2509