If it works, it ain’t stupid!

Reddit r/LocalLLaMA / 3/30/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage

Key Points

  • The post reports a practical cooling improvement for a Tesla M40 GPU in a local LLaMA setup: after addressing mounting/fit issues, the card runs much cooler under load, though it still throttles in a 30-minute stress test.

The card runs really hot under load, even with a dedicated fan. RTX 6000 mounts semi-fit the M40 with some fitting work. That cut temps in half, though the card still throttles during a 30-minute stress test.
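For anyone wanting to reproduce the before/after comparison, a minimal sketch of a temperature logger for such a stress test, assuming an NVIDIA driver with `nvidia-smi` on the PATH (`temperature.gpu` is a documented `nvidia-smi` query field; the helper names here are illustrative, not from the post):

```python
import subprocess
import time


def parse_temps(csv_output: str) -> list[int]:
    """Parse `nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader`
    output into a list of per-GPU temperatures in degrees C."""
    return [int(line.strip()) for line in csv_output.splitlines() if line.strip()]


def read_gpu_temps() -> list[int]:
    # Live query; requires an NVIDIA driver to be installed.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
        text=True,
    )
    return parse_temps(out)


if __name__ == "__main__":
    # Log once a minute for 30 minutes, mirroring the post's stress test.
    for _ in range(30):
        print(read_gpu_temps())
        time.sleep(60)
```

Comparing the logged curves before and after a mounting change makes "cut temps in half" verifiable rather than a feel-based claim.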

submitted by /u/The_Covert_Zombie

If it works, it ain’t stupid! | AI Navigate