AI Navigate

Homelab has paid for itself! (at least this is how I justify it...)

Reddit r/LocalLLaMA / 3/15/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • The post provides an update on a homelab used for LLM experiments, including mapping current LLMs like Qwen3.5 and GLM and showing 'Brain Scan' visualizations.
  • Power usage is logged with Tasmota and Grafana; at an estimated on-demand rate of about $3.50 per GH100 module per hour, the equivalent cloud spend to date would be roughly $10,000.
  • The author paid $9,000 for the rig and estimates power costs under $1,000, claiming they are officially ahead financially.
  • The narrative emphasizes a personal cost-justification for owning a homelab to run AI experiments rather than paying for cloud compute.
  • The post references LLM Neuroanatomy and current LLMs to illustrate practical, hands-on hardware-backed ML experimentation.

Hey, I thought I'd do an update on my Homelab I posted a while back.

I have it running LLM experiments, which I wrote up here. Basically, it seems I may have discovered LLM Neuroanatomy, and am now using the server to map out current LLMs like the Qwen3.5 and GLM series (that's the partial 'Brain Scan' images here).

Anyway, I have the rig powered through a Tasmota smart plug and log everything to Grafana. My power costs are pretty high over here in Munich, but calculating at a cost of about $3.50 per GH100 module per hour (H100s range in price, but these have 480GB system RAM and 8TB SSD per chip, so I think $3.50 is about right), I would have paid $10,000.00 to date in on-demand GPU use.

As I paid $9,000 all up, and power was definitely less than $1,000, I am officially ahead! Remember, stick to the story if my wife asks!
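The break-even math above can be sketched in a few lines. This is an illustrative calculation only, using the figures as the author states them ($3.50/hr per GPU module, $9,000 rig cost, under $1,000 in power); the function names are my own.

```python
# Break-even sketch: owning the rig vs. renting equivalent on-demand GPUs.
# All figures are the post author's estimates, not measured values.
HOURLY_RATE = 3.50   # USD per GPU module per hour (author's estimate)
RIG_COST = 9_000     # USD, total hardware spend
POWER_COST = 1_000   # USD, stated upper bound on electricity so far

def cloud_equivalent_cost(gpu_hours: float) -> float:
    """On-demand cost for the same GPU-hours at the assumed hourly rate."""
    return gpu_hours * HOURLY_RATE

def breakeven_gpu_hours() -> float:
    """GPU-hours after which renting would have cost more than owning."""
    return (RIG_COST + POWER_COST) / HOURLY_RATE

print(f"Break-even at ~{breakeven_gpu_hours():,.0f} GPU-hours")
print(f"$10,000 of rental buys {10_000 / HOURLY_RATE:,.0f} GPU-hours")
```

At these numbers, the rig pays for itself after roughly 2,857 GPU-hours (about four months of single-module 24/7 use), which matches the post's claim of being "officially ahead" at ~$10,000 of equivalent rental.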

submitted by /u/Reddactor