My company just handed me a 2x H200 (282GB VRAM) rig. Help me pick the "Intelligence" ceiling.

Reddit r/LocalLLaMA / 3/18/2026

My workplace just got a server equipped with 2x Nvidia H200 GPUs (141GB HBM3e each). I've been asked to test LLMs on it since they know "I do that at home".

While I have experience with smaller local setups, 282GB of VRAM is a different beast entirely. I want to suggest something more "interesting" and powerful than just the standard gpt-oss route. I'm interested in raw "intelligence" over ultra-high speeds. So what models / quants would you suggest they put on it?
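For ballpark sizing, weight memory is roughly parameters × bits-per-weight ÷ 8, and you want headroom on top for KV cache and runtime overhead. A rough back-of-envelope sketch in Python; the candidate sizes and quant widths below are illustrative assumptions, not recommendations:

```python
# Back-of-envelope check: do a model's weights fit in 282GB at a given quant?
# Real usage also needs KV cache and framework overhead, so keep ~15% headroom.

def weight_gb(params_b: float, bits: float) -> float:
    """Approximate weight memory in GB for params_b billion parameters."""
    return params_b * 1e9 * bits / 8 / 1e9

VRAM_GB = 282  # 2x H200, 141GB HBM3e each

# (label, billions of params, effective bits per weight) -- hypothetical examples
candidates = [
    ("~70B dense, FP16", 70, 16),
    ("~70B dense, 8-bit", 70, 8),
    ("~120B dense, 8-bit", 120, 8),
    ("~400B MoE, ~4.5-bit quant", 400, 4.5),
]

for name, params_b, bits in candidates:
    gb = weight_gb(params_b, bits)
    verdict = "fits" if gb < VRAM_GB * 0.85 else "too big"  # headroom for KV cache
    print(f"{name}: ~{gb:.0f} GB weights -> {verdict}")
```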

EDIT: They were actually a bit more specific about the use case. They want to use the LLM for local coding in the developers' IDEs (code completion and generation, as well as reviews). The person I spoke to was also really interested in OpenClaw and AI agents, and asked whether I could set one up for us to evaluate once I've found a good model. So it's basically a playground for us.
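For the IDE / agent use case, one common pattern is to expose whatever model wins the bake-off through an OpenAI-compatible endpoint; vLLM, for instance, can split a model across both H200s with `vllm serve <model> --tensor-parallel-size 2`. A minimal smoke-test sketch, assuming such a local endpoint is running (the model name and port are placeholders, and the `openai` package is assumed installed):

```python
# Smoke test against a local OpenAI-compatible endpoint, e.g. one started with:
#   vllm serve <model> --tensor-parallel-size 2
# Model name and port below are assumptions; match what the server reports.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-coder",  # hypothetical name; use the served model's actual ID
    messages=[
        {"role": "system", "content": "You are a code review assistant."},
        {"role": "user", "content": "Review this function:\n\ndef add(a, b): return a - b"},
    ],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```

IDE plugins and agent frameworks that speak the OpenAI API can then point at the same `base_url`, so one server covers completion, generation, reviews, and agent experiments.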

submitted by /u/_camera_up