I have an RTX 5080 with 16 GB of VRAM and 64 GB of RAM. What's the best quantized model I can run locally on this setup for agentic programming?
Reddit r/LocalLLaMA / 5/3/2026
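A rough way to reason about the question is to estimate the weight footprint of a quantized model and compare it against the 16 GB of VRAM. The sketch below is a back-of-envelope calculation, not from the original post; the overhead fraction and bits-per-weight figures (e.g. ~4.5 effective bits for a typical 4-bit K-quant) are assumptions.

```python
def quantized_model_vram_gb(params_billion: float, bits_per_weight: float,
                            overhead_frac: float = 0.10) -> float:
    """Rough VRAM estimate for a quantized model's weights.

    params_billion  -- model size in billions of parameters (e.g. 14 for a 14B model)
    bits_per_weight -- effective bits per weight of the quant (assumed ~4.5 for 4-bit)
    overhead_frac   -- assumed extra fraction for runtime buffers and dequant scratch
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead_frac) / 1e9

# Example: a 14B model at ~4.5 bits/weight needs roughly 8.7 GB for weights,
# leaving room on a 16 GB card for KV cache and a usable context window.
print(round(quantized_model_vram_gb(14, 4.5), 2))
```

Note that this counts weights only; KV cache grows with context length and batch size, so long agentic sessions need meaningful headroom beyond the weight estimate.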