Docker vllm config for Qwen3-5-122B-A10B-NVFP4

Reddit r/LocalLLaMA / 3/22/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • It shares the Docker/vLLM configuration the author uses to deploy Qwen3-5-122B-A10B-NVFP4 on a single 6000 Pro GPU, providing a practical setup reference.
  • The post links to a GitHub repository with the exact config needed for this deployment.
  • It serves as a hands-on guide for practitioners aiming to run Qwen-based LLMs locally using vLLM in Docker.
  • The submission originates from a Reddit post by user /u/1-a-n in the r/LocalLLaMA community.

In case it helps anyone, I'm sharing the config I'm using for Qwen3-5-122B-A10B-NVFP4 deployed on a single 6000 Pro.

https://github.com/ian-hailey/vllm-docker-Qwen3-5-122B-A10B-NVFP4
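The repo above has the exact files. For readers who just want the general shape of such a setup, here is a minimal sketch of serving a model with the official `vllm/vllm-openai` Docker image. The Hugging Face model ID, context length, and memory utilization below are illustrative assumptions, not values taken from the linked repo; vLLM detects NVFP4 quantization from the checkpoint's config, so no explicit quantization flag is shown.

```shell
# Minimal sketch: serve an NVFP4-quantized model with vLLM in Docker.
# The model ID and tuning flags are placeholders -- see the linked repo
# for the author's exact configuration.
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model <org>/Qwen3-5-122B-A10B-NVFP4 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.90 \
  --tensor-parallel-size 1
```

Once up, the server exposes an OpenAI-compatible API, so a quick smoke test is `curl http://localhost:8000/v1/models` or a standard `POST /v1/chat/completions` request.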

submitted by /u/1-a-n