NVIDIA launched NemoClaw at GTC yesterday — an enterprise sandbox for AI agents built on OpenShell (k3s + Landlock + seccomp). By default it expects cloud API connections and heavily restricts local networking.
I wanted 100% local inference on WSL2 + RTX 5090, so I punched through the sandbox to reach my vLLM instance. The networking side took three pieces (rough sketches of each follow the list):
- Host iptables: allowed traffic from the Docker bridge to vLLM (port 8000)
- Pod TCP Relay: a custom Python relay in the Pod's main network namespace, bridging the sandbox veth → the Docker bridge
- Sandbox iptables injection: used nsenter to inject an ACCEPT rule into the sandbox's OUTPUT chain, bypassing the default REJECT
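On the host, nothing exotic. A minimal sketch of the first step, assuming the bridge is docker0 and vLLM answers on port 8000; the interface names and chains are from my setup, so treat them as illustrative:

```python
import subprocess

# Host-side rules (illustrative): let traffic from the Docker bridge
# (assumed docker0) reach vLLM on port 8000, and let replies flow back.
rules = [
    ["iptables", "-I", "FORWARD", "-i", "docker0", "-p", "tcp",
     "--dport", "8000", "-j", "ACCEPT"],
    ["iptables", "-I", "FORWARD", "-o", "docker0", "-m", "state",
     "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
]
for rule in rules:
    subprocess.run(rule, check=True)  # raises if iptables refuses the rule
```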
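The relay itself is a dumb byte pump. A minimal asyncio version of the second step; the listen and target addresses are assumptions standing in for the Pod-side veth address and the Docker bridge gateway:

```python
import asyncio

LISTEN_HOST = "10.42.0.1"   # assumed: Pod-side address on the sandbox veth
LISTEN_PORT = 8000
TARGET_HOST = "172.17.0.1"  # assumed: Docker bridge gateway, where vLLM answers
TARGET_PORT = 8000

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    # Copy bytes one way until EOF, then close the write side.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    # One upstream connection per sandbox connection; shuttle both directions.
    upstream_reader, upstream_writer = await asyncio.open_connection(
        TARGET_HOST, TARGET_PORT)
    await asyncio.gather(
        pipe(client_reader, upstream_writer),
        pipe(upstream_reader, client_writer),
    )

async def main():
    server = await asyncio.start_server(handle, LISTEN_HOST, LISTEN_PORT)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```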
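And the third step, the sandbox-side injection. This sketch assumes you can grab the PID of any process living inside the sandbox's network namespace; the PID and relay address are placeholders:

```python
import subprocess

SANDBOX_PID = "12345"     # placeholder: any PID inside the sandbox netns
RELAY_ADDR = "10.42.0.1"  # assumed: where the Pod-side relay listens

# Enter the sandbox's network namespace and insert an ACCEPT rule at
# position 1 of OUTPUT, so it matches before the default REJECT.
subprocess.run(
    ["nsenter", "-t", SANDBOX_PID, "-n",
     "iptables", "-I", "OUTPUT", "1",
     "-d", RELAY_ADDR, "-p", "tcp", "--dport", "8000",
     "-j", "ACCEPT"],
    check=True,
)
```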
Tool Call Translation: Nemotron 9B emits tool calls as <TOOLCALL>[...]</TOOLCALL> text. I built a custom Gateway that intercepts the streaming SSE response from vLLM, buffers it, parses the tags, and rewrites them into OpenAI-compatible tool_calls in real time. That's what lets opencode inside the sandbox use Nemotron as a fully autonomous agent.
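The interesting part of the Gateway is the rewrite itself. Here's a sketch of just that core, assuming the buffered <TOOLCALL> payload is a JSON list of {"name", "arguments"} objects (my reading of the format; adjust the regex and parsing to whatever your Nemotron build actually emits):

```python
import json
import re
import uuid

TOOLCALL_RE = re.compile(r"<TOOLCALL>(\[.*?\])</TOOLCALL>", re.DOTALL)

def rewrite_toolcalls(buffered_text: str) -> dict:
    """Turn Nemotron's <TOOLCALL>[...]</TOOLCALL> text into an OpenAI-style
    assistant message. The payload format is an assumption: a JSON list of
    {"name": ..., "arguments": {...}} objects."""
    match = TOOLCALL_RE.search(buffered_text)
    if not match:
        # No tool call: pass the content through untouched.
        return {"role": "assistant", "content": buffered_text}

    calls = json.loads(match.group(1))
    tool_calls = [
        {
            "id": f"call_{uuid.uuid4().hex[:12]}",
            "type": "function",
            "function": {
                "name": call["name"],
                # OpenAI clients expect arguments as a JSON *string*.
                "arguments": json.dumps(call.get("arguments", {})),
            },
        }
        for call in calls
    ]
    # Strip the tags from the visible content; None if nothing remains.
    content = TOOLCALL_RE.sub("", buffered_text).strip() or None
    return {"role": "assistant", "content": content, "tool_calls": tool_calls}
```

The streaming side is the fiddly part: the Gateway buffers the deltas until the closing tag (or end of stream), runs this over the accumulated text, and re-emits the result as OpenAI-style chunks.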
Everything runs locally; no data leaves the machine. The setup is volatile (a WSL2 reboot wipes the iptables hacks), but seeing a 9B model execute terminal commands inside a locked-down enterprise container is satisfying.
GitHub repo coming once I clean it up. Anyone else tried running NemoClaw locally?