Here's how to run the new Qwen3.6-35B-A3B:
> Full context on a 4090 - IQ4_XS gguf with llama.cpp
> Full context on a Spark - FP8 with a tweaked vLLM
Here's the docker compose for llama.cpp:
services:
  llamacpp:
    container_name: llamacpp-qwen3-6-35b-a3b-iq4xs
    image: ghcr.io/ggml-org/llama.cpp:server-cuda
    restart: unless-stopped
    gpus: all
    shm_size: "8gb"
    ipc: host
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility
    command:
      - -m
      - /models/Qwen3.6-35B-A3B-UD-IQ4_XS.gguf/Qwen3.6-35B-A3B-UD-IQ4_XS.gguf
      - --host
      - 0.0.0.0
      - --port
      - "8000"
      - --alias
      - qwen3.6-35b-a3b-iq4xs
      - --ctx-size
      - "262144"
      - --n-gpu-layers
      - "999"
      - --parallel
      - "1"
      - --threads
      - "8"
      - --flash-attn
      - on
      - --batch-size
      - "256"
      - --ubatch-size
      - "256"
      - --cache-type-k
      - f16
      - --cache-type-v
      - f16
      - --temp
      - "0.6"
      - --top-p
      - "0.95"
      - --top-k
      - "20"
      - --min-p
      - "0.0"
      - --presence-penalty
      - "0.0"
      - --repeat-penalty
      - "1.0"
    volumes:
      - /root/tank/models:/models:ro
    ports:
      - 9998:8000
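Once the container is up, the server exposes llama.cpp's OpenAI-compatible API on the host side of the 9998:8000 mapping. A minimal sanity check, assuming it's running on the same machine (the model alias and sampling values just mirror the config above):

```python
# Quick smoke test against the llama.cpp server started by the compose file above.
# Assumes it is reachable on localhost:9998 (the host side of the "9998:8000" mapping).
import requests

resp = requests.post(
    "http://localhost:9998/v1/chat/completions",
    json={
        "model": "qwen3.6-35b-a3b-iq4xs",  # matches the --alias in the compose command
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "temperature": 0.6,   # same sampling settings the server already defaults to above
        "top_p": 0.95,
        "max_tokens": 128,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```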
Here's the docker compose for vLLM.
For some reason it needs a Dockerfile that patches vllm/vllm-openai:cu130-nightly with pandas installed.
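The post doesn't include the Dockerfile or the actual patch, so this is only a minimal sketch of what the build: section in the compose file below expects, assuming the only extra step is installing pandas on top of the nightly image:

```dockerfile
# Minimal sketch of the Dockerfile referenced by the compose file below.
# Assumption: the only change needed on top of the nightly image is installing pandas;
# whatever additional patch the post alludes to would also go here.
FROM vllm/vllm-openai:cu130-nightly
RUN pip install --no-cache-dir pandas
```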
services:
  vllm:
    build:
      context: .
      dockerfile: Dockerfile
    image: vllm-qwen3.6-35b-a3b-fp8:local
    container_name: vllm-qwen3.6-35b-a3b-fp8
    runtime: nvidia
    ports:
      - "8000:8000"
    volumes:
      - /home/etoprak/Documents/models/Qwen-Qwen3.6-35B-A3B-FP8:/models/Qwen3.6-35B-A3B-FP8:ro
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - VLLM_LOGGING_LEVEL=INFO
    ipc: host
    command:
      - --model
      - /models/Qwen3.6-35B-A3B-FP8
      - --served-model-name
      - Qwen3.6-35B-A3B-FP8
      - --gpu-memory-utilization
      - "0.70"
      - --reasoning-parser
      - qwen3
      - --enable-auto-tool-choice
      - --tool-call-parser
      - hermes
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped
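Since the server is launched with --enable-auto-tool-choice and the hermes tool-call parser, a quick way to exercise it is a standard OpenAI-style tool call. A minimal sketch, assuming the container is reachable on localhost:8000 and the openai Python package is installed (the get_weather tool is just a made-up placeholder):

```python
# Tool-calling smoke test against the vLLM server from the compose file above.
# Assumes it is reachable on localhost:8000; the model name matches --served-model-name.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, only here to exercise the hermes parser
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="Qwen3.6-35B-A3B-FP8",
    messages=[{"role": "user", "content": "What's the weather in Berlin right now?"}],
    tools=tools,
)

msg = resp.choices[0].message
# With --enable-auto-tool-choice the model decides whether to call the tool;
# if it does, the parsed call appears in tool_calls instead of plain content.
print(msg.tool_calls or msg.content)
```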