Luce DFlash: Qwen3.6-27B at up to 2x throughput on a single RTX 3090

Reddit r/LocalLLaMA / 4/28/2026


Key Points

  • Luce DFlash is a GGUF-based, standalone C++/CUDA speculative decoding implementation that runs Qwen3.6-27B on a single 24GB RTX 3090 and claims up to about 2× throughput.
  • The project reportedly achieves a ~1.98× mean speedup without retraining on Qwen3.6 benchmarks (HumanEval, GSM8K, Math500), using a matched Qwen3.6-DFlash draft model.
  • It optimizes inference by compressing the KV cache to TQ3_0, enabling contexts up to 256K within 24 GB, raising the prefill ubatch from 16 to 192 for long prompts, and using sliding-window flash attention to maintain decode speed at large context lengths.
  • Users can build and run it via provided CMake + Hugging Face download commands, and it can be served through an OpenAI-compatible HTTP endpoint or a local chat REPL.

Hey fellow Llamas, your time is precious, so I'll keep it short.

We built a GGUF port of DFlash speculative decoding. Standalone C++/CUDA stack on top of ggml, runs on a single 24 GB RTX 3090, hosts the new Qwen3.6-27B.

We call it Luce DFlash (https://github.com/Luce-Org/lucebox-hub; MIT)

~1.98x mean speedup over autoregressive decoding on Qwen3.6 across HumanEval / GSM8K / Math500, with zero retraining (z-lab published a matched Qwen3.6-DFlash draft on 2026-04-26; it's still under training, so AL (average acceptance length) should keep climbing).

If you have CUDA 12+ and an NVIDIA GPU (RTX 3090 / 4090 / 5090, DGX Spark, other Blackwell, or Jetson AGX Thor with CUDA 13+), all you need is

# After cloning the repo (link in the first comment):

cd lucebox-hub/dflash

cmake -B build -S . -DCMAKE_BUILD_TYPE=Release

cmake --build build --target test_dflash -j

# Fetch target (~16 GB)

huggingface-cli download unsloth/Qwen3.6-27B-GGUF Qwen3.6-27B-Q4_K_M.gguf --local-dir models/

# Matched 3.6 draft is gated: accept terms + set HF_TOKEN first

huggingface-cli download z-lab/Qwen3.6-27B-DFlash --local-dir models/draft/

# Run

DFLASH_TARGET=models/Qwen3.6-27B-Q4_K_M.gguf python3 scripts/run.py --prompt "def fibonacci(n):"

That's it. No Python runtime in the engine, no llama.cpp install, no vLLM, no SGLang. The binary links libggml*.a and never libllama.

Luce DFlash will

  • Load Qwen3.6-27B Q4_K_M target weights (~16 GB) plus the matched DFlash bf16 draft (~3.46 GB) and run DDTree tree-verify speculative decoding (block size 16, default budget 22, greedy verify); a minimal sketch of the verify idea follows this list.
  • Compress the KV cache to TQ3_0 (3.5 bpv, ~9.7x vs F16) and roll a 4096-slot target_feat ring so 256K context fits in 24 GB. Q4_0 is the legacy path and tops out near 128K.
  • Auto-bump the prefill ubatch from 16 to 192 for prompts past 2048 tokens (~913 tok/s prefill on 13K prompts).
  • Apply sliding-window flash attention at decode (default 2048-token window, 100% speculative acceptance retained) so 60K context still decodes at 89.7 tok/s instead of 25.8 tok/s; a short window sketch also follows this list.
  • Serve over an OpenAI-compatible HTTP endpoint or a local chat REPL.
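
To make the DDTree bullet concrete, here's the core idea of greedy block verification as a minimal Python sketch. It's illustrative only: the real implementation verifies a whole tree of drafted candidates on the GPU, while this shows the simpler linear-block case (one target pass scores the drafted block, the longest agreeing prefix is kept, and the first disagreement is replaced by the target's own token).

def greedy_block_verify(target_argmax, prompt, draft_block):
    # target_argmax(tokens) -> the target's greedy next-token prediction at
    # every position, from one batched forward pass (hypothetical helper).
    preds = target_argmax(prompt + draft_block)
    accepted = []
    for i, drafted in enumerate(draft_block):
        # preds[len(prompt) - 1 + i] is the target's choice for the i-th drafted token
        target_choice = preds[len(prompt) - 1 + i]
        if drafted == target_choice:
            accepted.append(drafted)        # agreement: token verified for free
        else:
            accepted.append(target_choice)  # first mismatch: take the target's token, stop
            return accepted
    # Whole block accepted; the prediction after the block comes along as a bonus token.
    accepted.append(preds[len(prompt) + len(draft_block) - 1])
    return accepted

The sliding-window bullet just means that each new token attends only to the most recent N cached positions at decode time, so per-token cost stops growing with the context. A tiny sketch of the idea (not the repo's CUDA flash-attention kernel), assuming the default 2048-token window:

WINDOW = 2048  # default decode window from the bullet above

def visible_kv_slots(context_len, window=WINDOW):
    # KV-cache positions the newest token attends to under a sliding window
    start = max(0, context_len - window)
    return range(start, context_len)

# At a 60K context the decode step still only touches 2048 slots,
# which is why per-token decode speed stays near-flat as the context grows.
print(len(visible_kv_slots(60_000)))  # -> 2048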

Running on RTX 3090, Qwen3.6-27B UD-Q4_K_XL (unsloth Dynamic 2.0) target, 10 prompts/dataset, n_gen=256:

Bench       AR tok/s   DFlash tok/s   AL     Speedup
HumanEval   34.90      78.16          5.94   2.24x
Math500     35.13      69.77          5.15   1.99x
GSM8K       34.89      59.65          4.43   1.71x
Mean        34.97      69.19          5.17   1.98x
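
As a sanity check, the Mean row follows straight from the per-benchmark numbers (a few lines of Python using only the figures in the table):

# (AR tok/s, DFlash tok/s, AL) per benchmark, copied from the table above
rows = {
    "HumanEval": (34.90, 78.16, 5.94),
    "Math500":   (35.13, 69.77, 5.15),
    "GSM8K":     (34.89, 59.65, 4.43),
}
n = len(rows)
ar = sum(r[0] for r in rows.values()) / n   # 34.97 tok/s
df = sum(r[1] for r in rows.values()) / n   # 69.19 tok/s
al = sum(r[2] for r in rows.values()) / n   # 5.17
print(f"mean speedup {df / ar:.2f}x, mean AL {al:.2f}")  # 1.98x, 5.17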

The speedup is real on consumer hardware, not a paper number. The target graph produces bit-identical output to autoregressive decoding in AR mode; the draft graph matches the z-lab PyTorch reference at a cosine similarity of 0.999812. Q4_0 KV costs ~3% AL at short context (8.56 → 8.33) and wins at long context, where F16 won't fit anyway.

Constraints: CUDA only, greedy verify only (temperature/top_p on the OpenAI server are accepted and ignored), no Metal / ROCm / multi-GPU. The repo started single-3090; recent community PRs added support for RTX 5090, DGX Spark / GB10, other Blackwell cards, and Jetson AGX Thor (sm_110 + CUDA 13).
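
If you serve it over the OpenAI-compatible endpoint, requests look like any other chat-completions call; per the constraints above, temperature/top_p are accepted but ignored. The host, port, and model name below are placeholders, not values documented in this post, so check the repo for the server's actual defaults:

import requests  # any OpenAI-compatible client works; raw HTTP shown for clarity

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",   # placeholder address
    json={
        "model": "qwen3.6-27b-dflash",             # placeholder model id
        "messages": [{"role": "user", "content": "def fibonacci(n):"}],
        "max_tokens": 256,
        # temperature / top_p would be accepted here but ignored (greedy verify only)
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])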

Feedback more than welcome!

submitted by /u/sandropuppo