
I wrote a PowerShell script to sweep llama.cpp MoE nCpuMoe vs batch settings

Reddit r/LocalLLaMA / 3/22/2026


Key Points

  • A Reddit post describes a PowerShell script that sweeps llama.cpp MoE nCpuMoe settings against batch size to find a speed sweet spot under VRAM constraints.
  • It performs a binary-search-style sweep across MoE settings and batch sizes, benchmarking each run and tracking the best results for a chosen metric (e.g., time to finish, output quality, prompt processing).
  • The workflow uses llama-bench under the hood and outputs a final top-5 table of runs, highlighting non-linear relationships between batch size and MoE performance.
  • The project is available on GitHub at DenysAshikhin/llama_moe_optimiser and the author asks for feedback if such tools already exist.

Hi all,

I have been playing around with Qwen 3.5 MoE models and found that the sweet-spot tradeoff between nCpuMoe and batch size for speed isn't linear.

I also kept rerunning the same tests across different quants, which got tedious.

If there is already a tool/script that does this and I missed it, let me know (I didn't find any).

How it works:

  1. Start at your chosen lowest nCpuMoe and batch size
  2. Benchmark that as the baseline
  3. Increase the batch size (using binary search) and run benchmarks
  4. Keep track of the best run (based on your selected metric, i.e. time to finish, output, prompt processing)
  5. Run through all MoE settings from min to max
  6. Show a final table of the top 5 runs based on your selected metric
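The steps above can be sketched roughly like this. The actual tool is a PowerShell script; this is just an illustrative Python sketch of the search shape, where `benchmark()` is a hypothetical stand-in for a real llama-bench run (returning a higher-is-better score, or `None` when the run would blow the VRAM budget):

```python
def sweep(benchmark, moe_min, moe_max, batch_min, batch_max, top_k=5):
    """Sweep nCpuMoe values; for each, binary-search the batch size.

    benchmark(n_cpu_moe, batch) -> score (higher is better),
    or None if the configuration exceeds the VRAM budget.
    Returns the top_k (score, n_cpu_moe, batch) runs.
    """
    results = []
    for n_cpu_moe in range(moe_min, moe_max + 1):
        lo, hi = batch_min, batch_max
        while lo <= hi:
            mid = (lo + hi) // 2
            score = benchmark(n_cpu_moe, mid)
            if score is None:
                hi = mid - 1          # over the VRAM budget: shrink the batch
            else:
                results.append((score, n_cpu_moe, mid))
                lo = mid + 1          # still fits: probe a larger batch
    results.sort(reverse=True)        # best metric first
    return results[:top_k]
```

The binary search here finds the largest batch that still fits per MoE setting while recording every successful run's metric, matching the "binary sweep while respecting the VRAM constraint" idea.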

The whole thing uses llama-bench under the hood, but does a binary sweep while respecting the VRAM constraint.
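For context, each trial in such a sweep boils down to assembling one llama-bench command line per (nCpuMoe, batch) pair. A minimal sketch, assuming a llama.cpp build recent enough to expose `--n-cpu-moe` in llama-bench (the exact flags here are assumptions; check your build's `llama-bench --help`):

```python
import shlex

def bench_cmd(model_path, n_cpu_moe, batch, n_gen=128):
    # Hypothetical command assembly for one sweep trial.
    # --n-cpu-moe keeps the expert weights of the first N layers on the CPU;
    # -b sets the batch size; -o json asks for machine-readable output.
    args = [
        "llama-bench",
        "-m", model_path,
        "--n-cpu-moe", str(n_cpu_moe),
        "-b", str(batch),
        "-n", str(n_gen),
        "-o", "json",
    ]
    return shlex.join(args)
```

A sweep driver would run this command per trial and parse the JSON output to get the metric it is optimising.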

https://preview.redd.it/s0rfxr4eegqg1.png?width=1208&format=png&auto=webp&s=3d288046376ab462147c82b036b72f6f3d4e51c6

If interested you can find it here: https://github.com/DenysAshikhin/llama_moe_optimiser

submitted by /u/TheLastSpark