Intel B70: Llama.cpp SYCL vs Llama.cpp OpenVINO vs LLM-Scaler

Reddit r/LocalLLaMA / 4/27/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage

Key Points

  • The post reports a benchmark comparison on Intel GPUs between llama.cpp’s new OpenVINO backend, the existing SYCL backend, and LLM-Scaler (Intel’s vLLM fork).
  • The OpenVINO backend appears to outperform the prior SYCL best-case in the author’s initial tests, but LLM-Scaler still shows higher performance, likely due to hardware-specific optimizations for GPTQ/Int4.
  • Although tg512 was fastest under SYCL, the author notes that real-world performance on that card is mainly constrained by prompt processing latency rather than peak tokens-per-second.
  • The author also criticizes Intel’s model compatibility/selection experience: it took time to find a model in the validated OpenVINO list that would both run correctly and have a sufficiently “close” counterpart for LLM-Scaler comparison.
  • The article is framed as an informal, user-run bench write-up rather than a formal release or official announcement.

In case anyone is interested, I decided to test out llama.cpp's new OpenVINO backend to see how it compares on Intel GPUs. At first glance, it stomps all over the previous best case, SYCL, but lags behind LLM-Scaler (Intel's vLLM fork), likely just due to the hardware optimizations for GPTQ/Int4. Interestingly, tg512 was fastest on SYCL, but in the real world, prompt processing always seems to be the indicator on this card.
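To make the prompt-processing point concrete, here's a back-of-the-envelope sketch in plain Python. The rates are the pp2048 numbers from the tables below; the idea is that time-to-first-token on a long prompt is roughly prompt tokens divided by the prompt-processing rate, no matter how good tg512 looks.

```python
# Back-of-the-envelope: time-to-first-token on a long prompt is roughly
# prompt_tokens / prompt_processing_rate, independent of generation t/s.
# Rates below are the pp2048 results from the tables in this post.
PROMPT_TOKENS = 2048

pp_rates = {  # backend -> prompt-processing tokens/s
    "OpenVINO":   3845.61,
    "SYCL":       844.64,
    "LLM-Scaler": 7875.52,
}

for name, rate in pp_rates.items():
    est_ttft_ms = PROMPT_TOKENS / rate * 1000.0
    print(f"{name:11s} est. TTFT ~ {est_ttft_ms:7.1f} ms")
```

This lands in the same ballpark as the measured est_ppt column (~489 ms OpenVINO, ~2179 ms SYCL, ~240 ms LLM-Scaler), which is why SYCL's tg512 win doesn't translate into a snappier real-world experience.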

As usual with Intel, model selection is... poor. It took a while to even find a model on the validated OpenVINO list that would not only run properly, but also have a counterpart that was "close enough" for LLM-Scaler.

Edit: Really Reddit? Can't edit a title? Haven't used this heap in so long, now I'm remembering why.

## Llama.cpp OpenVINO

`llama-benchy http://localhost:8000/v1 bartowski/DeepSeek-R1-Distill-Llama-8B-GGUF:Q4_K_M`

| model | test | t/s | peak t/s | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|:---|---:|---:|---:|---:|---:|---:|
| bartowski/DeepSeek-R1-Distill-Llama-8B-GGUF:Q4_K_M | pp2048 | 3845.61 ± 524.73 | | 659.99 ± 56.95 | 489.07 ± 56.95 | 739.42 ± 56.84 |
| bartowski/DeepSeek-R1-Distill-Llama-8B-GGUF:Q4_K_M | tg512 | 40.89 ± 0.55 | 44.33 ± 1.25 | | | |

## Llama.cpp SYCL

`llama-benchy http://localhost:8000/v1 bartowski/DeepSeek-R1-Distill-Llama-8B-GGUF:Q4_K_M`

| model | test | t/s | peak t/s | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|:---|---:|---:|---:|---:|---:|---:|
| bartowski/DeepSeek-R1-Distill-Llama-8B-GGUF:Q4_K_M | pp2048 | 844.64 ± 19.25 | | 2199.90 ± 23.63 | 2178.96 ± 23.63 | 2229.67 ± 24.84 |
| bartowski/DeepSeek-R1-Distill-Llama-8B-GGUF:Q4_K_M | tg512 | 73.87 ± 1.17 | 78.00 ± 2.16 | | | |

## LLM-Scaler

`llama-benchy http://localhost:8000/v1 jakiAJK/DeepSeek-R1-Distill-Llama-8B_GPTQ-int4`

| model | test | t/s | peak t/s | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|:---|---:|---:|---:|---:|---:|---:|
| jakiAJK/DeepSeek-R1-Distill-Llama-8B_GPTQ-int4 | pp2048 | 7875.52 ± 642.20 | | 268.09 ± 20.50 | 240.11 ± 20.50 | 268.34 ± 20.45 |
| jakiAJK/DeepSeek-R1-Distill-Llama-8B_GPTQ-int4 | tg512 | 52.75 ± 0.10 | 54.00 ± 0.00 | | | |
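If you want to sanity-check numbers like ttfr without llama-benchy, here's a minimal sketch that measures time-to-first-token against any OpenAI-compatible endpoint via streaming. The base URL and model name below mirror the setup above and are assumptions; swap in whatever your server actually exposes.

```python
import json
import time

import requests  # third-party: pip install requests

# Assumed to match the local setup benchmarked above; adjust as needed.
BASE_URL = "http://localhost:8000/v1"
MODEL = "bartowski/DeepSeek-R1-Distill-Llama-8B-GGUF:Q4_K_M"

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Summarize SYCL in one sentence."}],
    "max_tokens": 64,
    "stream": True,  # stream so the first token can be timestamped
}

start = time.perf_counter()
first_token_at = None

with requests.post(f"{BASE_URL}/chat/completions", json=payload,
                   stream=True, timeout=120) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        # OpenAI-compatible servers stream SSE lines of the form "data: {...}"
        if not line or not line.startswith(b"data: "):
            continue
        data = line[len(b"data: "):]
        if data == b"[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta and first_token_at is None:
            first_token_at = time.perf_counter()
            break  # only TTFT is needed here

if first_token_at is not None:
    print(f"time to first token: {(first_token_at - start) * 1000:.1f} ms")
```

Note this measures from the client side, so it is closer to the e2e_ttft column than to est_ppt and will include network and server queueing overhead.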
submitted by /u/Fmstrat