Decreased Intelligence Density in DeepSeek V4 Pro
Reddit r/LocalLLaMA / 4/25/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The discussion claims that DeepSeek V4 Pro uses more tokens than DeepSeek V3.2 even in non-thinking mode, indicating reduced “intelligence density.”
- It notes that V4 Pro (1.6T parameters) is much larger than V3.2 (0.67T), and that the increase in token usage suggests efficiency did not improve with scale.
- Compared with GPT-5.4 and GPT-5.5, the gap is reported to be larger, with DeepSeek allegedly needing around 10× more tokens for similar performance.
- Given similar generation speeds (tokens per second, TPS), the post infers that DeepSeek V4 Pro may take roughly 10× longer to complete the same tasks; see the sketch after this list.
- Overall, the excerpt challenges the expectation that scaling would optimize reasoning efficiency, arguing that compute/token efficiency has worsened in the newer model.
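As a rough illustration of that latency inference, here is a minimal back-of-envelope sketch. The TPS value and token counts are assumptions chosen for illustration, not figures from the post; only the ~10× token-usage ratio comes from the discussion.

```python
# Back-of-envelope latency comparison, assuming equal generation speed (TPS)
# for both models. All concrete numbers below are hypothetical.
TPS = 50.0                     # tokens per second, assumed equal for both models
gpt_tokens = 1_000             # hypothetical token count for GPT-5.x on a task
deepseek_tokens = 10_000       # ~10x more tokens, per the post's claim

gpt_time = gpt_tokens / TPS            # 20 s
deepseek_time = deepseek_tokens / TPS  # 200 s

print(f"GPT-5.x:     {gpt_time:.0f} s")
print(f"DeepSeek V4: {deepseek_time:.0f} s ({deepseek_time / gpt_time:.0f}x slower)")
```

At equal TPS, wall-clock time scales linearly with tokens emitted, so a 10× token count translates directly into a roughly 10× latency gap.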
Related Articles

Day 6: Why Real Health AI for India Needs 22 Languages, Not Just English
Dev.to

RPA Process Automation and Artificial Intelligence: A Simplified Guide to Transforming Your Business
Dev.to

Perplexity vs ChatGPT: Which AI Tool Wins in 2026?
Dev.to

I Fixed 5 Chained AI Bugs in My Sales Chatbot — Each Solution Revealed the Next Problem
Dev.to

DeepSeek V4 Pro Just Dropped — Here's What Changed for AI Agents
Dev.to