The Tool-Overuse Illusion: Why Does LLM Prefer External Tools over Internal Knowledge?
arXiv cs.AI / 4/23/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper identifies a widespread phenomenon where LLMs overuse external tools during reasoning, even when internal knowledge would suffice.
- It attributes part of the problem to a “knowledge epistemic illusion,” where models misjudge their actual internal knowledge boundaries and thus unnecessarily invoke tools.
- To address this, the authors propose a knowledge-aware epistemic boundary alignment strategy using direct preference optimization, reducing tool usage by 82.8% while improving accuracy.
- The study also shows that reward design matters: outcome-only rewards causally promote tool overuse because they pay for final correctness without penalizing unnecessary tool calls. Balanced reward signals reduce unnecessary tool calls by 66.7% (7B) and 60.7% (32B) without hurting accuracy.
- The authors provide a theoretical explanation combining both lenses—knowledge boundary perception and reward-structure effects—to better understand and mitigate tool overuse.
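The boundary-alignment strategy above is based on direct preference optimization (DPO). As a rough sketch (not the paper's exact recipe), each training example pairs a "chosen" response that answers directly when internal knowledge suffices against a "rejected" response that invokes a tool unnecessarily; the standard DPO loss then pushes the policy toward the chosen behavior relative to a frozen reference model. The log-probability values below are illustrative:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one preference pair.

    For epistemic boundary alignment (a sketch of the paper's idea),
    the chosen response answers from internal knowledge and the
    rejected one makes a gratuitous tool call -- or vice versa when
    the model genuinely lacks the knowledge.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # loss = -log(sigmoid(margin)); log1p(exp(-m)) is the stable form
    return math.log1p(math.exp(-margin))

# The policy already favors the chosen response more than the
# reference does, so the margin is positive and the loss is small.
print(round(dpo_loss(-1.0, -3.0, -2.0, -2.0, beta=1.0), 4))  # 0.1269
```

As the margin grows, the loss decays toward zero, so well-separated pairs contribute little gradient and training focuses on cases where the model still misjudges its knowledge boundary.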
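The reward-design finding can be made concrete with a toy reward function. This is a hedged illustration, not the paper's actual reward: `lam` is a hypothetical per-call penalty weight. With an outcome-only reward (`lam=0`), a correct answer pays the same regardless of how many tools were invoked, so extra calls are never discouraged; a small per-call penalty makes the efficient correct trajectory strictly preferred.

```python
def balanced_reward(correct, num_tool_calls, lam=0.1):
    """Outcome reward minus a per-tool-call penalty (lam is illustrative).

    lam=0 recovers the outcome-only reward that the paper identifies
    as causally promoting tool overuse.
    """
    return (1.0 if correct else 0.0) - lam * num_tool_calls

# Two correct trajectories: zero tool calls vs. three.
print(balanced_reward(True, 0))                 # 1.0
print(round(balanced_reward(True, 3), 2))       # 0.7
# Outcome-only reward cannot tell them apart:
print(balanced_reward(True, 3, lam=0.0))        # 1.0
```

Under the balanced reward the zero-call trajectory dominates, which matches the reported effect of reduced unnecessary tool calls without an accuracy cost.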