Breaking MCP with Function Hijacking Attacks: Novel Threats for Function Calling and Agentic Models
arXiv cs.CL / 4/24/2026
💬 OpinionSignals & Early TrendsIdeas & Deep AnalysisModels & Research
Key Points
- Agentic LLMs that use function calling extend their capabilities by invoking external tools, but this interface increases the attack surface beyond traditional prompt injection and jailbreaking.
- The paper proposes a new “function hijacking attack” (FHA) that manipulates an agent’s tool selection process to force invocation of an attacker-chosen function.
- Unlike earlier approaches that rely heavily on the model’s semantic preferences, FHA is largely context-agnostic and robust across different function sets and domains, making it broadly applicable.
- The authors show FHA can be trained to generate universal adversarial functions that hijack tool selection across many queries and payload configurations.
- Experiments on five models achieve 70%–100% attack success rate on the BFCL dataset, highlighting the urgent need for strong guardrails and security modules for agentic systems.
💡 Insights using this article
This article is featured in our daily AI news digest — key takeaways and action items at a glance.
Related Articles

The 67th Attempt: When Your "Knowledge Management" System Becomes a Self-Fulfilling Prophecy of Excellence
Dev.to

Context Engineering for Developers: A Practical Guide (2026)
Dev.to

GPT-5.5 is here. So is DeepSeek V4. And honestly, I am tired of version numbers.
Dev.to
AI Visibility Tracking Exploded in 2026: 6 Tools Every Brand Needs Now
Dev.to

I Built an AI Image Workflow with GPT Image 2.0 (+ Fixing Its Biggest Flaw)
Dev.to