Don't Make the LLM Read the Graph: Make the Graph Think
arXiv cs.AI / 4/28/2026
📰 News · Models & Research
Key Points
- The study finds that whether explicit belief graphs help LLMs in cooperative multi-agent reasoning depends strongly on the integration architecture and model strength.
- In Hanabi with controlled trials across four LLM families, belief graphs are mostly decorative for strong models when used as prompt context, but they become crucial when they gate action selection via ranked shortlists.
- The research identifies a failure mode called "Planner Defiance," in which some model families override correct planner recommendations at intermediate competence levels, with large differences between Gemini and Llama 70B.
- Full-game experiments show that inter-agent conventions combined with the right belief-graph components outperform single-agent interventions; preliminary scaling results suggest shallow graphs offer the best cost-benefit trade-off, while deeper theory-of-mind (ToM) graphs can degrade performance at larger player counts.
- Overall, the paper argues for shifting from “making the LLM read the graph” to “making the graph think,” by using graph structure to drive decision-making rather than just providing information.
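The gating idea in the key points above can be sketched in a few lines: instead of pasting the belief graph into the prompt, the graph ranks the legal actions and the LLM only chooses among the top-ranked shortlist. This is a minimal illustrative sketch, not the paper's implementation; every name here (`gated_action_selection`, `graph_score`, `llm_pick`, the toy Hanabi-style actions) is a hypothetical stand-in.

```python
# Hypothetical sketch of "graph-gated" action selection: a belief-graph-derived
# scorer ranks legal actions, and the LLM picks only from the top-k shortlist,
# so the graph constrains the decision rather than serving as prompt context.
from typing import Callable

def gated_action_selection(
    legal_actions: list[str],
    graph_score: Callable[[str], float],   # belief-graph-derived utility (assumed)
    llm_pick: Callable[[list[str]], str],  # LLM chooses from the shortlist (assumed)
    k: int = 3,
) -> str:
    """Rank actions with the graph, then let the LLM pick from the top-k."""
    shortlist = sorted(legal_actions, key=graph_score, reverse=True)[:k]
    choice = llm_pick(shortlist)
    # Guard against "Planner Defiance": if the model returns an action
    # outside the shortlist, fall back to the graph's top-ranked action.
    return choice if choice in shortlist else shortlist[0]

# Toy usage with a stubbed scorer and a stubbed LLM.
scores = {"play 1": 0.9, "hint red": 0.7, "discard 3": 0.2, "play 4": 0.1}
action = gated_action_selection(
    legal_actions=list(scores),
    graph_score=scores.__getitem__,
    llm_pick=lambda shortlist: shortlist[0],  # stub: always take the top pick
)
print(action)  # "play 1"
```

The fallback in the last line is one simple way to make the planner's recommendation binding; the paper's actual integration architecture may differ.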
Related Articles

A beginner's guide to the Gemini-2.5-Flash model by Google on Replicate
Dev.to

Qwen 3.6 27B vs Gemma 4 31B - making Packman game!
Reddit r/LocalLLaMA

Our evaluation of OpenAI's GPT-5.5 cyber capabilities
Simon Willison's Blog

Cuda + ROCm simultaneously with -DGGML_BACKEND_DL=ON !
Reddit r/LocalLLaMA

Final Monster: 32x AMD MI50 32GB at 9.7 t/s (TG) & 264 t/s (PP) with Kimi K2.6
Reddit r/LocalLLaMA