MirrorBench: Evaluating Self-centric Intelligence in MLLMs by Introducing a Mirror
arXiv cs.AI / 4/17/2026
Key Points
- The paper introduces MirrorBench, a simulation-based benchmark designed to evaluate “self-centric” intelligence in multimodal large language models (MLLMs), going beyond existing benchmarks focused on external-object understanding.
- MirrorBench is inspired by the Mirror Self-Recognition (MSR) test from psychology and uses a tiered set of tasks of increasing difficulty, ranging from basic visual perception up to higher-level self-representation (a toy evaluation sketch follows this list).
- Experiments on leading MLLMs show performance substantially below human level even at the lowest benchmark tier, indicating fundamental limits in self-referential understanding.
- The authors propose a framework connecting psychological self-recognition paradigms with embodied-intelligence evaluation, aiming at principled measurement of emergent general intelligence in large models.
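
To make the tiered structure concrete, here is a minimal sketch of what an evaluation harness over such a benchmark could look like. This is not the paper's code: the `Task` fields, the tier names, the `run_tiered_benchmark` helper, and the `model` callable signature are all assumptions introduced for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Task:
    prompt: str       # question posed to the MLLM about the mirrored scene
    image_path: str   # path to the rendered simulation frame
    answer: str       # expected ground-truth response


def run_tiered_benchmark(
    model: Callable[[str, str], str],   # (prompt, image_path) -> model response
    tiers: Dict[str, List[Task]],       # tier name -> tasks, ordered easy to hard
) -> Dict[str, float]:
    """Score a model tier by tier, mirroring the benchmark's easy-to-hard layout."""
    scores: Dict[str, float] = {}
    for tier_name, tasks in tiers.items():
        correct = sum(
            model(t.prompt, t.image_path).strip().lower() == t.answer.lower()
            for t in tasks
        )
        scores[tier_name] = correct / len(tasks)
    return scores


# Illustrative tiers reflecting the perception-to-self-representation progression
# described above; the task content here is invented for the example.
example_tiers = {
    "tier1_visual_perception": [
        Task("What object appears in the mirror?", "frame_001.png", "a robotic arm"),
    ],
    "tier2_self_recognition": [
        Task("Is the agent shown in the mirror you?", "frame_002.png", "yes"),
    ],
}
```

Exact-match scoring is a stand-in here; the paper's actual grading scheme for free-form MLLM responses may well differ.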