Children's Intelligence Tests Pose Challenges for MLLMs? KidGym: A 2D Grid-Based Reasoning Benchmark for MLLMs

arXiv cs.AI / 2026-03-24


Key Points

  • The paper proposes KidGym, a new 2D grid-based benchmark designed to evaluate multimodal large language models (MLLMs) using a child-intelligence-test inspired framework.
  • KidGym targets five interpretable capabilities—Execution, Perception Reasoning, Learning, Memory, and Planning—across 12 distinct tasks.
  • The benchmark uses randomly generated layouts and varied scenarios/objects to provide more robust and generalizable evaluation of MLLM abilities.
  • It is built to be user-customizable and extensible, enabling researchers to add scenarios and tune difficulty to fit different research needs (see the sketch after this list).
  • Experiments with state-of-the-art MLLMs reveal both strengths and notable limitations, and the authors release the benchmark publicly via their website.
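
To make the random-layout and customization points concrete, here is a minimal sketch of what a difficulty-tunable grid task could look like. Everything in it (`GridTask`, `generate_layout`, the object names, the capability labels) is a hypothetical illustration, not KidGym's actual API:

```python
import random
from dataclasses import dataclass, field

# Hypothetical sketch of a 2D grid task with randomly generated layouts
# and a tunable difficulty knob; not KidGym's real implementation.

@dataclass
class GridTask:
    name: str
    capabilities: list[str]          # e.g. ["Perception Reasoning", "Memory"]
    grid_size: int = 5               # larger grids -> harder episodes
    objects: list[str] = field(default_factory=lambda: ["key", "door", "star"])

    def generate_layout(self, seed: int | None = None) -> dict[tuple[int, int], str]:
        """Place each object at a distinct random cell, so every episode
        differs and a model cannot memorize fixed layouts."""
        rng = random.Random(seed)
        cells = [(r, c) for r in range(self.grid_size) for c in range(self.grid_size)]
        positions = rng.sample(cells, k=len(self.objects))
        return dict(zip(positions, self.objects))

# Extending the suite would amount to registering new tasks and adjusting
# grid_size or the object set to tune difficulty.
task = GridTask(name="find-the-key", capabilities=["Perception Reasoning", "Planning"])
print(task.generate_layout(seed=0))
```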

Abstract

Multimodal Large Language Models (MLLMs) combine the linguistic strengths of LLMs with the ability to process multimodal data, enabling them to address a broader range of visual tasks. Because MLLMs aim at more general, human-like competence than language-only models, we take inspiration from the Wechsler Intelligence Scales, an established battery for evaluating children that decomposes intelligence into interpretable, testable abilities. We introduce KidGym, a comprehensive 2D grid-based benchmark for assessing five essential capabilities of MLLMs: Execution, Perception Reasoning, Learning, Memory, and Planning. The benchmark comprises 12 unique tasks, each targeting at least one core capability and specifically designed to gauge MLLMs' adaptability and developmental potential, mirroring the stages of children's cognitive growth. Additionally, our tasks encompass diverse scenarios and objects with randomly generated layouts, ensuring a more accurate and robust evaluation of MLLM capabilities. KidGym is designed to be fully user-customizable and extensible, allowing researchers to create new evaluation scenarios and adjust difficulty levels to accommodate the rapidly growing MLLM community. Through evaluating state-of-the-art MLLMs with KidGym, we gained significant insights into model capabilities and revealed several limitations of current models. We release our benchmark at: https://kidgym.github.io/KidGym-Website/.
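
Since each of the 12 tasks targets at least one of the five capabilities, per-capability scores presumably aggregate results over the tasks that exercise them. The sketch below shows one plausible aggregation; the task names, capability mapping, and accuracy numbers are made up for illustration and are not the paper's actual scoring protocol:

```python
from collections import defaultdict

# Hypothetical scoring sketch: fold per-task accuracies into the five
# capability scores, since a task may exercise more than one capability.

TASK_CAPABILITIES = {
    "find-the-key": ["Perception Reasoning", "Planning"],
    "copy-the-pattern": ["Execution", "Memory"],
    # ... KidGym has 12 tasks in total; only two are illustrated here
}

def capability_scores(task_accuracy: dict[str, float]) -> dict[str, float]:
    """Average each task's accuracy into every capability it targets."""
    totals, counts = defaultdict(float), defaultdict(int)
    for task, acc in task_accuracy.items():
        for cap in TASK_CAPABILITIES[task]:
            totals[cap] += acc
            counts[cap] += 1
    return {cap: totals[cap] / counts[cap] for cap in totals}

print(capability_scores({"find-the-key": 0.6, "copy-the-pattern": 0.8}))
```

One design consequence of such a many-to-many mapping is interpretability: a low Planning score can be traced back to the specific planning-tagged tasks a model failed, which fits the paper's goal of decomposing intelligence into testable abilities.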