Grounding Before Generalizing: How AI Differs from Humans in Causal Transfer

arXiv cs.AI / 4/28/2026


Key Points

  • The paper investigates whether state-of-the-art LLMs and VLMs can learn abstract causal structures through sequential causal exploration and then transfer them to new contexts the way humans do.
  • Using the OpenLock paradigm (discovering Common Cause and Common Effect structures; see the sketch after this list), the authors find that models show delayed or even absent transfer across contexts compared with humans.
  • Models require “environmental grounding” (initial mapping to the specific environment) before they gain efficiency, while humans transfer using prior structural knowledge from the first attempt.
  • In text-only experiments, model discovery efficiency can match or exceed that of humans, but adding visual inputs generally degrades performance, indicating reliance on symbolic/text processing rather than integrated multimodal causal reasoning.
  • The models also display systematic CC/CE asymmetries not seen in humans, suggesting heuristic biases and undercutting the idea that large-scale statistical learning yields decontextualized causal schemas.
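
To make the two structures concrete, here is a minimal Python sketch, not the paper's OpenLock code: the lever names A, B, C and the simplified dynamics are assumptions. It contrasts a Common Cause lock, where one lever enables the other two, with a Common Effect lock, where two levers jointly enable the last one.

```python
# Toy illustration (assumed, simplified dynamics; not the OpenLock environment):
# Common Cause (CC): lever A enables both B and C.
# Common Effect (CE): levers B and C together enable A.

from itertools import permutations

def cc_unlocked(sequence):
    """CC lock: A must be pushed first; it enables B and C, which both
    must then be pushed (in either order) to open the lock."""
    pushed, enabled = set(), set()
    for lever in sequence:
        if lever == "A":
            pushed.add("A")
            enabled.update({"B", "C"})       # A is the common cause
        elif lever in enabled:
            pushed.add(lever)
    return {"A", "B", "C"} <= pushed

def ce_unlocked(sequence):
    """CE lock: B and C can be pushed at any time, but A only becomes
    pushable once both B and C have been pushed."""
    pushed = set()
    for lever in sequence:
        if lever in {"B", "C"}:
            pushed.add(lever)
        elif lever == "A" and {"B", "C"} <= pushed:   # A is the common effect
            pushed.add("A")
    return {"A", "B", "C"} <= pushed

if __name__ == "__main__":
    for seq in permutations("ABC"):
        print("".join(seq), "CC:", cc_unlocked(seq), "CE:", ce_unlocked(seq))
```

Running the sketch shows the directional distinction the paper probes: only sequences that push A first open the CC lock, while only sequences that push A last open the CE lock, which is the kind of direction-dependence behind the CC/CE asymmetries noted above.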

Abstract

Extracting abstract causal structures and applying them to novel situations is a hallmark of human intelligence. While Large Language Models (LLMs) and Vision Language Models (VLMs) have shown strong performance on a wide range of reasoning tasks, their capacity for interactive causal learning -- inducing latent structures through sequential exploration and transferring them across contexts -- remains uncharacterized. Human learners accomplish such transfer after minimal exposure, whereas classical Reinforcement Learning (RL) agents fail catastrophically. Whether state-of-the-art Artificial Intelligence (AI) models possess human-like mechanisms for abstract causal structure transfer is an open question. Using the OpenLock paradigm requiring sequential discovery of Common Cause (CC) and Common Effect (CE) structures, here we show that models exhibit fundamentally delayed or absent transfer: even successful models require initial environment-specific mapping -- what we term environmental grounding -- before efficiency gains emerge, whereas humans leverage prior structural knowledge from the very first solution attempt. In the text-only condition, models matched or exceeded human discovery efficiency. In contrast, visual information -- in both the image-only and text-and-image conditions -- overall degraded rather than enhanced performance, revealing a broad reliance on symbolic processing rather than integrated multimodal reasoning. Models further exhibited systematic CC/CE asymmetries absent in humans, suggesting heuristic biases rather than direction-neutral causal abstraction. These findings reveal that large-scale statistical learning does not produce the decontextualized causal schemas underpinning human analogical reasoning, establishing grounding-dependent transfer as a fundamental limitation of current LLMs and VLMs.