Grounding Before Generalizing: How AI Differs from Humans in Causal Transfer
arXiv cs.AI / 4/28/2026
Key Points
- The paper investigates whether state-of-the-art LLMs and VLMs can transfer abstract causal structures learned through sequential causal exploration, as humans do.
- Using the OpenLock paradigm (discovering Common Cause and Common Effect structures), the authors find that models show delayed, or even absent, transfer across contexts compared with humans.
- Models require "environmental grounding" (an initial mapping to the specific environment) before they gain efficiency, whereas humans transfer prior structural knowledge from the first attempt.
- In text-only experiments, the models' discovery efficiency can match or exceed that of humans, but adding visual inputs generally degrades performance, indicating reliance on symbolic/text processing rather than integrated multimodal causal reasoning.
- The models also display systematic CC/CE asymmetries not seen in humans, suggesting heuristic biases and challenging the claim that large-scale statistical learning yields decontextualized causal schemas.
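The CC/CE distinction the paper probes can be illustrated with a minimal boolean sketch. This is a hypothetical simplification for intuition only, not the OpenLock implementation; the lever names and unlock rules are assumptions.

```python
# Illustrative sketch only: a toy boolean model of the two causal
# structures tested in the OpenLock paradigm. Lever names and rules
# are hypothetical, not the paper's actual environment.

def common_cause_opens(pushed_a: bool, second_lever: str) -> bool:
    """Common Cause (CC): lever A activates both B and C;
    pushing either activated lever then opens the lock."""
    return pushed_a and second_lever in {"B", "C"}

def common_effect_opens(pushed_b: bool, pushed_c: bool) -> bool:
    """Common Effect (CE): B and C must both be pushed;
    only their joint effect opens the lock."""
    return pushed_b and pushed_c

# An agent that has abstracted the CC schema can solve a new CC room
# immediately; one that relies on environmental grounding must first
# rediscover which lever plays the "A" role.
print(common_cause_opens(True, "C"))     # True
print(common_effect_opens(True, False))  # False
```

In this toy framing, human-like transfer corresponds to reusing the schema (the function shape) in a new room, while the grounding the paper describes corresponds to relearning which concrete lever maps to each role.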