
OrigamiBench: An Interactive Environment to Synthesize Flat-Foldable Origamis

arXiv cs.LG / 3/17/2026


Key Points

  • OrigamiBench is introduced as an interactive benchmark that combines visual perception, geometric/physical reasoning, and sequential planning through origami folding tasks.
  • The benchmark lets models iteratively propose folds and receive feedback on physical validity and similarity to a target configuration (a toy sketch of this loop follows the list).
  • Experiments with modern vision-language models indicate that simply scaling model size does not yield reliable causal reasoning about physical transformations.
  • The work highlights that current visual and language representations are weakly integrated, suggesting the need for better multimodal grounding for planning in the physical world.
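
To make the interaction pattern concrete, below is a minimal Python sketch of a propose-fold / receive-feedback loop. All names (ToyOrigamiEnv, Feedback, run_episode) and the placeholder scoring logic are illustrative assumptions, not the actual OrigamiBench API or physics.

```python
# Hypothetical sketch only: none of these names or mechanics come from the
# paper; they illustrate the propose-fold / receive-feedback pattern of an
# interactive benchmark, not the real OrigamiBench interface.
from dataclasses import dataclass
import random

@dataclass
class Feedback:
    physically_valid: bool  # did the proposed fold respect the paper's constraints?
    similarity: float       # closeness of the current state to the target shape
    done: bool              # target configuration reached

class ToyOrigamiEnv:
    """Stand-in environment that tracks only a scalar similarity to the target."""
    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.similarity = 0.0

    def reset(self):
        self.similarity = 0.0
        return {"similarity": self.similarity}  # placeholder observation

    def step(self, fold):
        # Placeholder physics: a fold is accepted at random, and accepted folds
        # move the state toward the target. A real environment would instead run
        # a geometric/physical check of the proposed crease.
        valid = random.random() > 0.2
        if valid:
            self.similarity = min(1.0, self.similarity + random.uniform(0.0, 0.2))
        feedback = Feedback(valid, self.similarity, self.similarity >= self.threshold)
        return {"similarity": self.similarity}, feedback

def run_episode(policy, env, max_steps=20):
    """Roll out one episode: the policy proposes folds, the environment scores them."""
    obs = env.reset()
    feedback = Feedback(True, 0.0, False)
    for _ in range(max_steps):
        fold = policy(obs)            # e.g., a vision-language model's proposed fold
        obs, feedback = env.step(fold)
        if feedback.done:
            break
    return feedback.similarity

if __name__ == "__main__":
    print(run_episode(lambda obs: "mock fold", ToyOrigamiEnv()))
```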

Abstract

Building AI systems that can plan, act, and create in the physical world requires more than pattern recognition. Such systems must understand the causal mechanisms and constraints governing physical processes in order to guide sequential decisions. This capability relies on internal representations, analogous to an internal language model, that relate observations, actions, and resulting environmental changes. However, many existing benchmarks treat visual perception and programmatic reasoning as separate problems, focusing either on visual recognition or on symbolic tasks. The domain of origami provides a natural testbed that integrates these modalities. Constructing shapes through folding operations requires visual perception, reasoning about geometric and physical constraints, and sequential planning, while remaining sufficiently structured for systematic evaluation. We introduce OrigamiBench, an interactive benchmark in which models iteratively propose folds and receive feedback on physical validity and similarity to a target configuration. Experiments with modern vision-language models show that scaling model size alone does not reliably produce causal reasoning about physical transformations. Models fail to generate coherent multi-step folding strategies, suggesting that visual and language representations remain weakly integrated.
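
The abstract does not spell out which validity checks the environment performs, but flat-foldability has classical necessary (not sufficient) conditions that a checker of this kind could build on. The sketch below tests Kawasaki's and Maekawa's theorems at a single interior vertex of a crease pattern; it is illustrative only and not taken from the paper.

```python
# Illustrative only: Kawasaki's and Maekawa's theorems are classical necessary
# conditions for a single interior vertex of a crease pattern to fold flat;
# the summary does not say which checks OrigamiBench itself performs.

def kawasaki_ok(angles, tol=1e-6):
    """Kawasaki's theorem: the alternating sums of consecutive sector angles
    around the vertex are equal (each 180 degrees)."""
    if len(angles) % 2 != 0 or abs(sum(angles) - 360.0) > tol:
        return False
    return abs(sum(angles[0::2]) - sum(angles[1::2])) <= tol

def maekawa_ok(creases):
    """Maekawa's theorem: mountain (M) and valley (V) creases around the vertex
    differ in number by exactly two."""
    return abs(creases.count("M") - creases.count("V")) == 2

# Example vertex with four creases that satisfies both conditions.
angles = [90.0, 45.0, 90.0, 135.0]   # sector angles in degrees
creases = ["M", "M", "M", "V"]       # crease assignment around the vertex
print(kawasaki_ok(angles), maekawa_ok(creases))   # True True
```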