SceneTeract: Agentic Functional Affordances and VLM Grounding in 3D Scenes

arXiv cs.CV / 4/1/2026


Key Points

  • SceneTeract is a new framework for verifying whether 3D scenes support specific agent-driven activities by combining high-level semantic reasoning with low-level geometric feasibility checks.
  • The approach decomposes tasks into atomic action sequences and validates each step against physical accessibility constraints (reachability, clearance, and navigability) using explicit geometric and physical simulation.
  • Experiments show that many synthetic indoor environments exhibit frequent functional failures that block even basic interactions, highlighting a gap in how current scenes are assessed.
  • Evaluations of frontier vision-language models (VLMs) indicate systematic mismatches between semantic confidence and actual physical feasibility in 3D, even for the strongest models.
  • The authors use SceneTeract as a reward engine for VLM post-training to distill geometric constraints into reasoning models, and they release the verification suite and associated data.
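The decompose-and-verify idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the class names, the stand-position argument, and the two predicates (a reach-distance check and a clearance-radius check) are all hypothetical stand-ins for SceneTeract's agent-conditioned geometric checks.

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    reach: float        # max reach in metres (hypothetical agent parameter)
    body_radius: float  # clearance radius the agent's body needs

@dataclass
class AtomicAction:
    name: str
    target_pos: tuple   # (x, y) of the interaction target
    free_radius: float  # obstacle-free radius measured around the target

def check_step(agent: AgentProfile, stand_pos: tuple, action: AtomicAction) -> bool:
    """Verify one atomic action against agent-conditioned constraints:
    the target must be within reach, and the space around it must be clear."""
    dx = action.target_pos[0] - stand_pos[0]
    dy = action.target_pos[1] - stand_pos[1]
    reachable = (dx * dx + dy * dy) ** 0.5 <= agent.reach
    clear = action.free_radius >= agent.body_radius
    return reachable and clear

def verify_activity(agent: AgentProfile, stand_pos: tuple, actions: list) -> bool:
    """An activity is feasible only if every atomic step passes verification."""
    return all(check_step(agent, stand_pos, a) for a in actions)

agent = AgentProfile(reach=0.8, body_radius=0.3)
steps = [
    AtomicAction("open_fridge", target_pos=(0.5, 0.2), free_radius=0.6),
    AtomicAction("grasp_bottle", target_pos=(0.6, 0.3), free_radius=0.5),
]
print(verify_activity(agent, (0.0, 0.0), steps))  # True for this toy layout
```

Conditioning feasibility on an agent profile is the key point: the same scene can pass for one embodiment and fail for another (e.g., a larger `body_radius` fails the clearance check).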

Abstract

Embodied AI depends on interactive 3D environments that support meaningful activities for diverse users, yet assessing their functional affordances remains a core challenge. We introduce SceneTeract, a framework that verifies 3D scene functionality under agent-specific constraints. Our core contribution is a grounded verification engine that couples high-level semantic reasoning with low-level geometric checks. SceneTeract decomposes complex activities into sequences of atomic actions and validates each step against accessibility requirements (e.g., reachability, clearance, and navigability) conditioned on an embodied agent profile, using explicit physical and geometric simulations. We deploy SceneTeract to perform an in-depth evaluation of (i) synthetic indoor environments, uncovering frequent functional failures that prevent basic interactions, and (ii) the ability of frontier Vision-Language Models (VLMs) to reason about and predict functional affordances, revealing systematic mismatches between semantic confidence and physical feasibility even for the strongest current models. Finally, we leverage SceneTeract as a reward engine for VLM post-training, enabling scalable distillation of geometric constraints into reasoning models. We release the SceneTeract verification suite and data to bridge perception and physical reality in embodied 3D scene understanding.
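The reward-engine idea at the end of the abstract reduces to scoring a VLM's feasibility predictions against the verifier's ground truth. The sketch below assumes a simple binary match reward; the function name and the batch-scoring step are illustrative, not the paper's training setup.

```python
def affordance_reward(prediction: bool, verifier_result: bool) -> float:
    """1.0 when the model's feasibility claim matches the geometric
    verifier's outcome, 0.0 otherwise (hypothetical reward shape)."""
    return float(prediction == verifier_result)

# Hypothetical post-training step: score a batch of VLM feasibility claims
predictions = [True, False, True]    # model says "this activity is feasible"
ground_truth = [True, True, True]    # verifier's geometric ground truth
rewards = [affordance_reward(p, g) for p, g in zip(predictions, ground_truth)]
mean_reward = sum(rewards) / len(rewards)
```

Because the verifier is automatic, such a reward needs no human labels, which is what makes the distillation of geometric constraints into reasoning models scalable.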