PhysNote: Self-Knowledge Notes for Evolvable Physical Reasoning in Vision-Language Model

arXiv cs.AI / 4/28/2026


Key Points

  • The paper argues that vision-language models struggle with real-world physics tasks for two reasons: spatio-temporal identity drift, where objects lose their identity across frames, and the failure to consolidate correct inference-time insights for future reuse.
  • It introduces PhysNote, an agentic framework that lets VLMs externalize and iteratively refine physical understanding via self-generated “Knowledge Notes.”
  • PhysNote improves temporal perception using spatio-temporal canonicalization, stores insights in a hierarchical knowledge repository, and runs an iterative reasoning loop grounded in visual evidence before consolidation.
  • Experiments on PhysBench show PhysNote reaches 56.68% overall accuracy, 4.96 points above the best multi-agent baseline, with consistent gains across all four physics reasoning domains.
  • Overall, the work focuses on making VLM physical reasoning more temporally consistent and reusable rather than only correct within single, static evaluations.
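The notes-and-consolidation mechanism in the bullets above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the names `KnowledgeNote`, `NoteRepository`, `propose`, and `verify` are hypothetical stand-ins, and the real repository is hierarchical rather than a flat dict.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeNote:
    """A self-generated insight tied to a physics topic (hypothetical schema)."""
    topic: str        # e.g. "collision", "fluid"
    insight: str
    verified: bool = False

@dataclass
class NoteRepository:
    """Flat stand-in for the paper's hierarchical knowledge repository:
    topic -> list of consolidated insights."""
    notes: dict = field(default_factory=dict)

    def consolidate(self, note: KnowledgeNote) -> None:
        # Only insights that survived visual verification are kept.
        if note.verified:
            self.notes.setdefault(note.topic, []).append(note.insight)

    def retrieve(self, topic: str) -> list:
        return self.notes.get(topic, [])

def reasoning_loop(frames, topic, repo, propose, verify, max_iters=3):
    """Iteratively propose an answer grounded in prior notes, check the
    accompanying insight against visual evidence, and consolidate only
    what survives verification."""
    answer = None
    for _ in range(max_iters):
        context = repo.retrieve(topic)                  # reuse past notes
        answer, note = propose(frames, topic, context)  # a VLM call in practice
        note.verified = verify(frames, note)            # ground in visual evidence
        repo.consolidate(note)
        if note.verified:
            break
    return answer
```

The key design point the paper emphasizes is the last two lines of the loop: an insight only enters the repository after it is checked against the frames, which is what makes the knowledge reusable rather than volatile.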

Abstract

Vision-Language Models (VLMs) have demonstrated strong performance on textbook-style physics problems, yet they frequently fail when confronted with dynamic real-world scenarios that require temporal consistency and causal reasoning across frames. We identify two fundamental challenges underlying these failures: (1) spatio-temporal identity drift, where objects lose their physical identity across successive frames and break causal chains, and (2) volatility of inference-time insights, where a model may occasionally produce correct physical reasoning but never consolidates it for future reuse. To address these challenges, we propose PhysNote, an agentic framework that enables VLMs to externalize and refine physical knowledge through self-generated "Knowledge Notes." PhysNote stabilizes dynamic perception through spatio-temporal canonicalization, organizes self-generated insights into a hierarchical knowledge repository, and drives an iterative reasoning loop that grounds hypotheses in visual evidence before consolidating verified knowledge. Experiments on PhysBench demonstrate that PhysNote achieves 56.68% overall accuracy, a 4.96% improvement over the best multi-agent baseline, with consistent gains across all four physical reasoning domains.
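To make the identity-drift problem concrete: canonicalization means giving each object one persistent ID for the whole clip so causal chains do not break. The abstract does not specify how PhysNote does this, so the following is only an illustrative stand-in using greedy IoU matching between consecutive frames, with detections given as `(x1, y1, x2, y2)` boxes.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def canonicalize(frames_detections, threshold=0.3):
    """Assign persistent IDs to per-frame detections by greedily matching
    each box to the closest track from the previous frame (illustrative
    stand-in for the paper's spatio-temporal canonicalization)."""
    tracks = {}    # persistent id -> last seen box
    next_id = 0
    out = []
    for dets in frames_detections:
        frame_ids = []
        unmatched = dict(tracks)
        for box in dets:
            best, best_iou = None, threshold
            for tid, prev in unmatched.items():
                score = iou(box, prev)
                if score > best_iou:
                    best, best_iou = tid, score
            if best is None:
                best = next_id          # no overlap: start a new identity
                next_id += 1
            else:
                del unmatched[best]     # each track matches at most once
            tracks[best] = box
            frame_ids.append(best)
        out.append(frame_ids)
    return out
```

With stable IDs in place, downstream reasoning can refer to "object 0 before and after the collision" instead of re-identifying objects frame by frame.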