How to Correctly Make Mistakes: A Framework for Constructing and Benchmarking Mistake-Aware Egocentric Procedural Videos

arXiv cs.CV / April 17, 2026


Key Points

  • The paper introduces PIE-V, a framework for creating and benchmarking egocentric procedural videos that include realistic human mistakes and subsequent recoveries.
  • PIE-V augments clean “keystep” procedures with controlled, human-plausible deviations using an error planner and a correction planner that models recovery behavior.
  • An LLM-based writer performs cascade-consistent rewrites, while an LLM judge checks and repairs procedural coherence to keep the resulting instructions and actions consistent.
  • For evaluation, the authors propose a unified mistake taxonomy and a human rubric with nine metrics covering step-level and procedure-level quality, plausibility, and alignment between text and video.
  • Applied to 17 tasks and 50 Ego-Exo4D scenarios, PIE-V injects 102 mistakes and produces 27 recovery corrections; under the same criteria, the authors also audit existing datasets and compare against a freeform LLM generation baseline.
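The injection pipeline described above (error planner, correction planner, cascade-consistent writer, and validating judge) can be sketched as a simple loop. This is a minimal illustration of the control flow only: the `Keystep` class, the planner/judge callables, and the repair budget are all hypothetical names, not the authors' actual interfaces.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Keystep:
    """One procedural step (illustrative structure, not from the paper)."""
    index: int
    text: str
    is_mistake: bool = False
    correction: Optional[str] = None

def inject_mistakes(steps, plan_error, plan_correction, rewrite, judge,
                    max_repairs=2):
    """Hedged sketch of a PIE-V-style loop: plan a deviation per step,
    optionally plan a recovery, rewrite the running procedure so later
    steps stay consistent, then let a judge validate or repair the result."""
    out = []
    for step in steps:
        deviation = plan_error(step)  # error planner: None means keep step clean
        if deviation is None:
            out.append(step)
            continue
        mistaken = Keystep(step.index, deviation, is_mistake=True)
        mistaken.correction = plan_correction(mistaken)  # recovery, may be None
        out.append(mistaken)
        out = rewrite(out)  # cascade-consistent rewrite of the procedure so far
    for _ in range(max_repairs):
        ok, repaired = judge(out)  # judge checks coherence, proposes a repair
        if ok:
            break
        out = repaired
    return out
```

In this sketch the planners are plain callables, so clean passes (error planner returning `None` for every step) fall through without any rewriting or repair.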

Abstract

Reliable procedural monitoring in video requires exposure to naturally occurring human errors and the recoveries that follow. In egocentric recordings, mistakes are often partially occluded by hands and revealed through subtle object state changes, while existing procedural datasets provide limited and inconsistent mistake and correction traces. We present PIE-V (Psychologically Inspired Error injection for Videos), a framework for constructing and benchmarking mistake-aware egocentric procedural videos by augmenting clean keystep procedures with controlled, human-plausible deviations. PIE-V combines a psychology-informed error planner conditioned on procedure phase and semantic step load, a correction planner that models recovery behavior, an LLM writer that performs cascade-consistent rewrites, and an LLM judge that validates procedural coherence and repairs failures. For video segment edits, PIE-V synthesizes replacement clips with text-guided video generation and stitches them into the episode to preserve visual plausibility. Applied to 17 tasks and 50 Ego-Exo4D scenarios, PIE-V injects 102 mistakes and generates 27 recovery corrections. For benchmarking, we introduce a unified taxonomy and a human rubric with nine metrics that cover step-level and procedure-level quality, including plausibility, procedure logic with annotator confidence, state change coherence, and grounding between text and video. Using this protocol, we audit several existing resources and compare PIE-V against a freeform LLM generation baseline under the same criteria. Together, the framework and rubric support post-completion verification for egocentric procedural mistake detection and correction.
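The rubric-based protocol above aggregates human ratings across several metrics. A minimal aggregation sketch follows; the metric names and the 1-5 scale are illustrative assumptions (the paper specifies nine metrics, which are not all enumerated here), not the authors' exact protocol.

```python
from statistics import mean

# Illustrative subset of metric names; the actual rubric has nine metrics
# covering step-level and procedure-level quality.
METRICS = [
    "step_plausibility",
    "procedure_logic",
    "state_change_coherence",
    "text_video_grounding",
]

def score_episode(ratings):
    """Aggregate per-annotator ratings (each a dict of metric -> 1..5)
    into a mean score per metric, skipping metrics a rater left blank."""
    agg = {}
    for m in METRICS:
        vals = [r[m] for r in ratings if m in r]
        if vals:
            agg[m] = mean(vals)
    return agg
```

Keeping per-metric means (rather than one collapsed score) preserves the distinction the rubric draws between, e.g., plausibility and text-video grounding when auditing a dataset.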