Mash, Spread, Slice! Learning to Manipulate Object States via Visual Spatial Progress

arXiv cs.RO · March 23, 2026


Key Points

  • SPARTA is a unified framework for object state change manipulation tasks, addressing progressive changes such as mashing, spreading, and slicing rather than just changing an object's position.
  • It introduces spatially-progressing, object-centric changes represented as regions transitioning from actionable to transformed states, enabling structured policy observations and dense rewards.
  • The framework offers two policy variants: reinforcement learning for fine-grained control without demonstrations or simulation, and greedy control for fast, lightweight deployment.
  • It is validated on a real robot across 10 diverse objects, achieving significant improvements in training time and accuracy over sparse rewards and visual goal-conditioned baselines.
  • The results suggest progress-aware visual representations as a versatile foundation for the broader family of object state manipulation tasks; a project website provides further details.

Abstract

Most robot manipulation focuses on changing the kinematic state of objects: picking, placing, opening, or rotating them. However, a wide range of real-world manipulation tasks involve a different class of object state change, such as mashing, spreading, or slicing, where the object's physical and visual state evolves progressively without necessarily changing its position. We present SPARTA, the first unified framework for the family of object state change manipulation tasks. Our key insight is that these tasks share a common structural pattern: they involve spatially-progressing, object-centric changes that can be represented as regions transitioning from an actionable to a transformed state. Building on this insight, SPARTA integrates spatially progressing object change segmentation maps, a visual skill that perceives actionable vs. transformed regions for specific object state change tasks, to generate a) structured policy observations that strip away appearance variability, and b) dense rewards that capture incremental progress over time. These are leveraged in two SPARTA policy variants: reinforcement learning for fine-grained control without demonstrations or simulation, and greedy control for fast, lightweight deployment. We validate SPARTA on a real robot for three challenging tasks across 10 diverse real-world objects, achieving significant improvements in training time and accuracy over sparse rewards and visual goal-conditioned baselines. Our results highlight progress-aware visual representations as a versatile foundation for the broader family of object state manipulation tasks. Project website: https://vision.cs.utexas.edu/projects/sparta-robot
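The abstract's core idea of deriving a dense reward from a segmentation of actionable vs. transformed regions can be sketched in a few lines. This is a minimal illustration, not SPARTA's actual implementation: the binary mask encoding, the `transformed_fraction` helper, and the per-step reward difference are all assumptions made here for clarity.

```python
import numpy as np

def transformed_fraction(seg_map: np.ndarray) -> float:
    """Fraction of object pixels in the 'transformed' state.

    seg_map: integer mask over the object region, where
      0 = actionable (untransformed), 1 = transformed.
    (Hypothetical encoding; the paper's exact representation may differ.)
    """
    return float(np.count_nonzero(seg_map == 1)) / seg_map.size

def dense_progress_reward(prev_seg: np.ndarray, curr_seg: np.ndarray) -> float:
    """Reward the incremental increase in transformed coverage per step."""
    return transformed_fraction(curr_seg) - transformed_fraction(prev_seg)

# Example: spreading progresses across a 4x4 object region.
prev = np.zeros((4, 4), dtype=int)
curr = prev.copy()
curr[:2, :] = 1  # top half of the object becomes transformed
reward = dense_progress_reward(prev, curr)  # 0.5
```

A reward of this shape is dense in the sense the abstract describes: every step that transforms new area yields a nonzero signal, in contrast to a sparse reward that fires only when the whole object is transformed.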