SutureAgent: Learning Surgical Trajectories via Goal-conditioned Offline RL in Pixel Space

arXiv cs.AI / 3/31/2026


Key Points

  • The paper presents SutureAgent, which predicts surgical needle trajectories from endoscopic video by reframing the problem as goal-conditioned sequential decision-making in pixel space.
  • By modeling the needle tip as an agent that moves step-by-step in pixel coordinates, the method captures continuity between adjacent motion steps and enforces physically plausible state transitions over time.
  • It leverages sparse waypoint annotations by converting them into denser supervisory signals via cubic spline interpolation, creating a dense reward structure that guides learning.
  • The approach uses a variable-length clip observation encoder for both spatial and long-range temporal understanding, and predicts future waypoints autoregressively with discrete direction choices plus continuous magnitudes.
  • Trained with Conservative Q-Learning plus Behavioral Cloning regularization for stable offline optimization, SutureAgent is reported to reduce Average Displacement Error by 58.6% versus the strongest baseline on a new kidney wound suturing dataset (1,158 trajectories from 50 patients).
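The spline-densification idea in the third key point can be sketched as follows. This is a minimal illustration, not the paper's implementation: the waypoint coordinates, time parameterization, and the Gaussian distance-based reward shape (`sigma`) are all assumptions introduced here for demonstration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical sparse waypoint annotations: (x, y) pixel coordinates
# at a handful of annotated frames, parameterized by frame index.
t_sparse = np.array([0.0, 1.0, 2.0, 3.0])
waypoints = np.array([[10, 20], [40, 35], [80, 30], [120, 60]], dtype=float)

# Fit one cubic spline per pixel coordinate (axis=0 splines each column).
spline = CubicSpline(t_sparse, waypoints, axis=0)

# Densify: sample 10 intermediate points per annotated segment.
t_dense = np.linspace(t_sparse[0], t_sparse[-1], 31)
dense_path = spline(t_dense)  # shape (31, 2)

def reward(pos, path=dense_path, sigma=5.0):
    """Assumed shaped reward: near 1 when the agent's pixel position lies
    on the densified path, decaying with distance from it."""
    d = np.linalg.norm(path - np.asarray(pos, dtype=float), axis=1).min()
    return float(np.exp(-(d / sigma) ** 2))
```

A pixel-space agent would then receive `reward(pos)` at each step, so even frames between the original sparse annotations carry a learning signal.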

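The hybrid action space described above (a discrete direction choice plus a continuous magnitude) can be illustrated with a small rollout sketch. The 8-way compass discretization and the specific actions are assumptions for illustration; the paper does not specify this exact parameterization here.

```python
import numpy as np

# Assumed discrete action set: 8 evenly spaced compass directions (unit vectors).
DIRS = np.array([(np.cos(a), np.sin(a))
                 for a in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)])

def step(pos, direction_idx, magnitude):
    """Apply one (discrete direction, continuous magnitude) action
    to the needle-tip pixel position."""
    return pos + magnitude * DIRS[direction_idx]

def rollout(start, actions):
    """Autoregressively apply a sequence of actions, returning all
    predicted waypoints including the starting position."""
    path = [np.asarray(start, dtype=float)]
    for idx, mag in actions:
        path.append(step(path[-1], idx, mag))
    return np.stack(path)

# Example: move 10 px along direction 0 (right), then 5 px along
# direction 2 (90 degrees, up in image coordinates).
traj = rollout((0.0, 0.0), [(0, 10.0), (2, 5.0)])
```

Each predicted waypoint becomes part of the state for the next prediction, which is what makes the decoding autoregressive rather than a one-shot regression of the full trajectory.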
Abstract

Predicting surgical needle trajectories from endoscopic video is critical for robot-assisted suturing, enabling anticipatory planning, real-time guidance, and safer motion execution. Existing methods that directly learn motion distributions from visual observations tend to overlook the sequential dependency among adjacent motion steps. Moreover, sparse waypoint annotations often fail to provide sufficient supervision, further increasing the difficulty of supervised or imitation learning. To address these challenges, we formulate image-based needle trajectory prediction as a sequential decision-making problem, in which the needle tip is treated as an agent that moves step by step in pixel space. This formulation naturally captures the continuity of needle motion and enables the explicit modeling of physically plausible pixel-wise state transitions over time. From this perspective, we propose SutureAgent, a goal-conditioned offline reinforcement learning framework that converts sparse annotations into dense reward signals via cubic spline interpolation, encouraging the policy to exploit limited expert guidance while exploring plausible future motion paths. SutureAgent encodes variable-length clips using an observation encoder to capture both local spatial cues and long-range temporal dynamics, and autoregressively predicts future waypoints through actions composed of discrete directions and continuous magnitudes. To enable stable offline policy optimization from expert demonstrations, we adopt Conservative Q-Learning with Behavioral Cloning regularization. Experiments on a new kidney wound suturing dataset containing 1,158 trajectories from 50 patients show that SutureAgent reduces Average Displacement Error by 58.6% compared with the strongest baseline, demonstrating the effectiveness of modeling needle trajectory prediction as pixel-level sequential action learning.
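The stability mechanism named in the abstract, Conservative Q-Learning with Behavioral Cloning regularization, combines two standard offline-RL terms. The tabular sketch below shows the general form of each term only; the loss weights (`alpha`, `beta`), the TD3+BC-style squared-error BC penalty, and the per-transition framing are assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def cql_critic_loss(q_row, a_data, td_target, alpha=1.0):
    """CQL-style critic loss for one transition (tabular sketch).

    q_row: Q-values over all discrete actions at the current state.
    The logsumexp term pushes Q down on out-of-distribution actions,
    while the -q_row[a_data] term keeps the demonstrated action's value up.
    """
    td_error = (q_row[a_data] - td_target) ** 2
    conservative = np.log(np.sum(np.exp(q_row))) - q_row[a_data]
    return td_error + alpha * conservative

def bc_regularized_actor_loss(pred_action, expert_action, q_value, beta=0.5):
    """Assumed actor objective: maximize the critic's value while penalizing
    deviation from the expert action (behavioral-cloning regularization)."""
    bc = np.sum((np.asarray(pred_action, dtype=float)
                 - np.asarray(expert_action, dtype=float)) ** 2)
    return -q_value + beta * bc
```

The conservative term is always non-negative (logsumexp upper-bounds any single Q-value), so it acts as a pessimism penalty that discourages the policy from exploiting Q-value overestimates on actions absent from the offline demonstrations.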