A Vision-Language-Action Model for Adaptive Ultrasound-Guided Needle Insertion and Needle Tracking
arXiv cs.RO / 4/23/2026
Key Points
- The paper proposes a Vision-Language-Action (VLA) model to unify robotic ultrasound (RUS)-based needle tracking and adaptive needle insertion under dynamic imaging conditions.
- It introduces a Cross-Depth Fusion (CDF) tracking head that combines shallow positional signals with deep semantic features to support real-time, end-to-end tracking (first sketch after this list).
- To adapt a large pretrained vision backbone for tracking efficiently, the authors add a Tracking-Conditioning (TraCon) register for parameter-efficient feature conditioning (second sketch below).
- For insertion control, the method pairs an uncertainty-aware control policy with an asynchronous VLA pipeline to enable timely, safer decisions (third sketch below).
- Experiments report consistent improvements over state-of-the-art trackers and manual operation, including higher tracking accuracy, better insertion success rates, and shorter procedure times.
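
The first sketch below illustrates the general idea behind a cross-depth fusion head: project a shallow (fine positional) feature map and a deep (semantic) feature map into a shared channel space, fuse them, and regress a needle-tip heatmap. The class name, layer choices, and dimensions are illustrative assumptions, not the paper's exact CDF architecture.

```python
import torch
import torch.nn as nn

class CrossDepthFusionHead(nn.Module):
    """Minimal sketch of a cross-depth fusion tracking head (hypothetical
    design; only the shallow+deep fusion idea comes from the summary)."""

    def __init__(self, shallow_dim: int, deep_dim: int, fused_dim: int = 256):
        super().__init__()
        # Project both depth levels into a shared channel space.
        self.shallow_proj = nn.Conv2d(shallow_dim, fused_dim, kernel_size=1)
        self.deep_proj = nn.Conv2d(deep_dim, fused_dim, kernel_size=1)
        # Fuse the concatenated maps and predict a single-channel heatmap.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * fused_dim, fused_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused_dim, 1, kernel_size=1),
        )

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Upsample the coarser deep features to the shallow resolution.
        deep = nn.functional.interpolate(
            deep, size=shallow.shape[-2:], mode="bilinear", align_corners=False
        )
        fused = torch.cat([self.shallow_proj(shallow), self.deep_proj(deep)], dim=1)
        return self.fuse(fused)  # (B, 1, H, W) needle-tip heatmap logits

# Example: a 64-channel shallow map at 64x64 and a 768-channel deep map at 16x16.
head = CrossDepthFusionHead(shallow_dim=64, deep_dim=768)
heatmap = head(torch.randn(1, 64, 64, 64), torch.randn(1, 768, 16, 16))
print(heatmap.shape)  # torch.Size([1, 1, 64, 64])
```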
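The second sketch shows one common way a "register" achieves parameter-efficient conditioning of a frozen backbone: a few learnable tokens are appended to the token sequence, and only those tokens (plus the task head) are trained. Whether TraCon works exactly this way is an assumption; names and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class TrackingConditioningRegisters(nn.Module):
    """Register-style conditioning sketch: the pretrained backbone stays
    frozen, and a small set of learnable register tokens carries the
    tracking-specific adaptation (assumed mechanism, not the paper's)."""

    def __init__(self, backbone: nn.Module, embed_dim: int, num_registers: int = 8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # keep the pretrained weights frozen
        self.registers = nn.Parameter(torch.zeros(1, num_registers, embed_dim))
        nn.init.trunc_normal_(self.registers, std=0.02)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        b = patch_tokens.shape[0]
        # Append trainable registers so attention inside the frozen blocks
        # can route tracking-specific information through them.
        tokens = torch.cat([patch_tokens, self.registers.expand(b, -1, -1)], dim=1)
        out = self.backbone(tokens)
        # Drop the register outputs; the conditioned patch tokens feed the head.
        return out[:, : patch_tokens.shape[1]]

# Example with a stand-in "backbone": one frozen transformer encoder layer.
layer = nn.TransformerEncoderLayer(d_model=384, nhead=6, batch_first=True)
model = TrackingConditioningRegisters(layer, embed_dim=384, num_registers=8)
conditioned = model(torch.randn(2, 196, 384))
print(conditioned.shape)  # torch.Size([2, 196, 384])
```

Only `model.registers` receives gradients here, which is what makes the adaptation parameter-efficient relative to fine-tuning the whole backbone.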
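The third sketch illustrates the control-side ideas: a slow policy loop that gates its own commands on a scalar uncertainty estimate, running asynchronously alongside a fast tracking/control loop that never blocks on it. The threshold, rates, and random placeholder outputs are all assumptions; only the uncertainty-gated, asynchronous structure reflects the summary.

```python
import asyncio
import random

UNCERTAINTY_GATE = 0.2  # illustrative threshold; the paper's value is not given

latest_command = {"advance_mm": 0.0}  # shared state written by the slow policy

async def vla_policy_loop():
    """Slow loop (~2 Hz here) standing in for the VLA policy. Emits an
    insertion command plus a scalar uncertainty; when uncertainty exceeds
    the gate, it holds the needle instead of advancing. Random outputs
    are placeholders for real model predictions."""
    while True:
        advance, uncertainty = random.uniform(0.0, 1.0), random.uniform(0.0, 0.4)
        if uncertainty > UNCERTAINTY_GATE:
            latest_command["advance_mm"] = 0.0  # hold: too uncertain to act
        else:
            latest_command["advance_mm"] = advance
        await asyncio.sleep(0.5)

async def tracking_control_loop(steps: int = 20):
    """Fast loop (~20 Hz here) applying the most recent safe command at
    imaging rate without blocking on the slower policy, which is the
    point of the asynchronous split."""
    for _ in range(steps):
        print(f"advancing {latest_command['advance_mm']:.2f} mm")
        await asyncio.sleep(0.05)

async def main():
    policy = asyncio.create_task(vla_policy_loop())
    await tracking_control_loop()
    policy.cancel()

asyncio.run(main())
```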