SAW: Toward a Surgical Action World Model via Controllable and Scalable Video Generation
arXiv cs.CV / 3/16/2026
Key Points
- SAW (Surgical Action World) is a diffusion-based surgical world model that generates realistic surgical action videos with precise control over tool-tissue interactions.
- It conditions video generation on four lightweight signals: language prompts encoding tool-action context, a reference surgical scene, a tissue affordance mask, and 2D tool-tip trajectories, enabling trajectory-conditioned action synthesis.
- The backbone diffusion model is fine-tuned on a dataset of 12,044 laparoscopic clips and uses a depth-consistency loss to enforce geometric plausibility without requiring depth data at inference.
- SAW achieves state-of-the-art temporal consistency (CD-FVD: 199.19 vs. 546.82) and demonstrates downstream utility for surgical AI (improved action recognition) and surgical simulation (more faithful rendering of tool-tissue interactions).
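The four conditioning signals listed above can be thought of as a lightweight control bundle passed to the video diffusion backbone. The sketch below is an illustrative assumption about how such a bundle might be validated and packed; the shapes, names, and `build_conditioning` function are hypothetical, not the paper's actual API.

```python
import numpy as np

def build_conditioning(prompt, reference_frame, affordance_mask, tooltip_traj):
    """Bundle the four lightweight control signals described for SAW.

    Illustrative shapes (assumptions, not the paper's spec):
      reference_frame : (H, W, 3) uint8 reference surgical scene
      affordance_mask : (H, W) binary tissue-affordance mask
      tooltip_traj    : (T, 2) 2D tool-tip coordinates, one per frame
    """
    # Basic shape checks so mismatched signals fail early.
    assert reference_frame.ndim == 3 and reference_frame.shape[2] == 3
    assert affordance_mask.shape == reference_frame.shape[:2]
    assert tooltip_traj.ndim == 2 and tooltip_traj.shape[1] == 2
    return {
        "prompt": prompt,                              # tool-action language context
        "reference": reference_frame,                  # anchors scene appearance
        "affordance": affordance_mask.astype(np.float32),
        "trajectory": tooltip_traj.astype(np.float32), # drives action synthesis
    }
```

Keeping the signals this lightweight (text, one frame, one mask, a 2D point track) is what makes trajectory-conditioned generation controllable without heavy 3D supervision.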
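The depth-consistency loss mentioned above is applied only during training, so no depth maps are needed at inference. The exact formulation is not given in this summary; the sketch below shows one common way such a loss could work under the assumption that a frozen monocular depth estimator predicts depth for generated and real frames, with a median normalization to cancel the estimator's unknown global scale.

```python
import numpy as np

def depth_consistency_loss(depth_gen, depth_real, eps=1e-6):
    """Hypothetical scale-invariant L1 depth-consistency loss (assumed form).

    depth_gen, depth_real: (T, H, W) depth maps predicted by a frozen
    monocular depth estimator on generated and real video frames.
    Dividing each map by its median removes the per-frame global scale,
    so only relative geometry is penalized.
    """
    def normalize(d):
        med = np.median(d, axis=(1, 2), keepdims=True)
        return d / (med + eps)
    return float(np.mean(np.abs(normalize(depth_gen) - normalize(depth_real))))
```

Because the normalization is scale-invariant, a generated clip whose depth differs from the real clip only by a constant factor incurs (near) zero loss, which focuses training on geometric structure rather than absolute scale.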