AdaTracker: Learning Adaptive In-Context Policy for Cross-Embodiment Active Visual Tracking
arXiv cs.RO / 4/23/2026
Key Points
- AdaTracker tackles the challenge of active visual tracking across diverse robots by using a single unified model instead of training separate models per robot embodiment.
- The approach infers embodiment-specific physical constraints from prior interaction history via an Embodiment Context Encoder, then uses that inferred context to dynamically modulate a context-aware policy that selects control actions.
- Two auxiliary objectives keep the inferred context accurate and temporally consistent, which is what enables zero-shot adaptation to unseen embodiments.
- Experiments in both simulation and real-world settings show AdaTracker improves performance over state-of-the-art methods in cross-embodiment generalization, sample efficiency, and adaptation without additional training.
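The encode-then-modulate mechanism described above can be sketched as follows. The paper's actual architecture is not given here, so the linear embedding with mean pooling, the FiLM-style scale-and-shift modulation, and all dimensions are illustrative assumptions:

```python
import numpy as np

class EmbodimentContextEncoder:
    """Maps a history of (obs, action) pairs to a fixed-size context
    vector via a linear embedding and mean pooling (a placeholder for
    the paper's encoder; weights here are random, untrained)."""
    def __init__(self, obs_dim, act_dim, ctx_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((obs_dim + act_dim, ctx_dim)) * 0.1

    def __call__(self, history):
        # history: list of (obs, action) pairs from prior interaction
        embedded = np.stack([np.concatenate([o, a]) @ self.W
                             for o, a in history])
        return embedded.mean(axis=0)  # temporal pooling -> context vector

class ContextAwarePolicy:
    """FiLM-style modulation: the context produces per-feature scale and
    shift terms applied to observation features before action selection
    (one plausible reading of 'dynamically modulate')."""
    def __init__(self, obs_dim, ctx_dim, act_dim, seed=1):
        rng = np.random.default_rng(seed)
        self.W_gamma = rng.standard_normal((ctx_dim, obs_dim)) * 0.1
        self.W_beta = rng.standard_normal((ctx_dim, obs_dim)) * 0.1
        self.W_pi = rng.standard_normal((obs_dim, act_dim)) * 0.1

    def __call__(self, obs, context):
        gamma = 1.0 + context @ self.W_gamma  # context-dependent scale
        beta = context @ self.W_beta          # context-dependent shift
        h = np.tanh(gamma * obs + beta)       # modulated features
        return h @ self.W_pi                  # continuous control action

# Zero-shot use on an unseen embodiment: encode its interaction history,
# then condition the single shared policy on the inferred context.
obs_dim, act_dim, ctx_dim = 6, 2, 4
encoder = EmbodimentContextEncoder(obs_dim, act_dim, ctx_dim)
policy = ContextAwarePolicy(obs_dim, ctx_dim, act_dim)

rng = np.random.default_rng(2)
history = [(rng.standard_normal(obs_dim), rng.standard_normal(act_dim))
           for _ in range(10)]
context = encoder(history)
action = policy(rng.standard_normal(obs_dim), context)
```

No per-embodiment retraining appears anywhere in this flow: only the context vector changes when the robot does, which is the sense in which one unified model covers diverse embodiments.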