120 Minutes and a Laptop: Minimalist Image-goal Navigation via Unsupervised Exploration and Offline RL
arXiv cs.RO / 3/30/2026
Key Points
- The paper proposes MINav, an approach for image-goal visual navigation that avoids reliance on large pretraining datasets and heavy compute by learning from data collected in-house.
- MINav frames navigation as offline goal-conditioned reinforcement learning, using unsupervised exploration to gather experience and hindsight goal relabeling to improve learning from collected trajectories.
- The authors report that data can be collected, a policy trained, and the result deployed for real-world navigation in under 120 minutes, using only a consumer laptop and without human intervention.
- Experiments in both simulation and real environments indicate improved exploration efficiency, better performance than zero-shot navigation baselines, and favorable scaling with dataset size.
- Overall, the work aims to reduce barriers to rapid robotic policy prototyping by demonstrating a fast, compute-light pipeline for real-world deployment.
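The hindsight goal relabeling mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it shows the common "future" relabeling strategy, where each transition is paired with goals drawn from states reached later in the same trajectory, so that even unsuccessful rollouts produce positive reward signal. All names and the tuple layout are hypothetical.

```python
import random

def relabel_trajectory(traj, num_relabels=4, rng=random):
    """Hindsight goal relabeling ("future" strategy, illustrative only).

    `traj` is a list of (obs, action, next_obs) tuples; observations are
    assumed hashable/comparable for simplicity (e.g. discretized states).
    Returns goal-conditioned transitions (obs, action, next_obs, goal, reward)
    with a sparse reward of 1.0 when the step reaches the sampled goal.
    """
    relabeled = []
    for t, (obs, action, next_obs) in enumerate(traj):
        # States reached from this step onward serve as achieved goals.
        future_states = [s for (_, _, s) in traj[t:]]
        for _ in range(num_relabels):
            goal = rng.choice(future_states)
            reward = 1.0 if next_obs == goal else 0.0
            relabeled.append((obs, action, next_obs, goal, reward))
    return relabeled
```

Because goals are sampled from states the agent actually reached, every trajectory contains some transitions with positive reward, which is what makes offline goal-conditioned RL learnable from purely unsupervised exploration data.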