ToFormer: Towards Large-scale Scenario Depth Completion for Lightweight ToF Camera
arXiv cs.RO / 3/24/2026
Key Points
- The paper introduces ToFormer, a full-stack approach to large-scale depth completion for short-range Time-of-Flight (ToF) cameras, aiming to overcome the range limitations of lightweight ToF sensors in robotics.
- It builds a multi-sensor data collection platform and releases the LASER-ToF dataset with dense, large-scale real-world ground truth specifically for ToF depth completion.
- The proposed sensor-aware network uses a novel 3D branch with 3D-2D Joint Propagation Pooling (JPP) and Multimodal Cross-Covariance Attention (MXCA) to better capture long-range dependencies under non-uniform ToF sparsity.
- The method can further improve accuracy by leveraging sparse point clouds from visual SLAM alongside ToF measurements.
- Experiments report an 8.6% reduction in mean absolute error versus the second-best baseline, and the system is validated on a quadrotor running at 10 Hz for real-world long-range mapping and planning.
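The digest does not detail how the Multimodal Cross-Covariance Attention (MXCA) block works. Assuming it follows the XCiT-style cross-covariance formulation — attention computed over channel pairs rather than token pairs, so the attention map is d × d and its cost does not grow with image resolution, which suits capturing long-range dependencies — a minimal dependency-free sketch (the cross-modal Q/K/V split is our assumption, not the paper's stated design):

```python
import math

def l2norm_cols(M):
    # L2-normalize each column (channel) of an N x d matrix
    n, d = len(M), len(M[0])
    out = [[0.0] * d for _ in range(n)]
    for j in range(d):
        s = math.sqrt(sum(M[i][j] ** 2 for i in range(n))) or 1.0
        for i in range(n):
            out[i][j] = M[i][j] / s
    return out

def softmax(row):
    m = max(row)
    e = [math.exp(x - m) for x in row]
    s = sum(e)
    return [x / s for x in e]

def cross_covariance_attention(Q, K, V, tau=1.0):
    """XCiT-style cross-covariance attention.

    Q, K, V are N x d (tokens x channels). The attention map A is d x d
    (channel pairs), so its size is independent of the token count N.
    In a multimodal setting, Q might come from one branch (e.g. image
    features) and K, V from the other (e.g. sparse ToF/3D features);
    that pairing is an illustrative assumption here.
    """
    Qh, Kh = l2norm_cols(Q), l2norm_cols(K)
    n, d = len(Q), len(Q[0])
    # A[c1][c2] = softmax over c2 of (Kh^T Qh)[c1][c2] / tau
    A = []
    for c1 in range(d):
        row = [sum(Kh[i][c1] * Qh[i][c2] for i in range(n)) / tau
               for c2 in range(d)]
        A.append(softmax(row))
    # Re-mix each token's channels through the channel-attention map: V @ A
    return [[sum(V[i][c1] * A[c1][c2] for c1 in range(d))
             for c2 in range(d)] for i in range(n)]
```

Because the d × d attention map mixes channels globally across all tokens, every output position aggregates information from the whole image, which is one plausible reading of how MXCA helps under non-uniform ToF sparsity.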