DinoRADE: Full Spectral Radar-Camera Fusion with Vision Foundation Model Features for Multi-class Object Detection in Adverse Weather
arXiv cs.CV / 4/10/2026
Key Points
- DinoRADE is a radar-centered multi-modal perception pipeline designed to improve object detection robustness in adverse weather, using dense FMCW radar tensors fused with camera vision features.
- The method aggregates vision features around camera-transformed reference points using deformable cross-attention to better recover fine-grained spatial detail needed for detecting small vulnerable road users (VRUs).
- Vision input comes from a DINOv3 vision foundation model, enabling feature extraction that is then fused with radar features for multi-class detection.
- The authors evaluate on the K-Radar dataset across all weather conditions, report per-class performance for five object classes, and achieve a 12.1% improvement over prior radar-camera approaches.
- Code is released publicly under the RADE-Net repository, supporting reproducibility and further research on radar-camera fusion for safety-critical driving perception.
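The fusion step described above — projecting radar reference points into the image plane and aggregating vision features at learned offsets around them — can be sketched in minimal NumPy. This is an illustrative sketch, not the authors' implementation: the function names are hypothetical, it uses nearest-neighbor sampling instead of bilinear interpolation, a single attention head, and fixed (rather than learned) offsets and weights.

```python
import numpy as np

def project_to_image(point_3d, K):
    """Pinhole projection of a 3D point (camera frame, meters) to pixel coords.

    K is the 3x3 camera intrinsics matrix. (Hypothetical helper; the actual
    radar-to-camera transform in the paper may differ.)
    """
    x, y, z = point_3d
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return np.array([u, v])

def deformable_sample(feat_map, ref_uv, offsets, weights):
    """Aggregate vision features at ref_uv + offsets with softmax weights.

    feat_map: (H, W, C) vision feature map (e.g., from a DINOv3 backbone).
    ref_uv:   (2,) projected reference point in pixel coordinates.
    offsets:  (N, 2) sampling offsets (learned in the real model).
    weights:  (N,) raw attention logits, softmax-normalized here.
    Uses nearest-neighbor sampling with clamping at the image border.
    """
    w = np.exp(weights - weights.max())
    w /= w.sum()                       # softmax over sampling points
    H, W, C = feat_map.shape
    out = np.zeros(C)
    for (du, dv), wi in zip(offsets, w):
        u = int(np.clip(round(ref_uv[0] + du), 0, W - 1))
        v = int(np.clip(round(ref_uv[1] + dv), 0, H - 1))
        out += wi * feat_map[v, u]     # convex combination of sampled features
    return out

# Usage: project one radar reference point, then pool vision features around it.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0, 0.0, 1.0]])
feat_map = np.random.rand(64, 64, 16)          # stand-in vision features
ref_uv = project_to_image(np.array([0.5, 0.2, 10.0]), K)
offsets = np.array([[0, 0], [2, -1], [-3, 2], [1, 3]])
logits = np.array([1.0, 0.2, -0.5, 0.0])
fused = deformable_sample(feat_map, ref_uv, offsets, logits)   # (16,) vector
```

Because the weights are softmax-normalized, the aggregated vector is a convex combination of the sampled features; in the real model this pooled vision feature would be fused with the radar feature at the same reference point before the detection head.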