Learning Spatial Structure from Pre-Beamforming Per-Antenna Range-Doppler Radar Data via Visibility-Aware Cross-Modal Supervision
arXiv cs.CV / 4/3/2026
Key Points
- The paper examines whether automotive radar models can learn meaningful spatial structure directly from pre-beamforming per-antenna range-Doppler (RD) measurements, avoiding explicit angle-domain beamforming steps.
- Using a 6-TX × 8-RX commodity automotive radar with an A/B chirp-sequence FMCW (CS-FMCW) scheme that changes the effective transmit aperture across chirps, the authors analyze how chirp-dependent transmit configurations affect spatial recoverability.
- A dual-chirp, shared-weight end-to-end encoder is trained on pre-beamforming per-antenna RD tensors and evaluated via bird's-eye-view (BEV) occupancy, used as a geometry-focused probe rather than a pure performance metric.
- The supervision is visibility-aware and cross-modal: LiDAR-derived labels incorporate radar field-of-view and occlusion-aware LiDAR observability via ray-based visibility modeling.
- Chirp ablations and range-band analyses, alongside physics-aligned baselines, support the conclusion that spatial structure can be recovered without hand-crafted signal processing or explicit angle-domain construction.
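To make the "pre-beamforming per-antenna RD" input concrete, here is a minimal sketch of how such a tensor is conventionally formed from raw I/Q samples: a windowed range FFT over fast time followed by a Doppler FFT over slow time, kept separate per antenna (no angle FFT or beamforming). The array layout and window choice are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def per_antenna_range_doppler(raw_iq: np.ndarray) -> np.ndarray:
    """Pre-beamforming range-Doppler maps, one per antenna.

    raw_iq: complex array of shape (n_antennas, n_chirps, n_samples)
            holding raw I/Q samples (hypothetical layout).
    Returns a complex RD tensor of shape (n_antennas, n_chirps, n_samples),
    i.e. (antenna, Doppler bin, range bin).
    """
    # Range FFT over fast time (samples within a chirp); a Hann window
    # suppresses range sidelobes.
    win_r = np.hanning(raw_iq.shape[-1])
    range_fft = np.fft.fft(raw_iq * win_r, axis=-1)

    # Doppler FFT over slow time (across chirps); fftshift puts zero
    # velocity at the center Doppler bin. No angle-domain FFT is applied,
    # so the antenna axis stays raw.
    win_d = np.hanning(raw_iq.shape[1])[None, :, None]
    rd = np.fft.fftshift(np.fft.fft(range_fft * win_d, axis=1), axes=1)
    return rd

# Toy example: 8 RX antennas, 64 chirps, 128 fast-time samples.
rng = np.random.default_rng(0)
iq = rng.standard_normal((8, 64, 128)) + 1j * rng.standard_normal((8, 64, 128))
rd = per_antenna_range_doppler(iq)
print(rd.shape)  # (8, 64, 128)
```

A learned encoder consuming this tensor sees angle information only implicitly, through per-antenna phase differences, which is exactly what the paper probes.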
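The visibility-aware supervision idea, masking BEV labels to cells the sensors could actually observe, can be illustrated with a simple ray-based occlusion model: along each azimuth ray, cells up to the first LiDAR return are visible and cells behind it are occluded. Grid resolution, the polar parameterization, and the function name are illustrative assumptions, not the paper's exact visibility model.

```python
import numpy as np

def polar_visibility(points_xy: np.ndarray, n_az: int = 360,
                     r_max: float = 50.0, n_r: int = 100) -> np.ndarray:
    """Ray-based visibility mask on a polar BEV grid (azimuth, range).

    A cell is marked visible if it lies at or before the nearest LiDAR
    return along its azimuth ray; everything behind the first return is
    treated as occluded. Rays with no return are visible out to r_max.
    """
    az = np.arctan2(points_xy[:, 1], points_xy[:, 0])   # angle in [-pi, pi)
    r = np.hypot(points_xy[:, 0], points_xy[:, 1])      # range of each return
    az_bin = ((az + np.pi) / (2 * np.pi) * n_az).astype(int) % n_az

    # Nearest return per azimuth ray (unbuffered min-reduction per bin).
    first_hit = np.full(n_az, r_max)
    np.minimum.at(first_hit, az_bin, r)

    # Range-bin centers; a cell is visible where its range does not
    # exceed the first return on its ray.
    r_centers = (np.arange(n_r) + 0.5) * (r_max / n_r)
    visible = r_centers[None, :] <= first_hit[:, None]  # (n_az, n_r)
    return visible

# Two returns: one 10 m ahead, one 20 m to the left.
pts = np.array([[10.0, 0.0], [0.0, 20.0]])
mask = polar_visibility(pts)
print(mask.shape)  # (360, 100)
```

Intersecting such a mask with the radar field of view yields labels that only penalize the model where a ground-truth observation was actually possible, which is the role the paper assigns to its visibility modeling.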