UniCorrn: Unified Correspondence Transformer Across 2D and 3D
arXiv cs.CV / 5/6/2026
Key Points
- The paper introduces UniCorrn, a new correspondence-matching model that uses shared weights to unify geometric matching across 2D-2D, 2D-3D, and 3D-3D tasks.
- It argues that Transformer attention can naturally capture cross-modal feature similarity and uses a dual-stream decoder to separately preserve appearance and positional features.
- UniCorrn uses modality-specific backbones followed by shared encoder/decoder components, enabling end-to-end training with stackable layers and query-based correspondence estimation across heterogeneous modalities.
- Trained jointly on diverse data (including pseudo point clouds from depth maps plus real 3D correspondence annotations), the model delivers competitive results for 2D-2D matching.
- It reports state-of-the-art registration recall, with improvements of 8% on 7Scenes for 2D-3D matching and 10% on 3DLoMatch for 3D-3D matching over prior methods.
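The core idea in the key points — that Transformer attention can capture cross-modal feature similarity once modality-specific backbones project inputs into a shared feature space — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random linear maps standing in for learned backbones, the dimensions, and the single cross-attention call are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical modality-specific "backbones": random linear projections
# standing in for the learned networks that map each modality into a
# shared d-dimensional feature space.
d = 16
W_2d = rng.standard_normal((3, d))  # e.g. RGB pixel features -> d
W_3d = rng.standard_normal((3, d))  # e.g. XYZ point features -> d

pixels = rng.standard_normal((5, 3))  # 5 tokens from a 2D image
points = rng.standard_normal((7, 3))  # 7 tokens from a 3D point cloud

f2d = pixels @ W_2d  # (5, d)
f3d = points @ W_3d  # (7, d)

# A single shared cross-attention layer: the same weights (here, none at
# all beyond the projections) serve any modality pair, because attention
# operates only on feature similarity, not on modality type.
def cross_attention(q, k, v):
    scores = q @ k.T / np.sqrt(q.shape[-1])  # pairwise similarity
    return softmax(scores, axis=-1) @ v      # similarity-weighted aggregation

matched = cross_attention(f2d, f3d, f3d)  # 2D queries attend over 3D tokens
print(matched.shape)  # (5, 16)
```

In this toy setup, the attention scores between 2D and 3D tokens play the role of a soft correspondence matrix; the paper's query-based estimation and dual-stream decoder refine this basic mechanism with separate appearance and positional streams.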