Dual-Foundation Models for Unsupervised Domain Adaptation
arXiv cs.CV / 5/6/2026
Key Points
- The paper tackles unsupervised domain adaptation (UDA) for semantic segmentation by addressing the persistent domain gap between labeled synthetic data and unlabeled real images.
- It identifies two weaknesses in prior methods: dependence on high-confidence pseudo-labels that limits learning coverage, and prototype/contrastive approaches that use biased, unstable anchors from source-trained models.
- The proposed dual-foundation framework combines SAM with superpixel-guided prompting to learn from a broader set of target pixels than high-confidence predictions alone.
- It also integrates DINOv3 to build stable, domain-invariant class prototypes via robust representation learning, improving alignment during adaptation.
- Experiments on GTA→Cityscapes and SYNTHIA→Cityscapes show consistent gains of +1.3% and +1.4% mIoU over strong UDA baselines, respectively.
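The stable-prototype idea from the last two points can be sketched as momentum-averaged class prototypes computed over frozen-backbone features. This is a minimal illustration, not the paper's implementation: the function name, shapes, and EMA momentum are assumptions, and random features stand in for real DINOv3 outputs.

```python
import numpy as np

def update_prototypes(feats, labels, prototypes, num_classes, momentum=0.9):
    """EMA update of per-class prototypes from frozen-backbone features.

    feats:      (N, D) pixel features (e.g. from a frozen backbone such as
                DINOv3; only the shapes matter here).
    labels:     (N,) pseudo-labels in [0, num_classes); 255 marks ignored pixels.
    prototypes: (num_classes, D) running class prototypes, updated in place.
    """
    for c in range(num_classes):
        mask = labels == c
        if not mask.any():
            continue  # class absent from this batch; keep the old prototype
        batch_proto = feats[mask].mean(axis=0)
        prototypes[c] = momentum * prototypes[c] + (1 - momentum) * batch_proto
    return prototypes

# Toy usage with random stand-in features (no real backbone required).
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))
labels = np.array([0, 0, 1, 1, 255, 255])  # two classes, two ignored pixels
protos = np.zeros((2, 4))
protos = update_prototypes(feats, labels, protos, num_classes=2)
```

Because the prototypes are built from a frozen, self-supervised backbone rather than the adapting segmentation head, they drift far less across training steps, which is the stability property the paper attributes to its DINOv3 branch.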