Are Independently Estimated View Uncertainties Comparable? Unified Routing for Trusted Multi-View Classification
arXiv cs.LG / 4/13/2026
Key Points
- Trusted multi-view classification often assumes that per-view evidential uncertainties are numerically comparable, but that assumption breaks down when views differ in feature space, noise level, or semantic granularity and the branches are trained without any cross-view constraint on evidence strength.
- The paper argues that fusion uncertainties can become dominated by branch-specific scale bias rather than reflecting true sample-level reliability, since independently trained branches optimize mainly for prediction accuracy.
- It proposes TMUR (Trusted Multi-view learning with Unified Routing), which decouples view-specific evidence extraction from fusion arbitration using view-private experts plus a collaborative expert.
- TMUR introduces a unified router that uses global multi-view context to produce sample-level expert weights, with soft load-balancing and diversity regularization to promote balanced expert usage and specialization.
- The authors provide theoretical analysis explaining why independent evidential supervision cannot recover a shared cross-view evidence scale, and why unified global routing is more appropriate than branch-local arbitration when reliability varies by sample.
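The decoupling described above can be illustrated with a minimal NumPy sketch: view-private experts and a collaborative expert each emit non-negative Dirichlet evidence, while a single router conditioned on the global multi-view context (rather than each branch's own, non-comparable uncertainty) produces sample-level expert weights. All names, shapes, and the quadratic load-balancing penalty here are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of unified routing over evidential experts.
# Shapes, weights, and the balance penalty are assumptions for clarity.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

n_views, n_classes, d = 2, 3, 4
n_experts = n_views + 1  # view-private experts + one collaborative expert

# Toy per-view features for a single sample.
views = [rng.normal(size=d) for _ in range(n_views)]

# Each expert maps its input to non-negative Dirichlet evidence.
# View-private experts see one view; the collaborative expert sees all views.
W_experts = [rng.normal(size=(d if i < n_views else d * n_views, n_classes))
             for i in range(n_experts)]

def expert_evidence(i):
    x = views[i] if i < n_views else np.concatenate(views)
    return np.maximum(x @ W_experts[i], 0.0)  # ReLU keeps evidence >= 0

# Unified router: sample-level expert weights from the *global* context,
# not from branch-local (scale-biased) uncertainties.
W_router = rng.normal(size=(d * n_views, n_experts))
gate = softmax(np.concatenate(views) @ W_router)

# Fuse expert evidences under the router's arbitration.
evidence = sum(g * expert_evidence(i) for i, g in enumerate(gate))
alpha = evidence + 1.0                 # Dirichlet concentration parameters
uncertainty = n_classes / alpha.sum()  # evidential (vacuity) uncertainty

# Soft load-balancing term: keep average gate usage near uniform so no
# expert collapses; diversity regularization would be added analogously.
balance_loss = ((gate - 1.0 / n_experts) ** 2).mean()
```

Because the gate is a function of the concatenated multi-view context, two samples with the same per-branch outputs but different global context can receive different expert weightings, which is the sample-level arbitration the key points describe.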