BLOSSOM: Block-wise Federated Learning Over Shared and Sparse Observed Modalities
arXiv cs.LG · March 31, 2026
Key Points
- The paper introduces BLOSSOM, a task-agnostic multimodal federated learning framework designed for realistic conditions in which clients observe different, and often incomplete, sets of modalities.
- BLOSSOM allows flexible sharing of model components: a block-wise strategy aggregates shared blocks across clients while keeping task-specific blocks private, enabling partial personalization (see the sketch after this list).
- This design is intended to handle both client heterogeneity and task heterogeneity more effectively than methods that assume uniform modality availability across clients.
- Experiments across multiple multimodal datasets show that block-wise personalization can substantially improve performance under severe modality sparsity.
- Reported gains include an average 18.7% improvement over full-model aggregation in modality-incomplete settings and 37.7% in modality-exclusive scenarios, underscoring BLOSSOM’s practical value for multimodal FL deployments.
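To make the block-wise aggregation idea concrete, here is a minimal sketch. The block names (`enc_image`, `enc_text`, `head`) and the `blockwise_aggregate` helper are hypothetical illustrations, not the paper's actual API: shared blocks are averaged across whichever clients hold them, while task-specific blocks stay local.

```python
# A minimal sketch of block-wise aggregation under sparse modalities.
# Block names and the helper below are hypothetical, not BLOSSOM's API.
from collections import defaultdict
import numpy as np

def blockwise_aggregate(client_models, shared_blocks):
    """Average only the blocks listed in `shared_blocks`; all other
    (task-specific) blocks remain private to each client."""
    # Sum shared blocks, skipping clients that lack a block
    # (e.g. an encoder for a modality the client never observes).
    sums, counts = defaultdict(lambda: 0.0), defaultdict(int)
    for model in client_models:
        for name, params in model.items():
            if name in shared_blocks:
                sums[name] = sums[name] + params
                counts[name] += 1
    global_blocks = {name: sums[name] / counts[name] for name in sums}

    # Each client overwrites its shared blocks with the aggregate and
    # keeps its private blocks untouched (partial personalization).
    for model in client_models:
        for name in model:
            if name in global_blocks:
                model[name] = global_blocks[name]
    return client_models

# Usage: two clients with overlapping but unequal modality encoders.
clients = [
    {"enc_image": np.ones(4), "enc_text": np.zeros(4), "head": np.ones(2)},
    {"enc_image": 3 * np.ones(4), "head": np.zeros(2)},  # no text modality
]
clients = blockwise_aggregate(clients, shared_blocks={"enc_image", "enc_text"})
print(clients[0]["enc_image"])  # averaged across both clients -> [2. 2. 2. 2.]
print(clients[1]["head"])       # private head left as-is     -> [0. 0.]
```

Note that the aggregate for each shared block is computed only over the clients that actually hold it, which is what lets the scheme tolerate missing modality sets rather than requiring uniform availability.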