Enhancing Alignment for Unified Multimodal Models via Semantically-Grounded Supervision

arXiv cs.CV / 3/23/2026

📰 News · Models & Research

Key Points

  • SeGroS is proposed as a fine-tuning framework to address granularity mismatch and supervisory redundancy in Unified Multimodal Models (UMMs).
  • It introduces a novel visual grounding map that yields two complementary supervision signals: semantic Visual Hints and a semantically-grounded Corrupted Input.
  • Semantic Visual Hints compensate for sparse text prompts, while the semantically-grounded Corrupted Input restricts the reconstruction loss to core text-aligned regions, strengthening supervision for masking-based UMMs.
  • Evaluations on GenEval, DPGBench, and CompBench demonstrate improved generation fidelity and cross-modal alignment across multiple UMM architectures.
  • The results suggest SeGroS can enhance alignment and generation quality for future unified multimodal systems.

Abstract

Unified Multimodal Models (UMMs) have emerged as a promising paradigm that integrates multimodal understanding and generation within a unified modeling framework. However, current generative training paradigms suffer from inherent limitations. We present Semantically-Grounded Supervision (SeGroS), a fine-tuning framework designed to resolve the granularity mismatch and supervisory redundancy in UMMs. At its core, we propose a novel visual grounding map to construct two complementary supervision signals. First, we formulate semantic Visual Hints to compensate for the sparsity of text prompts. Second, we generate a semantically-grounded Corrupted Input to explicitly enhance the supervision of masking-based UMMs by restricting the reconstruction loss to core text-aligned regions. Extensive evaluations on GenEval, DPGBench, and CompBench demonstrate that SeGroS significantly improves generation fidelity and cross-modal alignment across various UMM architectures.
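The abstract does not give the loss formulation, but the idea of restricting reconstruction to text-aligned regions can be sketched as follows. This is an illustrative guess, not the paper's method: the function name, tensor shapes, and threshold are all assumptions, and a per-patch grounding map in [0, 1] is taken as given.

```python
import numpy as np

def grounded_reconstruction_loss(pred, target, mask, grounding_map, threshold=0.5):
    # Hypothetical sketch of a semantically-grounded reconstruction loss.
    # pred, target: (B, N, D) predicted / ground-truth patch values
    # mask: (B, N) binary array, 1 = patch was masked and must be reconstructed
    # grounding_map: (B, N) text-image alignment score per patch, in [0, 1]
    core = (grounding_map > threshold) & (mask > 0)   # text-aligned masked patches
    per_patch = ((pred - target) ** 2).mean(axis=-1)  # (B, N) per-patch MSE
    denom = max(core.sum(), 1)                        # avoid division by zero
    return (per_patch * core).sum() / denom           # loss over core regions only
```

Compared with a standard masking objective that averages the loss over all masked patches, this variant zeroes out patches whose grounding score falls below the threshold, so gradient signal comes only from regions the grounding map ties to the text prompt.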