MAST: Mask-Guided Attention Mass Allocation for Training-Free Multi-Style Transfer

arXiv cs.CV / April 15, 2026


Key Points

  • The paper introduces MAST, a training-free diffusion attention framework for multi-style image transfer that targets common problems like boundary artifacts, unstable stylization, and structural inconsistency.
  • MAST uses four connected modules—Layout-preserving Query Anchoring, Logit-level Attention Mass Allocation, Sharpness-aware Temperature Scaling, and Discrepancy-aware Detail Injection—to control how content and multiple style representations interact.
  • Layout-preserving Query Anchoring is designed to prevent global layout collapse by anchoring semantic structure using content queries.
  • Logit-level Attention Mass Allocation deterministically redistributes attention probability mass across spatial regions to fuse multiple styles while reducing boundary artifacts.
  • Experiments reported in the study indicate that MAST maintains structural consistency and texture fidelity while improving robustness as the number of applied styles increases.
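The paper's exact formulation of Logit-level Attention Mass Allocation is not given here, but the idea of deterministically redistributing attention probability mass across spatially masked style regions can be sketched as follows. All names (`allocate_attention_mass`, `segments`, `weights`) are illustrative assumptions, not the authors' API: each key belongs to one style's segment, and a per-query mask prescribes what fraction of the total attention mass each style should receive.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def allocate_attention_mass(logits, segments, weights, eps=1e-8):
    """Illustrative sketch of mask-guided attention mass allocation.

    logits:   (Q, K) attention logits over keys concatenated from
              several style references.
    segments: length-K integer array mapping each key to its style index.
    weights:  (Q, S) spatial-mask weights; row q gives the fraction of
              attention mass each style should receive at that query
              position (rows sum to 1).
    """
    probs = softmax(logits, axis=-1)
    out = np.zeros_like(probs)
    for s in range(weights.shape[1]):
        sel = segments == s
        # current total mass the softmax assigns to style s's keys
        seg_mass = probs[:, sel].sum(axis=1, keepdims=True)
        # rescale the segment so its total mass equals the mask weight;
        # relative attention *within* a style segment is preserved
        out[:, sel] = probs[:, sel] * (weights[:, s:s + 1] / (seg_mass + eps))
    return out

# Two styles of three keys each; every query sends 70% of its mass to style 0.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 6))
segments = np.array([0, 0, 0, 1, 1, 1])
weights = np.tile([0.7, 0.3], (4, 1))
attn = allocate_attention_mass(logits, segments, weights)
```

Because the per-style mass is fixed by the mask rather than emerging from a competition between style keys, queries on either side of a mask boundary receive smoothly varying style mixtures, which is one plausible reading of how boundary artifacts are suppressed.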

Abstract

Style transfer aims to render a content image with the visual characteristics of a reference style while preserving the content's underlying semantic layout and structural geometry. While recent diffusion-based models demonstrate strong stylization capabilities by leveraging powerful generative priors and controllable internal representations, they typically assume a single global style. Extending them to multi-style scenarios often leads to boundary artifacts, unstable stylization, and structural inconsistency due to interference between multiple style representations. To overcome these limitations, we propose MAST (Mask-Guided Attention Mass Allocation for Training-Free Multi-Style Transfer), a novel training-free framework that explicitly controls content-style interactions within the diffusion attention mechanism. To achieve artifact-free and structure-preserving stylization, MAST integrates four connected modules. First, Layout-preserving Query Anchoring prevents global layout collapse by firmly anchoring the semantic structure using content queries. Second, Logit-level Attention Mass Allocation deterministically distributes attention probability mass across spatial regions, seamlessly fusing multiple styles without boundary artifacts. Third, Sharpness-aware Temperature Scaling restores the attention sharpness degraded by multi-style expansion. Finally, Discrepancy-aware Detail Injection adaptively compensates for localized high-frequency detail losses by measuring structural discrepancies. Extensive experiments demonstrate that MAST effectively mitigates boundary artifacts and maintains structural consistency, preserving texture fidelity and spatial coherence even as the number of applied styles increases.
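The abstract's third module addresses a concrete effect: concatenating keys from several style references enlarges the softmax support, which flattens (raises the entropy of) the attention distribution. The paper's actual scaling rule is not reproduced here; one plausible sketch, with hypothetical names (`sharpen_to_entropy`, the entropy target, the bisection bounds), is to search for a temperature that restores the attention entropy measured in the single-style case:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def entropy(p, axis=-1, eps=1e-12):
    return -(p * np.log(p + eps)).sum(axis=axis)

def sharpen_to_entropy(logits, target_entropy, lo=0.05, hi=1.0, iters=40):
    """Illustrative sketch of sharpness-aware temperature scaling.

    Softmax entropy increases monotonically with temperature, so bisection
    finds a tau <= 1 such that softmax(logits / tau) matches the target
    mean entropy (e.g. the value observed with a single style).
    """
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        p = softmax(logits / tau, axis=-1)
        if entropy(p).mean() > target_entropy:
            hi = tau  # still too flat: lower the temperature further
        else:
            lo = tau  # too sharp: raise the temperature
    return 0.5 * (lo + hi)

# Logits over 12 keys (e.g. three concatenated style references); sharpen
# until the mean attention entropy drops to a single-style-like target.
rng = np.random.default_rng(1)
logits = rng.normal(size=(4, 12))
tau = sharpen_to_entropy(logits, target_entropy=1.5)
```

Dividing the logits by `tau < 1` before the softmax concentrates each query's attention back onto its strongest keys, counteracting the dilution introduced by the multi-style key expansion.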