M2StyleGS: Multi-Modality 3D Style Transfer with Gaussian Splatting

arXiv cs.CV / 4/7/2026


Key Points

  • M2StyleGS is a proposed real-time 3D style transfer method that uses 3D Gaussian Splatting to produce sequences of precisely color-mapped novel views.
  • Instead of relying only on a fixed reference image, the approach supports flexible multi-modal inputs such as text descriptions and diverse images, using CLIP to refine the reference style features (see the sketch after this list).
  • The method addresses abnormal transformations with “subdivisive flow,” a precise feature-alignment step that strengthens the projection of the combined CLIP text-visual feature into VGG-based style features.
  • It introduces observation loss to better match the reference style during generation and suppression loss to reduce drift of reference color information across decoding.
  • Experiments report improved visual quality and up to 32.92% better consistency than prior work, suggesting stronger generalization for stylized 3D view synthesis.
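
The multi-modal reference pathway can be pictured with a small, self-contained sketch. The snippet below is a minimal illustration, not the paper's implementation: it encodes either a text prompt or a style image with OpenAI's CLIP (ViT-B/32, 512-d embeddings) and passes the embedding through a hypothetical projection head (StyleProjector, a name introduced here) that maps it toward per-channel VGG feature statistics, standing in for the paper's mapped CLIP-to-VGG projection.

```python
# Minimal sketch of a multi-modal reference style encoder (illustrative only).
# Assumes the openai/CLIP package and PyTorch; StyleProjector is a hypothetical
# stand-in for the paper's learned mapping from CLIP space to VGG style features.
import torch
import torch.nn as nn
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)

class StyleProjector(nn.Module):
    """Hypothetical head mapping a 512-d CLIP embedding to per-channel
    VGG feature statistics (e.g. 256 means and 256 standard deviations)."""
    def __init__(self, clip_dim=512, vgg_channels=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(clip_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 2 * vgg_channels),
        )

    def forward(self, clip_feat):
        stats = self.mlp(clip_feat)
        mean, std = stats.chunk(2, dim=-1)
        return mean, std.abs()  # keep predicted standard deviations non-negative

@torch.no_grad()
def encode_reference(text=None, image_path=None):
    """Return a unit-normalised CLIP embedding from a text prompt or a style image."""
    if text is not None:
        tokens = clip.tokenize([text]).to(device)
        feat = clip_model.encode_text(tokens)
    else:
        img = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
        feat = clip_model.encode_image(img)
    return feat / feat.norm(dim=-1, keepdim=True)

projector = StyleProjector().to(device)
ref = encode_reference(text="an oil painting in the style of Van Gogh")
style_mean, style_std = projector(ref.float())  # style targets for the 3DGS scene
```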

Abstract

Conventional 3D style transfer methods rely on a fixed reference image to apply artistic patterns to 3D scenes. However, in practical applications such as virtual or augmented reality, users often prefer more flexible inputs, including textual descriptions and diverse imagery. In this work, we introduce M2StyleGS, a novel real-time styling technique that generates a sequence of precisely color-mapped views. It uses 3D Gaussian Splatting (3DGS) as the 3D representation and multi-modality knowledge refined by CLIP as the reference style. M2StyleGS resolves the abnormal transformation issue with a precise feature alignment, termed subdivisive flow, which strengthens the projection of the mapped CLIP text-visual combination feature onto the VGG style feature. In addition, we introduce an observation loss, which helps the stylized scene better match the reference style during generation, and a suppression loss, which suppresses drift of the reference color information throughout the decoding process. By integrating these components, M2StyleGS can use text or images as references to generate a set of style-enhanced novel views. Our experiments show that M2StyleGS achieves better visual quality and surpasses previous work by up to 32.92% in terms of consistency.
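
The abstract does not spell out the exact form of the two losses, so the sketch below uses common stand-ins only to show where they would plug in: a Gram-matrix term on VGG features of each rendered view stands in for the observation loss (matching the reference style), and an L1 penalty on per-channel color statistics stands in for the suppression loss (limiting drift of reference color information). An image reference is assumed for simplicity; with a text reference the target statistics would come from a CLIP-to-VGG projection like the one sketched above. The weights lambda_obs and lambda_sup are hypothetical.

```python
# Illustrative stand-ins for the observation and suppression losses; the
# actual formulations are defined in the paper and may differ.
import torch
import torch.nn.functional as F
from torchvision import models

# Frozen VGG-16 feature extractor up to relu3_3 (features[:16]).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feat):
    """Channel-correlation (Gram) matrix of a (B, C, H, W) feature map."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def observation_term(rendered_view, reference_image):
    """Stand-in for the observation loss: pull the rendered view's VGG
    Gram statistics toward those of the reference style."""
    return F.mse_loss(gram(vgg(rendered_view)), gram(vgg(reference_image)))

def suppression_term(rendered_view, reference_image):
    """Stand-in for the suppression loss: penalise drift of per-channel
    color statistics away from the reference colors."""
    return F.l1_loss(rendered_view.mean(dim=(2, 3)),
                     reference_image.mean(dim=(2, 3)))

def style_loss(rendered_view, reference_image, lambda_obs=1.0, lambda_sup=0.1):
    # Hypothetical weighting; the usual 3DGS rendering losses would be
    # added on top of this term during optimisation.
    return (lambda_obs * observation_term(rendered_view, reference_image)
            + lambda_sup * suppression_term(rendered_view, reference_image))
```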
