StyleVAR: Controllable Image Style Transfer via Visual Autoregressive Modeling

arXiv cs.CV / 4/24/2026

Key Points

  • The paper proposes StyleVAR, a controllable image style transfer method built on the Visual Autoregressive Modeling (VAR) framework and cast as conditional discrete sequence modeling in a learned latent space.
  • It tokenizes multi-scale image representations using a VQ-VAE, then uses a transformer to autoregressively generate target tokens conditioned on both style and content, guided by a blended cross-attention mechanism.
  • A scale-dependent blending coefficient balances the influence of style versus content at each stage to preserve VAR’s autoregressive continuity while matching both content structure and style texture.
  • StyleVAR is trained in two stages (SFT on content–style–target triplets, then reinforcement fine-tuning with GRPO using a DreamSim-based perceptual reward), and it outperforms an AdaIN baseline on multiple metrics across several benchmarks.
  • The results show strong qualitative transfer for landscapes and architectural scenes, but performance gaps on internet images and challenges with human faces indicate a need for better content diversity and structural priors.
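The blended cross-attention in the key points above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the tensor shapes, the pooling over query tokens, and the way the scale-dependent coefficient `alpha_k` enters the blend are all illustrative choices.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, history, d):
    # Queries attend over the target's token history (keys = values = history).
    scores = queries @ history.T / np.sqrt(d)
    return softmax(scores) @ history

def blended_cross_attention(target_hist, style_feats, content_feats, alpha_k):
    """Blend style- and content-guided reads of the target history.

    alpha_k is the scale-dependent blending coefficient: larger alpha_k
    gives the style features more influence at this scale (an assumed
    form; the paper's exact parameterization may differ).
    """
    d = target_hist.shape[-1]
    style_read = cross_attention(style_feats, target_hist, d)
    content_read = cross_attention(content_feats, target_hist, d)
    # Scale-dependent mix of the two guidance signals.
    return alpha_k * style_read.mean(0) + (1 - alpha_k) * content_read.mean(0)

rng = np.random.default_rng(0)
hist = rng.normal(size=(16, 64))     # target token history at scale k
style = rng.normal(size=(8, 64))     # style feature tokens
content = rng.normal(size=(8, 64))   # content feature tokens
out = blended_cross_attention(hist, style, content, alpha_k=0.7)
print(out.shape)
```

In a real VAR-style model these reads would feed back into the transformer's residual stream at each scale, so the autoregressive generation of the next, finer token map stays conditioned on both signals.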

Abstract

We build on the Visual Autoregressive Modeling (VAR) framework and formulate style transfer as conditional discrete sequence modeling in a learned latent space. Images are decomposed into multi-scale representations and tokenized into discrete codes by a VQ-VAE; a transformer then autoregressively models the distribution of target tokens conditioned on style and content tokens. To inject style and content information, we introduce a blended cross-attention mechanism in which the evolving target representation attends to its own history, while style and content features act as queries that decide which aspects of this history to emphasize. A scale-dependent blending coefficient controls the relative influence of style and content at each stage, encouraging the synthesized representation to align with both the content structure and the style texture without breaking the autoregressive continuity of VAR. We train StyleVAR in two stages from a pretrained VAR checkpoint: supervised fine-tuning on a large triplet dataset of content–style–target images, followed by reinforcement fine-tuning with Group Relative Policy Optimization (GRPO) against a DreamSim-based perceptual reward, with per-action normalization weighting to rebalance credit across VAR's multi-scale hierarchy. Across three benchmarks spanning in-, near-, and out-of-distribution regimes, StyleVAR consistently outperforms an AdaIN baseline on Style Loss, Content Loss, LPIPS, SSIM, DreamSim, and CLIP similarity, and the GRPO stage yields further gains over the SFT checkpoint, most notably on the reward-aligned perceptual metrics. Qualitatively, the method transfers texture while maintaining semantic structure, especially for landscapes and architectural scenes, while a generalization gap on internet images and difficulty with human faces highlight the need for better content diversity and stronger structural priors.
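The reinforcement stage described in the abstract can be sketched in NumPy: GRPO scores a group of sampled outputs, normalizes rewards within the group to get advantages, and (per the abstract) reweights credit across VAR's multi-scale hierarchy. The weighting scheme below (inverse token count per scale) is a hypothetical form chosen for illustration, as are the group size, reward values, and scale sizes.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalize rewards within one sampled group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

def per_scale_weights(tokens_per_scale):
    """Assumed per-action normalization weighting: downweight scales with
    many tokens so coarse scales are not drowned out by fine ones."""
    n = np.asarray(tokens_per_scale, dtype=float)
    w = 1.0 / n
    return w / w.sum()

# One group of G=4 samples scored by a perceptual reward
# (e.g. something like 1 - DreamSim distance; values are made up).
rewards = [0.62, 0.55, 0.70, 0.58]
adv = group_relative_advantages(rewards)

# VAR's multi-scale hierarchy: token counts per scale (1x1 up to 16x16, say).
tokens_per_scale = [1, 4, 16, 64, 256]
w = per_scale_weights(tokens_per_scale)

# Weighted per-token advantage for sample i at scale k would be adv[i] * w[k].
print(np.round(adv, 3), np.round(w, 3))
```

The group-relative normalization removes the need for a learned value baseline, and the per-scale weight keeps the handful of coarse-scale tokens from receiving negligible gradient signal relative to the hundreds of fine-scale tokens.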