StyleVAR: Controllable Image Style Transfer via Visual Autoregressive Modeling
arXiv cs.CV / 4/24/2026
📰 News · Models & Research
Key Points
- The paper proposes StyleVAR, a controllable image style transfer method built on the Visual Autoregressive Modeling (VAR) framework and cast as conditional discrete sequence modeling in a learned latent space.
- It tokenizes multi-scale image representations with a VQ-VAE, then a transformer autoregressively generates target tokens conditioned on both style and content through a blended cross-attention mechanism (see the tokenization sketch after this list).
- A scale-dependent blending coefficient balances the influence of style versus content at each scale, preserving VAR’s autoregressive continuity while matching both content structure and style texture (see the attention sketch below).
- StyleVAR is trained in two stages, supervised fine-tuning (SFT) on content–style–target triplets followed by reinforcement fine-tuning with GRPO under a DreamSim-based perceptual reward (GRPO sketch below), and outperforms an AdaIN baseline across multiple benchmarks.
- The results show strong qualitative transfer on landscapes and architectural scenes, but gaps on internet images and difficulty with human faces point to a need for greater content diversity in training data and stronger structural priors.
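
A minimal sketch of VAR-style multi-scale tokenization, under assumptions: the summary does not specify the tokenizer's details, and `multiscale_tokenize`, the scale schedule, and the codebook handling here are all hypothetical. The core idea is residual quantization, coarse to fine: at each scale the remaining residual latent is downsampled, snapped to the nearest codebook entry, and the quantized contribution is subtracted back out.

```python
import torch
import torch.nn.functional as F

def multiscale_tokenize(latent, codebook, scales=(1, 2, 4, 8, 16)):
    """Hypothetical VAR-style residual quantization: quantize the latent
    at coarse-to-fine scales, subtracting each scale's contribution."""
    B, C, H, W = latent.shape
    residual = latent
    tokens = []
    for s in scales:
        z = F.interpolate(residual, size=(s, s), mode="area")   # downsample
        flat = z.permute(0, 2, 3, 1).reshape(-1, C)             # (B*s*s, C)
        idx = torch.cdist(flat, codebook).argmin(dim=1)         # nearest code
        tokens.append(idx.view(B, s * s))
        q = codebook[idx].view(B, s, s, C).permute(0, 3, 1, 2)  # dequantize
        residual = residual - F.interpolate(q, size=(H, W), mode="bilinear")
    return tokens  # list of (B, s*s) token-id maps, coarse to fine
```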
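The blended conditioning can be pictured as two cross-attention streams, one over content tokens and one over style tokens, mixed by a per-scale coefficient. This is a hedged sketch, not the paper's implementation; `BlendedCrossAttention`, the learnable `alpha_logits`, and the residual update are assumptions consistent with the summary above.

```python
import torch
import torch.nn as nn

class BlendedCrossAttention(nn.Module):
    """Hypothetical blended conditioning: cross-attention over content and
    style tokens, mixed by a learnable scale-dependent coefficient."""

    def __init__(self, dim: int, num_heads: int = 8, num_scales: int = 10):
        super().__init__()
        self.attn_content = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_style = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # One blending logit per VAR scale; sigmoid keeps alpha in (0, 1).
        self.alpha_logits = nn.Parameter(torch.zeros(num_scales))

    def forward(self, x, content_tokens, style_tokens, scale_idx: int):
        # x: (B, N, dim) target-token embeddings at the current scale.
        c, _ = self.attn_content(x, content_tokens, content_tokens)
        s, _ = self.attn_style(x, style_tokens, style_tokens)
        alpha = torch.sigmoid(self.alpha_logits[scale_idx])
        # Coarse scales can lean toward content structure and fine scales
        # toward style texture, matching the scale-dependent balance above.
        return x + alpha * c + (1.0 - alpha) * s
```

In a full model this module would sit inside each transformer block, with `scale_idx` advancing as VAR moves from coarse to fine token maps.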
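For the reinforcement stage, GRPO computes advantages relative to a group of samples drawn for the same content–style pair. Below is a minimal sketch of that group normalization; the rewards are assumed to be DreamSim-derived perceptual scores (e.g., negative DreamSim distances), since the summary does not give the exact reward design.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: center and scale each sample's reward by
    its own group's statistics. rewards: (num_groups, group_size), e.g.
    negative DreamSim distances for stylizations of the same input pair."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)
```

Each advantage then weights the token log-probabilities in a clipped policy-gradient objective, as in standard GRPO.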