Image-to-Image Translation Framework Embedded with Rotation Symmetry Priors

arXiv cs.CV / 4/15/2026


Key Points

  • The paper proposes an image-to-image translation framework that embeds rotation symmetry priors using rotation group equivariant convolutions to preserve domain-invariant rotational structure end-to-end in the network.
  • It introduces “transformation learnable equivariant convolutions” (TL-Conv), which adaptively learns transformation groups to improve symmetry preservation across different datasets.
  • The authors provide theoretical guarantees, including exact equivariance in continuous domains and an error bound for discrete settings, based on an equivariance error analysis of TL-Conv.
  • Extensive experiments across multiple I2I tasks reportedly show improved generation quality and demonstrate the approach’s broad applicability, with code released on GitHub.

Abstract

Image-to-image translation (I2I) is a fundamental task in computer vision, focused on mapping an input image from a source domain to a corresponding image in a target domain while preserving domain-invariant features and adapting domain-specific attributes. Despite the remarkable success of deep learning-based I2I approaches, the scarcity of paired data and the limitations of unsupervised learning frameworks still hinder their effectiveness. In this work, we address this challenge by incorporating transformation symmetry priors into image-to-image translation networks. Specifically, we introduce rotation group equivariant convolutions to build a rotation-equivariant I2I framework, which is, to the best of our knowledge, a novel contribution along this research direction. This design ensures the preservation of rotation symmetry, one of the most intrinsic and domain-invariant properties of natural and scientific images, throughout the network. Furthermore, we conduct a systematic study of image symmetry priors on real datasets and propose novel transformation-learnable equivariant convolutions (TL-Conv) that adaptively learn transformation groups, enhancing symmetry preservation across diverse datasets. We also provide a theoretical analysis of the equivariance error of TL-Conv, proving that it maintains exact equivariance in continuous domains and providing a bound on the error in discrete cases. Through extensive experiments across a range of I2I tasks, we validate the effectiveness and superior performance of our approach, highlighting the potential of equivariant networks to enhance generation quality and their broad applicability. Our code is available at https://github.com/tanfy929/Equivariant-I2I
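The paper's exact layer design is not reproduced here, but the core idea behind rotation group equivariant convolutions can be illustrated with a minimal NumPy sketch of a C4 (90°-rotation group) lifting convolution: the same filter is applied at four rotations, and rotating the input image yields an output stack that is itself rotated and cyclically shifted along the group axis. All function names below are illustrative, not from the authors' code:

```python
import numpy as np

def conv2d(image, kernel):
    """Plain 'valid' 2D cross-correlation (loop version for clarity)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def c4_lifting_conv(image, kernel):
    """Apply the kernel rotated by 0/90/180/270 degrees; stack the 4 responses."""
    return np.stack([conv2d(image, np.rot90(kernel, k)) for k in range(4)])

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
ker = rng.normal(size=(3, 3))

out = c4_lifting_conv(img, ker)            # shape (4, 6, 6)
out_rot = c4_lifting_conv(np.rot90(img), ker)

# Equivariance: rotating the input by 90 degrees equals rotating each response
# map by 90 degrees and cyclically shifting the group (rotation) axis by one.
expected = np.stack([np.rot90(out[(k - 1) % 4]) for k in range(4)])
equivariant = np.allclose(out_rot, expected)
```

The TL-Conv variant described in the paper would, per the abstract, make the transformation group itself learnable rather than fixing it to C4 as above; the equivariance check is the same in spirit, with the discrete-domain error bounded rather than exactly zero.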