AI Navigate

Component-Aware Sketch-to-Image Generation Using Self-Attention Encoding and Coordinate-Preserving Fusion

arXiv cs.CV / 3/11/2026

Models & Research

Key Points

  • The paper introduces a novel two-stage framework for translating freehand sketches into photorealistic images, addressing challenges related to abstractness, sparsity, and style diversity in sketches.
  • The framework includes a Self-Attention-based Autoencoder Network (SA2N) for component-wise feature extraction, a Coordinate-Preserving Gated Fusion (CGF) module for spatial layout integration, and a Spatially Adaptive Refinement Revisor (SARR) based on StyleGAN2 for iterative refinement; a rough sketch of the first two ideas follows this list.
  • Extensive experiments on multiple facial and non-facial datasets demonstrate that the proposed method significantly outperforms existing GAN and diffusion-based models in terms of fidelity, semantic accuracy, and perceptual quality.
  • The method reports sizeable gains on CelebAMask-HQ over prior approaches: 21% better FID, 58% better IS, 41% better KID, and 20% better SSIM (see the note on metric direction after this list).
  • The framework shows promise for applications in forensics, digital art restoration, and general sketch-based image synthesis due to its robustness, efficiency, and cross-domain generalizability.
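
As a concrete illustration of the first two components, here is a minimal PyTorch sketch: self-attention applied over one component's feature map, and a gated fusion that concatenates normalized coordinate channels so the gate stays aware of spatial layout. This is a sketch under assumptions, not the paper's implementation; the class names, channel sizes, and the CoordConv-style coordinate grid are all invented for illustration.

```python
# Minimal sketch of self-attention encoding + coordinate-preserving gated
# fusion. NOT the paper's code: ComponentSelfAttention, CoordGatedFusion,
# and the coordinate-grid scheme are assumptions for illustration only.
import torch
import torch.nn as nn

class ComponentSelfAttention(nn.Module):
    """Self-attention over the spatial positions of a component feature map."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)        # (B, H*W, C) token sequence
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)        # residual + layer norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class CoordGatedFusion(nn.Module):
    """Per-pixel gated blend of component and global features; two normalized
    coordinate channels are concatenated so the gate can condition on layout."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels + 2, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, comp: torch.Tensor, glob: torch.Tensor) -> torch.Tensor:
        b, _, h, w = comp.shape
        ys = torch.linspace(-1, 1, h, device=comp.device)
        xs = torch.linspace(-1, 1, w, device=comp.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([gx, gy]).unsqueeze(0).expand(b, -1, -1, -1)
        g = self.gate(torch.cat([comp, glob, coords], dim=1))
        return g * comp + (1 - g) * glob             # position-wise blend

# Usage: encode a 32x32 component feature map, then fuse it into a global map.
comp = ComponentSelfAttention(64)(torch.randn(1, 64, 32, 32))
fused = CoordGatedFusion(64)(comp, torch.randn(1, 64, 32, 32))
```

The per-pixel sigmoid gate blends component detail with the global map position by position, which is one plausible reading of "coordinate-preserving gated fusion".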
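
One caveat when reading the percentages above: FID and KID are lower-is-better, while IS and SSIM are higher-is-better, so "21% better FID" means a 21% reduction. A small direction-aware helper makes the arithmetic explicit; the baseline numbers below are made-up placeholders, not values from the paper.

```python
# Direction-aware percent improvement. Baselines are placeholders for
# illustration; the paper's raw scores are not quoted in this digest.
def improvement(baseline: float, ours: float, lower_is_better: bool) -> float:
    delta = (baseline - ours) if lower_is_better else (ours - baseline)
    return 100.0 * delta / baseline

print(improvement(20.0, 15.8, lower_is_better=True))   # FID-style: ~21.0%
print(improvement(2.4, 3.8, lower_is_better=False))    # IS-style:  ~58.3%
```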


arXiv:2603.09484 (cs)
[Submitted on 10 Mar 2026]

Title: Component-Aware Sketch-to-Image Generation Using Self-Attention Encoding and Coordinate-Preserving Fusion

Authors: Ali Zia and 5 other authors
Abstract: Translating freehand sketches into photorealistic images remains a fundamental challenge in image synthesis, particularly due to the abstract, sparse, and stylistically diverse nature of sketches. Existing approaches, including GAN-based and diffusion-based models, often struggle to reconstruct fine-grained details, maintain spatial alignment, or adapt across different sketch domains. In this paper, we propose a component-aware, self-refining framework for sketch-to-image generation that addresses these challenges through a novel two-stage architecture. A Self-Attention-based Autoencoder Network (SA2N) first captures localised semantic and structural features from component-wise sketch regions, while a Coordinate-Preserving Gated Fusion (CGF) module integrates these into a coherent spatial layout. Finally, a Spatially Adaptive Refinement Revisor (SARR), built on a modified StyleGAN2 backbone, enhances realism and consistency through iterative refinement guided by spatial context. Extensive experiments across both facial (CelebAMask-HQ, CUFSF) and non-facial (Sketchy, ChairsV2, ShoesV2) datasets demonstrate the robustness and generalizability of our method. The proposed framework consistently outperforms state-of-the-art GAN and diffusion models, achieving significant gains in image fidelity, semantic accuracy, and perceptual quality. On CelebAMask-HQ, our model improves over prior methods by 21% (FID), 58% (IS), 41% (KID), and 20% (SSIM). These results, along with higher efficiency and visual coherence across diverse domains, position our approach as a strong candidate for applications in forensics, digital art restoration, and general sketch-based image synthesis.
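
The "iterative refinement guided by spatial context" in the SARR stage can be pictured as a loop in which a revisor network repeatedly applies a residual correction to the current image. The toy stand-in below is a sketch under stated assumptions, not the modified StyleGAN2 backbone the paper describes; `Revisor` and its channel sizes are invented for illustration.

```python
# Toy iterative-refinement loop in the spirit of the SARR stage. Each pass
# predicts a residual correction conditioned on the current image and a
# spatial-context map. `Revisor` is a hypothetical stand-in module.
import torch
import torch.nn as nn

class Revisor(nn.Module):
    def __init__(self, img_ch: int = 3, ctx_ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + ctx_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, img_ch, 3, padding=1),
        )

    def forward(self, img, ctx):
        return img + self.net(torch.cat([img, ctx], dim=1))  # residual update

revisor = Revisor()
img = torch.rand(1, 3, 64, 64)      # coarse output of the fusion stage
ctx = torch.randn(1, 64, 64, 64)    # spatial context (e.g. fused features)
for _ in range(3):                  # a few refinement passes
    img = revisor(img, ctx).clamp(0, 1)
```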
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09484 [cs.CV]
  (or arXiv:2603.09484v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09484

Submission history

From: Muhammad Umer Ramzan
[v1] Tue, 10 Mar 2026 10:39:24 UTC (21,576 KB)