AI Navigate

IntroSVG: Learning from Rendering Feedback for Text-to-SVG Generation via an Introspective Generator-Critic Framework

arXiv cs.CV / 2026-03-11

Ideas & Deep Analysis · Models & Research

Key Points

  • IntroSVG is a novel introspective SVG generation framework that strengthens text-to-SVG generation by incorporating visual feedback into the training loop.
  • The framework employs a unified visual language model acting as both generator and critic, enabling a generate-review-refine cycle that iteratively improves the quality of SVG outputs.
  • The model is fine-tuned with supervised learning on both SVG drafting and critiquing of rendered images, converting early-stage failures into valuable error-correction training data.
  • Direct Preference Optimization against a high-capacity teacher VLM further aligns the generator's policy, yielding semantically well-aligned, complex, and editable SVG images.
  • Experimental results show state-of-the-art performance, confirming that integrating explicit visual feedback substantially improves text-to-SVG generation.
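The generate-review-refine cycle described above can be sketched as a simple inference loop. The `generate`, `critique`, and `refine` callables below are hypothetical stand-ins for prompts to the unified VLM (plus an SVG renderer feeding the critic); they are not the paper's actual API.

```python
def introspective_generate(prompt, generate, critique, refine, max_rounds=3):
    """Iterative generate-review-refine loop (sketch, not the paper's code).

    generate(prompt)             -> initial SVG draft
    critique(prompt, svg)        -> (ok, feedback), judged from the rendered image
    refine(prompt, svg, feedback)-> improved SVG incorporating the feedback
    """
    svg = generate(prompt)
    for _ in range(max_rounds):
        ok, feedback = critique(prompt, svg)
        if ok:  # critic accepts the rendered result; stop early
            break
        svg = refine(prompt, svg, feedback)
    return svg
```

The bounded `max_rounds` mirrors the paper's description of starting from imperfect intermediate drafts and autonomously improving them, while guaranteeing termination even when the critic never accepts.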

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.09312 (cs)
[Submitted on 10 Mar 2026]

Title: IntroSVG: Learning from Rendering Feedback for Text-to-SVG Generation via an Introspective Generator-Critic Framework

By Feiyu Wang and 6 other authors
Abstract:Scalable Vector Graphics (SVG) are central to digital design due to their inherent scalability and editability. Despite significant advancements in content generation enabled by Visual Language Models (VLMs), existing text-to-SVG generation methods are limited by a core challenge: the autoregressive training process does not incorporate visual perception of the final rendered image, which fundamentally constrains generation quality. To address this limitation, we propose an Introspective SVG Generation Framework (IntroSVG). At its core, the framework instantiates a unified VLM that operates in a closed loop, assuming dual roles of both generator and critic. Specifically, through Supervised Fine-Tuning (SFT), the model learns to draft SVGs and to provide feedback on their rendered outputs; moreover, we systematically convert early-stage failures into high-quality error-correction training data, thereby enhancing model robustness. Subsequently, we leverage a high-capacity teacher VLM to construct a preference dataset and further align the generator's policy through Direct Preference Optimization (DPO). During inference, the optimized generator and critic operate collaboratively in an iterative "generate-review-refine" cycle, starting from imperfect intermediate drafts to autonomously improve output quality. Experimental results demonstrate that our method achieves state-of-the-art performance across several key evaluation metrics, generating SVGs with more complex structures, stronger semantic alignment, and greater editability. These results corroborate the effectiveness of incorporating explicit visual feedback into the generation loop.
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09312 [cs.CV]
  (or arXiv:2603.09312v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09312
arXiv-issued DOI via DataCite

Submission history

From: Feiyu Wang [view email]
[v1] Tue, 10 Mar 2026 07:44:51 UTC (1,458 KB)