AnyDoc: Enhancing Document Generation via Large-Scale HTML/CSS Data Synthesis and Height-Aware Reinforcement Optimization

arXiv cs.CV / 3/27/2026


Key Points

  • AnyDoc is a document-generation framework that unifies multiple document tasks into a single HTML/CSS representation, covering a wide range of document categories and styles.
  • The project introduces a scalable HTML/CSS data synthesis pipeline to create DocHTML, a large dataset with 265,206 samples across 111 categories and 32 styles, including rich metadata (intentions, source code, assets, and screenshots).
  • AnyDoc fine-tunes multimodal LLMs for three tasks: intention-to-document, document derendering, and element-to-document.
  • To reduce overflow during fine-tuning, the method adds height-aware reinforcement learning (HARL) that penalizes differences in predicted vs. target document height.
  • Experiments reportedly show AnyDoc outperforming both general-purpose MLLMs and task-specific baselines across all three document generation tasks.

Abstract

Document generation has gained growing attention in the field of AI-driven content creation. In this work, we push its boundaries by introducing AnyDoc, a framework capable of handling multiple generation tasks across a wide spectrum of document categories, all represented in a unified HTML/CSS format. To overcome the limited coverage and scale of existing human-crafted document datasets, AnyDoc first establishes a scalable data synthesis pipeline to automatically generate documents in HTML/CSS form. This pipeline yields DocHTML, a large-scale dataset containing 265,206 document samples spanning 111 categories and 32 distinct styles. Additionally, all documents are equipped with comprehensive metadata, including design intentions, HTML/CSS source code, visual assets, and rendered screenshots. Building on the curated dataset, AnyDoc fine-tunes multimodal large language models (MLLMs) to achieve three practical document generation tasks: intention-to-document, document derendering, and element-to-document. To address the content overflow issue observed during fine-tuning, AnyDoc further incorporates a height-aware reinforcement learning (HARL) post-training procedure. By defining a reward function based on the difference between predicted and target document heights, overflow is penalized and gradually mitigated during HARL, thereby enhancing overall performance. Qualitative and quantitative experiments demonstrate that AnyDoc outperforms both general-purpose MLLMs and task-specific baselines across all three tasks.
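The abstract only says the HARL reward is "based on the difference between predicted and target document heights." The paper's exact formula is not given here, so the following is a minimal illustrative sketch, assuming a reward that decays linearly with the relative height mismatch between the rendered output and the target document; the function name and the clipping scheme are assumptions, not the authors' definition.

```python
def height_reward(pred_height: float, target_height: float) -> float:
    """Hypothetical height-aware reward for HARL-style post-training.

    Returns 1.0 for an exact height match and decays linearly to 0.0
    as the predicted (rendered) height drifts from the target, so
    overflowing outputs receive a lower reward.
    """
    # Relative height error; guard against a zero-height target.
    rel_err = abs(pred_height - target_height) / max(target_height, 1e-6)
    # Clip so the reward stays in [0, 1].
    return max(0.0, 1.0 - rel_err)


# Example: an output that overflows to 1.5x the target height
# gets half the reward of an exact match.
print(height_reward(1000.0, 1000.0))  # 1.0
print(height_reward(1500.0, 1000.0))  # 0.5
```

In a real pipeline, `pred_height` would come from rendering the generated HTML/CSS (e.g., via a headless browser) and this scalar would be combined with other reward terms during RL post-training.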