AeSlides: Incentivizing Aesthetic Layout in LLM-Based Slide Generation via Verifiable Rewards

arXiv cs.CV / April 28, 2026


Key Points

  • AeSlides addresses the “modality gap” in LLM slide generation by adding explicit, aesthetic-layout supervision rather than relying on text-only training or costly visual reflection.
  • The framework proposes a set of carefully designed, verifiable metrics that quantify slide layout quality (e.g., aspect ratio compliance, whitespace usage, element collisions, and visual balance) with low inference cost.
  • It uses GRPO-based reinforcement learning to directly optimize slide-generation models for aesthetically coherent layouts using these verifiable rewards.
  • Experiments show that with only 5K training prompts on GLM-4.7-Flash, AeSlides substantially improves layout outcomes (aspect ratio compliance 36%→85%) and reduces layout defects (whitespace −44%, collisions −43%, imbalance −28%).
  • Human evaluation indicates a clear overall quality gain (3.31→3.56, +7.6%) and the approach outperforms other reward/agentic methods, with results that even slightly surpass Claude-Sonnet-4.5; the code is released on GitHub.
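The paper's exact metric definitions are not reproduced in this summary, but the appeal of "verifiable" layout rewards is that they can be computed cheaply from element bounding boxes, with no vision model in the loop. A minimal sketch of two such checks (all names, the `Box` schema, and the tolerance values are hypothetical illustrations, not the paper's code):

```python
from dataclasses import dataclass


@dataclass
class Box:
    """Axis-aligned bounding box of a slide element (hypothetical schema)."""
    x: float
    y: float
    w: float
    h: float


def overlap_area(a: Box, b: Box) -> float:
    """Intersection area of two boxes; 0.0 if they do not overlap."""
    dx = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)
    dy = min(a.y + a.h, b.y + b.h) - max(a.y, b.y)
    return max(dx, 0.0) * max(dy, 0.0)


def collision_penalty(boxes: list[Box]) -> float:
    """Total pairwise overlap area: one cheap, verifiable collision signal."""
    return sum(
        overlap_area(boxes[i], boxes[j])
        for i in range(len(boxes))
        for j in range(i + 1, len(boxes))
    )


def aspect_ratio_ok(w: float, h: float, target: float = 16 / 9,
                    tol: float = 0.01) -> bool:
    """Check a slide's aspect ratio against a target (e.g., 16:9) within a
    relative tolerance."""
    return abs(w / h - target) <= tol * target
```

Checks like these run in microseconds per slide, which is what makes them practical as per-sample RL rewards, in contrast to invoking a vision-language model for reflection on every candidate.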

Abstract

Large language models (LLMs) have demonstrated strong potential in agentic tasks, particularly in slide generation. However, slide generation poses a fundamental challenge: the generation process is text-centric, whereas its quality is governed by visual aesthetics. This modality gap leads current models to frequently produce slides with aesthetically suboptimal layouts. Existing solutions typically rely either on heavy visual reflection, which incurs high inference cost yet yields limited gains; or on fine-tuning with large-scale datasets, which still provides weak and indirect aesthetic supervision. In contrast, the explicit use of aesthetic principles as supervision remains unexplored. In this work, we present AeSlides, a reinforcement learning framework with verifiable rewards for Aesthetic layout supervision in Slide generation. We introduce a suite of meticulously designed verifiable metrics to quantify slide layout quality, capturing key layout issues in an accurate, efficient, and low-cost manner. Leveraging these verifiable metrics, we develop a GRPO-based reinforcement learning method that directly optimizes slide generation models for aesthetically coherent layouts. With only 5K training prompts on GLM-4.7-Flash, AeSlides improves aspect ratio compliance from 36% to 85%, while reducing whitespace by 44%, element collisions by 43%, and visual imbalance by 28%. Human evaluation further shows a substantial improvement in overall quality, increasing scores from 3.31 to 3.56 (+7.6%), outperforming both model-based reward optimization and reflection-based agentic approaches, and even edging out Claude-Sonnet-4.5. These results demonstrate that such a verifiable aesthetic paradigm provides an efficient and scalable approach to aligning slide generation with human aesthetic preferences. Our repository is available at https://github.com/ympan0508/aeslides.
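GRPO itself is not detailed in this summary. At its core, it scores a group of sampled outputs for the same prompt and standardizes each reward against the group's mean and standard deviation, so no learned value model is needed. A minimal sketch of that group-relative advantage step (the example reward values are hypothetical, not results from the paper):

```python
import statistics


def grpo_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Group-relative advantages as used in GRPO-style training: standardize
    each sampled output's scalar reward within its group. Sketch only."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)  # population std over the group
    return [(r - mean) / (std + eps) for r in rewards]


# e.g., verifiable layout rewards for four slide candidates from one prompt
adv = grpo_advantages([0.2, 0.8, 0.5, 0.5])
```

Candidates whose layout metrics beat the group average get positive advantages and are reinforced; below-average layouts are pushed down, which is how the verifiable metrics translate directly into a training signal.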