Factual and Edit-Sensitive Graph-to-Sequence Generation via Graph-Aware Adaptive Noising
arXiv cs.CL / 4/28/2026
Key Points
- The paper introduces DLM4G, a non-autoregressive diffusion-based model for graph-to-sequence (G2S) generation that targets two key weaknesses of fine-tuned autoregressive approaches: factual grounding and edit sensitivity.
- DLM4G uses graph-to-sequence alignment and an adaptive noising scheme that adjusts noise per token based on denoising error, helping preserve graph structure during generation.
- The method supports localized updates under graph edits, aiming to improve how generated text changes when the input graph is modified.
- Across three datasets, DLM4G outperforms other diffusion-based G2S baselines on both surface-form and embedding-based metrics, and it also surpasses fine-tuned autoregressive baselines, including models at much larger scales.
- The authors report improvements over strong PLM and diffusion baselines in factual grounding (FGT@0.5) and edit sensitivity (ESR), and they demonstrate generality by extending experiments beyond purely textual graphs to molecule captioning.
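The adaptive noising idea described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the paper's actual implementation: it assumes each token carries a recent denoising-error score and that higher-error tokens (often graph-grounded content words) receive less added noise so structure is preserved. The function name, parameters (`alpha`, `t_min`, `t_max`), and the direction of the scaling are all assumptions for illustration.

```python
import numpy as np

def adaptive_noise_levels(denoise_errors, base_t, alpha=0.5, t_min=0.05, t_max=1.0):
    """Hypothetical per-token noise schedule for a diffusion language model.

    Tokens with higher denoising error are assumed to be harder /
    more structurally important, so they get a lower noise level
    (scaled down by up to `alpha`); easier tokens keep ~base_t noise.
    """
    errors = np.asarray(denoise_errors, dtype=float)
    # Normalize errors to [0, 1] across the sequence.
    norm = (errors - errors.min()) / (np.ptp(errors) + 1e-8)
    # High-error tokens -> lower noise; clip to a valid noise range.
    t = base_t * (1.0 - alpha * norm)
    return np.clip(t, t_min, t_max)

# Example: three tokens with increasing denoising error.
levels = adaptive_noise_levels([0.1, 0.5, 0.9], base_t=0.8)
```

With `base_t=0.8` and `alpha=0.5`, the highest-error token's noise level is halved relative to the lowest-error one, which is one simple way a scheme like this could bias the diffusion process toward preserving graph-aligned tokens.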