iTAG: Inverse Design for Natural Text Generation with Accurate Causal Graph Annotations
arXiv cs.CL / 4/9/2026
Key Points
- The paper introduces iTAG, a method to generate natural-language text paired with accurate causal graph annotations, addressing the lack of causally annotated ground-truth text due to high labeling costs.
- Unlike earlier template-based approaches (which improve annotation accuracy at the expense of naturalness) and LLM-only approaches (which may not guarantee annotation correctness), iTAG assigns real-world concepts to graph nodes first and then converts the graph into text.
- iTAG treats concept selection as an inverse problem and iteratively refines node concept choices using Chain-of-Thought reasoning so that the induced relations match the target causal graph as closely as possible.
- Experiments report both very high causal annotation accuracy and strong text naturalness, and downstream testing shows that causal-discovery results on the generated data are statistically correlated with results on real-world data for text-based causal discovery.
- The authors argue that iTAG-generated datasets can act as a scalable surrogate benchmark for evaluating text-based causal discovery algorithms.
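The inverse-design loop described in the key points can be sketched in simplified form. This is a hypothetical toy, not the paper's implementation: the Chain-of-Thought LLM judgment of whether a concept pair is causally plausible is emulated by a fixed lookup table (`PLAUSIBLE`), and the iterative refinement is replaced by brute-force search over a small candidate pool; all names (`TARGET_EDGES`, `refine`, the example concepts) are invented for illustration.

```python
# Toy sketch of iTAG-style inverse concept selection (hypothetical
# simplification; the paper prompts an LLM with Chain-of-Thought
# reasoning, emulated here by a fixed plausibility table).
from itertools import product

# Target causal graph over abstract nodes: A -> B -> C.
TARGET_EDGES = {("A", "B"), ("B", "C")}

# Stand-in for LLM world knowledge: concept pairs judged to stand
# in a plausible causal relation.
PLAUSIBLE = {
    ("smoking", "cancer"),
    ("cancer", "mortality"),
    ("rain", "wet roads"),
}

CANDIDATES = ["smoking", "cancer", "mortality", "rain"]

def induced_edges(assignment, target_edges):
    """Target edges whose assigned concept pair is causally plausible."""
    return {
        (u, v) for (u, v) in target_edges
        if (assignment[u], assignment[v]) in PLAUSIBLE
    }

def score(assignment):
    """Fraction of target edges realised by the concept assignment."""
    return len(induced_edges(assignment, TARGET_EDGES)) / len(TARGET_EDGES)

def refine(nodes, candidates):
    """Search concept assignments for the best match to the target graph.
    (The paper refines iteratively with an LLM; brute force suffices
    for this toy example.)"""
    best, best_score = None, -1.0
    for combo in product(candidates, repeat=len(nodes)):
        if len(set(combo)) < len(nodes):  # concepts must be distinct
            continue
        assignment = dict(zip(nodes, combo))
        s = score(assignment)
        if s > best_score:
            best, best_score = assignment, s
    return best, best_score

best, best_score = refine(["A", "B", "C"], CANDIDATES)
print(best, best_score)
# -> {'A': 'smoking', 'B': 'cancer', 'C': 'mortality'} 1.0
```

Once an assignment realising all target edges is found, the annotated graph can be verbalised into natural text, giving text whose ground-truth causal structure is known by construction.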