iTAG: Inverse Design for Natural Text Generation with Accurate Causal Graph Annotations

arXiv cs.CL / 4/9/2026


Key Points

  • The paper introduces iTAG, a method to generate natural-language text paired with accurate causal graph annotations, addressing the lack of causally annotated ground-truth text due to high labeling costs.
  • Unlike earlier template-based approaches (which improve annotation accuracy at the expense of naturalness) and LLM-only approaches (which may not guarantee annotation correctness), iTAG assigns real-world concepts to graph nodes first and then converts the graph into text.
  • iTAG treats concept selection as an inverse problem and iteratively refines node concept choices using Chain-of-Thought reasoning so that the induced relations match the target causal graph as closely as possible.
  • Experiments report both very high causal annotation accuracy and strong text naturalness, and downstream testing shows that results of text-based causal discovery on the generated data correlate strongly with results on real-world data.
  • The authors argue that iTAG-generated datasets can act as a scalable surrogate benchmark for evaluating text-based causal discovery algorithms.

Abstract

A fundamental obstacle to causal discovery from text is the lack of causally annotated text data for use as ground truth, owing to high annotation costs. This motivates the task of generating text with causal graph annotations. Early template-based generation methods sacrifice text naturalness in exchange for high annotation accuracy, while recent Large Language Model (LLM)-dependent methods generate natural text directly from target graphs but do not guarantee annotation accuracy. We therefore propose iTAG, which augments existing LLM-dependent methods with a real-world concept assignment step for graph nodes before the causal graph is converted into text. iTAG frames this assignment as an inverse problem with the causal graph as the target, iteratively examining and refining concept selection through Chain-of-Thought (CoT) reasoning so that the induced relations between concepts are as consistent as possible with the target causal relationships described by the graph. In extensive experiments, iTAG demonstrates both very high annotation accuracy and strong text naturalness, and results from testing text-based causal discovery algorithms on the generated data show high statistical correlation with results on real-world data. This suggests that iTAG-generated data can serve as a practical surrogate for scalable benchmarking of text-based causal discovery algorithms.
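
To make the inverse-problem framing concrete, here is a minimal, hedged sketch of the concept-assignment step. Everything in it is an illustrative assumption rather than the paper's implementation: the node ids, candidate pools, and the `KNOWN_CAUSES` lookup table stand in for the LLM's CoT judgment of whether one concept plausibly causes another, and exhaustive search stands in for iterative refinement.

```python
import itertools

# Target causal graph: directed edges over abstract node ids (assumed example).
TARGET_EDGES = [("A", "B"), ("B", "C")]

# Candidate real-world concepts for each node (assumed pools).
CANDIDATES = {
    "A": ["smoking", "exercise"],
    "B": ["tar deposits", "lung capacity"],
    "C": ["lung cancer", "stamina"],
}

# Toy stand-in for the LLM's CoT judgment of causal plausibility
# between two concepts; the paper queries an LLM instead.
KNOWN_CAUSES = {
    ("smoking", "tar deposits"),
    ("tar deposits", "lung cancer"),
    ("exercise", "lung capacity"),
}

def consistency(assignment, edges):
    """Fraction of target edges whose induced concept pair is causally plausible."""
    hits = sum((assignment[u], assignment[v]) in KNOWN_CAUSES for u, v in edges)
    return hits / len(edges)

def assign_concepts(edges, candidates):
    """Search concept assignments, keeping the one most consistent with the
    target graph. (The paper refines iteratively via CoT; for this toy pool,
    exhaustive search plays that role.)"""
    nodes = list(candidates)
    best, best_score = None, -1.0
    for combo in itertools.product(*(candidates[n] for n in nodes)):
        assignment = dict(zip(nodes, combo))
        score = consistency(assignment, edges)
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score

assignment, score = assign_concepts(TARGET_EDGES, CANDIDATES)
print(assignment, score)
# → {'A': 'smoking', 'B': 'tar deposits', 'C': 'lung cancer'} 1.0
```

Once an assignment scores well, the concept-labeled graph (not the bare node ids) is what gets handed to the LLM for text generation, which is where the annotation-accuracy gain over graph-to-text-only pipelines comes from.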