Synthetic Data Generation for Training Diversified Commonsense Reasoning Models

arXiv cs.CL / 3/20/2026

Key Points

  • The paper proposes a two-stage method to generate CommonSyn, the first large synthetic dataset for diversified Generative Commonsense Reasoning (GCR).
  • It targets overcoming annotation costs and narrow diversity in existing GCR datasets by providing scalable synthetic data.
  • Experiments show that fine-tuning on CommonSyn improves both generation diversity and quality compared with vanilla models and models fine-tuned on human-crafted datasets, across various LLM sizes.
  • The work could advance conversational agents by enabling them to reason over multiple plausible scenarios and produce more diverse responses.

Abstract

Conversational agents are required to respond to their users not only with high-quality (i.e., commonsense-bearing) responses, but also to consider multiple plausible alternative scenarios, reflecting diversity in their responses. Despite the growing need to train diverse commonsense generators, progress in this line of work has been significantly hindered by the lack of large-scale, high-quality, diverse commonsense training datasets. Due to high annotation costs, existing Generative Commonsense Reasoning (GCR) datasets are created by a small number of human annotators and cover only a narrow set of commonsense scenarios. To address this training resource gap, we propose a two-stage method to create CommonSyn, the first synthetic dataset for diversified GCR. Models fine-tuned on our synthetic data jointly increase both generation diversity and quality compared with vanilla models and models fine-tuned on a human-crafted dataset, across Large Language Models (LLMs) of different sizes.