Design and evaluation of an agentic workflow for crisis-related synthetic tweet datasets

arXiv cs.CL / 3/17/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • The article presents an agentic workflow for generating crisis-related synthetic tweet datasets to overcome real-data access and annotation limitations.
  • It describes an iterative process where synthetic tweets are conditioned on target characteristics, evaluated with compliance checks, and refined over subsequent iterations.
  • A case study on post-earthquake damage assessment demonstrates that the synthetic data can encode labels like location and damage level.
  • The authors argue that these synthetic datasets offer a flexible, scalable alternative for evaluating AI systems on tasks such as geolocation and damage prediction across diverse crisis contexts.
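The iterative loop in the second bullet can be sketched in a few lines. Everything below is a hypothetical stand-in, not the authors' implementation: in the actual workflow an LLM generates the tweet from the target characteristics plus structured feedback, whereas here a stub generator and a keyword compliance check stand in for both.

```python
def check_compliance(tweet: str, target: dict) -> list[str]:
    """Return human-readable violations; an empty list means compliant.
    (Toy check: the real workflow uses predefined compliance checks.)"""
    issues = []
    if target["location"].lower() not in tweet.lower():
        issues.append(f"missing location '{target['location']}'")
    if target["damage_level"].lower() not in tweet.lower():
        issues.append(f"missing damage level '{target['damage_level']}'")
    return issues

def generate_tweet(target: dict, feedback: list[str]) -> str:
    """Stub generator: the first draft is deliberately incomplete, and the
    structured feedback drives refinement, mimicking how check results
    would be folded back into an LLM prompt."""
    tweet = "Just felt a strong earthquake here."
    for issue in feedback:
        if "location" in issue:
            tweet += f" We're in {target['location']}."
        if "damage level" in issue:
            tweet += f" Damage looks {target['damage_level']}."
    return tweet

def synthesize(target: dict, max_iters: int = 3) -> str:
    """Generate, check, and refine until compliant or out of iterations."""
    feedback: list[str] = []
    tweet = ""
    for _ in range(max_iters):
        tweet = generate_tweet(target, feedback)
        feedback = check_compliance(tweet, target)
        if not feedback:
            break
    return tweet
```

With a target like `{"location": "Napa", "damage_level": "severe"}`, the first draft fails both checks, and the second iteration incorporates the feedback and passes; the paper's point is that this generate-evaluate-refine cycle yields tweets that reliably carry their intended labels.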

Abstract

Twitter (now X) has become an important source of social media data for situational awareness during crises. Crisis informatics research has widely used tweets from Twitter to develop and evaluate artificial intelligence (AI) systems for various crisis-relevant tasks, such as extracting locations and estimating damage levels from tweets to support damage assessment. However, recent changes in Twitter's data access policies have made it increasingly difficult to curate real-world tweet datasets related to crises. Moreover, existing curated tweet datasets are limited to past crisis events in specific contexts and are costly to annotate at scale. These limitations constrain the development and evaluation of AI systems used in crisis informatics. To address these limitations, we introduce an agentic workflow for generating crisis-related synthetic tweet datasets. The workflow iteratively generates synthetic tweets conditioned on prespecified target characteristics, evaluates them using predefined compliance checks, and incorporates structured feedback to refine them in subsequent iterations. As a case study, we apply the workflow to generate synthetic tweet datasets relevant to post-earthquake damage assessment. We show that the workflow can generate synthetic tweets that capture their target labels for location and damage level. We further demonstrate that the resulting synthetic tweet datasets can be used to evaluate AI systems on damage assessment tasks like geolocalization and damage level prediction. Our results indicate that the workflow offers a flexible and scalable alternative to real-world tweet data curation, enabling the systematic generation of synthetic social media data across diverse crisis events, societal contexts, and crisis informatics applications.
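Because each synthetic tweet is generated against known target labels, it doubles as ground truth for evaluation. A minimal sketch of that use, assuming a hypothetical record format and a toy keyword classifier in place of the AI systems the paper actually evaluates:

```python
# Hypothetical synthetic records; in practice these come from the workflow.
synthetic_dataset = [
    {"text": "Walls cracked and the roof collapsed downtown.", "damage_level": "severe"},
    {"text": "A few shelves fell over, nothing structural.", "damage_level": "minor"},
    {"text": "Felt shaking but no visible damage at all.", "damage_level": "none"},
]

def predict_damage(text: str) -> str:
    """Toy keyword baseline standing in for the system under evaluation."""
    t = text.lower()
    if "collapsed" in t or "cracked" in t:
        return "severe"
    if "fell" in t:
        return "minor"
    return "none"

# Score the system against the labels the tweets were conditioned on.
correct = sum(predict_damage(r["text"]) == r["damage_level"] for r in synthetic_dataset)
accuracy = correct / len(synthetic_dataset)
```

The same pattern extends to geolocalization: compare predicted locations against the location labels the tweets were generated to encode, across whatever crisis contexts the workflow was asked to cover.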