CIG: Measuring Conversational Information Gain in Deliberative Dialogues with Semantic Memory Dynamics
arXiv cs.CL / 4/20/2026
Key Points
- The paper proposes a Conversational Information Gain (CIG) framework to measure how much each utterance advances a group’s understanding during public deliberation, beyond civility or argument structure.
- It operationalizes CIG by building an evolving semantic memory from utterances, extracting atomic claims and consolidating them into an incrementally updated structured state.
- Each utterance is scored on three interpretable dimensions—Novelty, Relevance, and Implication Scope—using the constructed memory dynamics.
- Experiments on 80 annotated dialogue segments from two moderated settings (TV debates and community discussions) show that memory-based signals (e.g., claim updates) correlate more strongly with human judgments of CIG than heuristics like utterance length or TF-IDF.
- The authors also train LLM-based CIG predictors, enabling automated, information-focused evaluation of dialogue quality in deliberation analysis.
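The memory-update loop behind these key points can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's method: claim extraction is stubbed as sentence splitting (the paper uses LLM-based atomic-claim extraction), only the Novelty and Relevance dimensions are approximated, and Implication Scope is omitted.

```python
from dataclasses import dataclass, field

def extract_claims(utterance: str) -> set[str]:
    """Stub (assumption): treat each sentence as one atomic claim."""
    return {s.strip().lower() for s in utterance.split(".") if s.strip()}

@dataclass
class SemanticMemory:
    """Incrementally updated set of claims, standing in for the paper's
    evolving structured state."""
    claims: set[str] = field(default_factory=set)

    def score_and_update(self, utterance: str) -> dict[str, float]:
        new = extract_claims(utterance)
        if not new:
            return {"novelty": 0.0, "relevance": 0.0}
        added = new - self.claims
        # Novelty: fraction of the utterance's claims not already in memory.
        novelty = len(added) / len(new)
        # Relevance (crude proxy): word overlap with claims already in memory.
        mem_words = {w for c in self.claims for w in c.split()}
        utt_words = {w for c in new for w in c.split()}
        relevance = (len(mem_words & utt_words) / len(utt_words)
                     if mem_words else 1.0)
        self.claims |= new  # consolidate new claims into the memory state
        return {"novelty": novelty, "relevance": relevance}

# Usage: a repeated utterance adds no new claims, so its novelty drops to zero.
memory = SemanticMemory()
first = memory.score_and_update("Traffic is worse downtown.")
second = memory.score_and_update("Traffic is worse downtown.")
```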