Beyond One Output: Visualizing and Comparing Distributions of Language Model Generations

arXiv cs.AI / 4/22/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper argues that users’ single-sample interactions with language models hide important distributional structure like multiple modes, rare edge cases, and sensitivity to small prompt changes.
  • It introduces GROVE, an interactive visualization that represents many language model generations as overlapping paths through a text graph, helping users see shared structure, branching points, and clusters while keeping access to raw outputs.
  • The authors ground the design in a formative study (n=13) examining when stochasticity matters in practice and where existing workflows break down when users reason about distributions over text.
  • Across three crowdsourced user studies (N=47, 44, and 40) on distribution-focused tasks, the results suggest a hybrid workflow where graph-based summaries help with structural judgments, while direct inspection of outputs works better for detail-oriented questions.

Abstract

Users typically interact with and evaluate language models via single outputs, but each output is just one sample from a broad distribution of possible completions. This interaction hides distributional structure such as modes, uncommon edge cases, and sensitivity to small prompt changes, leading users to over-generalize from anecdotes when iterating on prompts for open-ended tasks. Informed by a formative study with researchers who use LMs (n=13) examining when stochasticity matters in practice, how they reason about distributions over language, and where current workflows break down, we introduce GROVE. GROVE is an interactive visualization that represents multiple LM generations as overlapping paths through a text graph, revealing shared structure, branching points, and clusters while preserving access to raw outputs. We evaluate GROVE across three crowdsourced user studies (N=47, 44, and 40 participants) targeting complementary distributional tasks. Our results support a hybrid workflow: graph summaries improve structural judgments such as assessing diversity, while direct output inspection remains stronger for detail-oriented questions.
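The core idea of merging many sampled generations into one graph can be illustrated with a small sketch. Note that the paper does not specify GROVE's construction algorithm; the function names (`build_text_graph`, `branching_points`) and the word-level merging rule below are assumptions for illustration, not the authors' method.

```python
from collections import defaultdict

def build_text_graph(generations):
    """Merge whitespace-tokenized generations into one graph.

    Nodes are (depth, token) pairs; edge weights count how many
    generations traverse each transition. Shared wording collapses
    into a single path, and divergence shows up as a node with
    multiple outgoing edges (a branching point). Merging purely by
    (depth, token) is a simplification -- a real tool might align
    tokens by surrounding context instead.
    """
    edges = defaultdict(int)  # (parent_node, child_node) -> traversal count
    root = (0, "<s>")
    for text in generations:
        node = root
        for depth, token in enumerate(text.split(), start=1):
            child = (depth, token)
            edges[(node, child)] += 1
            node = child
    return edges

def branching_points(edges):
    """Nodes where the merged paths diverge (more than one successor)."""
    children = defaultdict(set)
    for parent, child in edges:
        children[parent].add(child)
    return {node: succ for node, succ in children.items() if len(succ) > 1}

# Three hypothetical samples for the same prompt.
gens = [
    "The cat sat on the mat",
    "The cat sat on the rug",
    "The dog ran away",
]
graph = build_text_graph(gens)
forks = branching_points(graph)
```

On this toy input, the paths fork twice: after the shared first word ("cat" vs. "dog") and after the shared prefix "The cat sat on the" ("mat" vs. "rug"), which is exactly the kind of structure a single sampled output would hide.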
