Text-to-Distribution Prediction with Quantile Tokens and Neighbor Context

arXiv cs.CL / April 23, 2026


Key Points

  • The paper proposes Quantile Token Regression for text-to-distribution tasks: text regression problems where the goal is to predict an entire conditional distribution rather than a single point value.
  • It introduces dedicated quantile tokens inserted into the input sequence so self-attention creates direct input-to-quantile pathways for each predicted quantile.
  • The method improves local grounding by retrieving semantically similar neighbor instances and using their empirical distributions as contextual evidence for more accurate estimates.
  • It includes theoretical analysis that clarifies which loss functions correspond to which distributional objectives in quantile regression.
  • Experiments on Inside Airbnb and StackSample using LLMs from 1.7B to 14B parameters show consistent improvements over baselines, including roughly 4 points lower MAPE and prediction intervals about 2x narrower.
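
The loss-function analysis mentioned above concerns objectives for quantile regression. As context, the standard objective for predicting a single quantile level is the pinball (quantile) loss; the sketch below shows that loss in NumPy. This is generic background, not the paper's specific formulation, and the function name and API are illustrative.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Pinball loss for quantile level tau in (0, 1).

    Equals tau * max(y - q, 0) + (1 - tau) * max(q - y, 0),
    so under-prediction is penalized more heavily as tau grows.
    """
    diff = y_true - y_pred
    # max(tau * d, (tau - 1) * d) is the standard compact form of the loss
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# At tau = 0.9, under-predicting by 1 costs 0.9; over-predicting by 1 costs 0.1.
```

Minimizing this loss over a dense grid of tau values yields one predicted quantile per level, which is one common way to represent the full conditional distribution described in the abstract.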

Abstract

Many applications of LLM-based text regression require predicting a full conditional distribution rather than a single point value. We study distributional regression under empirical-quantile supervision, where each input is paired with multiple observed quantile outcomes, and the target distribution is represented by a dense grid of quantiles. We address two key limitations of current approaches: the lack of local grounding for distribution estimates, and the reliance on shared representations that create an indirect bottleneck between inputs and quantile outputs. In this paper, we introduce Quantile Token Regression, which, to our knowledge, is the first work to insert dedicated quantile tokens into the input sequence, enabling direct input-output pathways for each quantile through self-attention. We further augment these quantile tokens with retrieval, incorporating semantically similar neighbor instances and their empirical distributions to ground predictions with local evidence from similar instances. We also provide the first theoretical analysis of loss functions for quantile regression, clarifying which distributional objectives each optimizes. Experiments on the Inside Airbnb and StackSample benchmark datasets with LLMs ranging from 1.7B to 14B parameters show that quantile tokens with neighbors consistently outperform baselines (~4 points lower MAPE and 2x narrower prediction intervals), with especially large gains on smaller and more challenging datasets where quantile tokens produce substantially sharper and more accurate distributions.
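
The retrieval augmentation described in the abstract grounds each prediction in the empirical distributions of semantically similar instances. A minimal sketch of that retrieval step is shown below, assuming precomputed text embeddings and per-instance empirical quantile vectors; the function name, the cosine-similarity choice, and the simple averaging of neighbor quantiles are my illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def retrieve_neighbor_quantiles(query_emb, corpus_embs, corpus_quantiles, k=3):
    """Return the indices of the k most similar corpus instances and an
    aggregate of their empirical quantile vectors, to use as local evidence.

    query_emb:        (d,) embedding of the query text
    corpus_embs:      (n, d) embeddings of candidate neighbor instances
    corpus_quantiles: (n, m) empirical quantile grid for each instance
    """
    # Cosine similarity between the query and every corpus instance
    q = query_emb / np.linalg.norm(query_emb)
    c = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
    sims = c @ q
    top = np.argsort(-sims)[:k]
    # Average the neighbors' quantile vectors as simple contextual evidence
    return top, corpus_quantiles[top].mean(axis=0)
```

In the paper's setup, such neighbor instances and their distributions are supplied to the model as additional context alongside the quantile tokens; here the aggregation is a plain mean purely for illustration.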