Reasoning-Intensive Regression

arXiv cs.CL / 5/4/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper defines reasoning-intensive regression (RiR) as the task of inferring subtle numerical scores from text, which differs from standard language-regression targets such as sentiment or similarity scores.
  • It casts four realistic problems as RiR tasks to form an initial benchmark, and uses that benchmark to test the hypothesis that both prompting frozen LLMs and fine-tuning Transformer encoders via gradient descent often perform poorly on RiR.
  • The authors propose MENTAT, a lightweight approach that combines batch-reflective prompt optimization with neural ensemble learning to improve RiR performance.
  • Experiments show that MENTAT delivers up to a 65% improvement over both baseline strategies, while leaving substantial room for further advances.

Abstract

AI researchers and practitioners increasingly apply large language models (LLMs) to what we call reasoning-intensive regression (RiR), i.e., deducing subtle numerical scores from text. Unlike standard language regression tasks such as sentiment or similarity analysis, RiR often appears instead in ad-hoc applications such as rubric-based scoring, modeling dense rewards in complex environments, or domain-specific retrieval, where much deeper analysis of context is required while only limited task-specific training data and computation are available. We cast four realistic problems as RiR tasks to establish an initial benchmark, and use that to test our hypothesis that prompting frozen LLMs and fine-tuning Transformer encoders via gradient descent will both often struggle in RiR. We then propose MENTAT, a simple and lightweight method that combines batch-reflective prompt optimization with neural ensemble learning. MENTAT achieves up to 65% improvement over both baselines, though substantial room remains for future advances.
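To make the neural-ensemble idea concrete, here is a minimal, illustrative sketch of ensemble regression over text-derived features. It is not the paper's MENTAT implementation: the `featurize` function, the tiny MLP architecture, and all data are invented stand-ins (real RiR systems would use LLM-derived features and batch-reflective prompt optimization, neither of which is shown here). The sketch only demonstrates the general pattern of averaging several differently seeded regressors trained by gradient descent.

```python
# Illustrative sketch only -- NOT the MENTAT method from the paper.
# Shows the generic pattern: train several small regressors on text
# features and average their predictions (a neural ensemble).
import numpy as np

def featurize(text):
    """Toy stand-in for LLM-derived features: crude surface statistics."""
    words = text.split()
    return np.array(
        [len(words),
         sum(len(w) for w in words) / max(len(words), 1),
         text.count(",")],
        dtype=float,
    )

class TinyMLP:
    """One-hidden-layer regressor trained by plain gradient descent."""
    def __init__(self, dim, hidden=8, seed=0):
        r = np.random.default_rng(seed)
        self.W1 = r.normal(0, 0.5, (dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = r.normal(0, 0.5, hidden)
        self.b2 = 0.0

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def fit(self, X, y, lr=0.05, steps=2000):
        for _ in range(steps):
            pred = self.forward(X)
            err = pred - y                          # d(0.5*MSE)/dpred, up to 1/N
            gW2 = self.h.T @ err / len(y)
            gb2 = err.mean()
            dh = np.outer(err, self.W2) * (1 - self.h ** 2)
            gW1 = X.T @ dh / len(y)
            gb1 = dh.mean(axis=0)
            self.W2 -= lr * gW2; self.b2 -= lr * gb2
            self.W1 -= lr * gW1; self.b1 -= lr * gb1

# Made-up texts and "subtle" target scores, purely for illustration.
texts = ["short note",
         "a much longer, comma-heavy sentence, with several clauses",
         "medium length example text here",
         "tiny",
         "another, modest sample"]
y = np.array([1.0, 4.0, 3.0, 0.5, 2.5])
X = np.stack([featurize(t) for t in texts])
X = (X - X.mean(0)) / (X.std(0) + 1e-8)             # standardize features

# Ensemble: average the predictions of several differently seeded models.
ensemble = [TinyMLP(X.shape[1], seed=s) for s in range(5)]
for m in ensemble:
    m.fit(X, y)
pred = np.mean([m.forward(X) for m in ensemble], axis=0)
print(np.round(pred, 2))
```

Averaging over seeds is a standard variance-reduction trick for small, noisy training sets, which is the regime the abstract describes (limited task-specific data and computation).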