LLM Regression with a Latent Iterative State Head
arXiv cs.CL / 4/3/2026
Key Points
- The paper introduces RELISH, a lightweight architecture for LLM-based text regression that predicts scalar targets directly rather than generating text-form numeric outputs or combining multiple generations.
- RELISH uses a learned latent iterative state refined via cross-attention over token-level representations, ending with a linear regressor to produce the final point estimate.
- Experiments across five datasets, four LLM backbones, and two training regimes show that RELISH consistently outperforms prior approaches to LLM-based regression, including autoregressive decoding of numeric outputs and existing predictive-head methods.
- The approach is highly parameter-efficient, adding only about 3.4–3.7M trainable parameters on top of frozen backbones (roughly 0.01–0.04%), which is far smaller than LoRA-style methods reported as adding 0.26–0.42% overhead.
- Overall, RELISH targets improved accuracy for regression tasks while keeping fine-tuning cost low by training only a compact head/state module.
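The architecture described in the key points can be sketched in PyTorch: a learned latent state vector is refined over a few cross-attention steps against frozen token-level features, and a final linear layer emits the scalar prediction. This is a minimal illustrative sketch based only on the summary above; the class name, layer sizes, and number of refinement steps are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LatentIterativeStateHead(nn.Module):
    """Hypothetical RELISH-style head: a learned latent state is
    iteratively refined via cross-attention over token-level hidden
    states from a frozen backbone, then mapped to a scalar."""

    def __init__(self, hidden_dim: int, state_dim: int = 256,
                 num_heads: int = 4, num_steps: int = 3):
        super().__init__()
        # Learned initial latent state (a single query vector).
        self.state0 = nn.Parameter(torch.zeros(1, 1, state_dim))
        # Project backbone hidden states into the head's dimension.
        self.proj = nn.Linear(hidden_dim, state_dim)
        self.attn = nn.MultiheadAttention(state_dim, num_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(state_dim)
        # Final linear regressor produces the point estimate.
        self.regressor = nn.Linear(state_dim, 1)
        self.num_steps = num_steps

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden_dim) from a frozen LLM.
        kv = self.proj(token_states)
        state = self.state0.expand(token_states.size(0), -1, -1)
        for _ in range(self.num_steps):
            # Refine the latent state by attending over token features.
            update, _ = self.attn(state, kv, kv)
            state = self.norm(state + update)
        return self.regressor(state.squeeze(1)).squeeze(-1)

# Usage with dummy features standing in for frozen backbone outputs:
head = LatentIterativeStateHead(hidden_dim=768)
feats = torch.randn(2, 16, 768)   # (batch, tokens, hidden)
preds = head(feats)               # one scalar per example
```

Only the head's parameters would be trained here (the backbone stays frozen), which is consistent with the parameter-efficiency claim above, though the exact module sizes needed to reach the reported ~3.4-3.7M parameters are not specified in this summary.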