Sketching the Readout of Large Language Models for Scalable Data Attribution and Valuation

arXiv cs.LG / 4/20/2026


Key Points

  • The paper proposes RISE (Readout Influence Sketching Estimator), a method that makes data attribution and valuation scalable for large language models by avoiding full-model gradient computation.
  • RISE targets influence “hotspots” at the output layer, using an outer-product gradient decomposition and dual-channel representations (lexical residual and semantic projected-error) compressed via CountSketch.
  • Experiments across OLMo (1B–32B) and Pythia (14M–6.9B) show up to 112× reduction in index storage versus RapidIn, enabling influence analysis even at 32B parameters where gradient-based baselines are memory-infeasible.
  • The method is evaluated on tasks including backdoor data detection (Howdy), domain separation (Finance-Medical), and data quality selection (Brain Rot), and it improves results in a closed-loop setting where further pretraining uses RISE-selected data.
  • Overall, RISE is presented as a practical scalable primitive for influence analysis and zero-shot candidate data utility scoring to support better training-data selection for LLMs.
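The outer-product decomposition mentioned above is the standard gradient structure at a softmax output layer: for cross-entropy loss, the per-token gradient with respect to the output (unembedding) matrix is the rank-1 outer product of a vocabulary-sized residual vector and the hidden state. As a rough, toy-sized illustration (not the paper's code; the vocab/hidden sizes and variable names here are made up):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
V, d = 50, 16                             # toy vocab and hidden sizes
W = rng.normal(size=(V, d)) / np.sqrt(d)  # output (unembedding) matrix
h = rng.normal(size=d)                    # hidden state at one token position
t = 7                                     # target token id

p = softmax(W @ h)
r = p.copy()
r[t] -= 1.0                               # residual r = softmax(Wh) - onehot(t)

# The full per-token gradient dL/dW is the outer product r h^T:
# V*d numbers, yet fully determined by r (V numbers) and h (d numbers).
grad = np.outer(r, h)

# Sanity check against a finite-difference gradient on one entry of W.
def ce_loss(Wm):
    return -np.log(softmax(Wm @ h)[t])

eps = 1e-6
Wp = W.copy()
Wp[3, 5] += eps
assert abs((ce_loss(Wp) - ce_loss(W)) / eps - grad[3, 5]) < 1e-4
```

This factorization is what lets a method index two small vectors per example instead of a full gradient matrix.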

Abstract

Data attribution and valuation are critical for understanding data-model synergy in Large Language Models (LLMs), yet existing gradient-based methods face scalability challenges on LLMs. Inspired by human cognition, where decision-making relies on a focused readout of relevant memories rather than replaying all pathways, we introduce RISE (Readout Influence Sketching Estimator). Instead of computing and indexing gradients across the entire LLM, RISE focuses on influence hotspots at the output layer, where influence signals concentrate and the gradient admits a decomposed outer-product form. This enables a dual-channel representation combining a lexical residual channel (RH) and a semantic projected-error channel (GH). Applying CountSketch projections to these channels achieves strong compression while maintaining accurate attribution. Across the OLMo (1B-32B) and Pythia (14M-6.9B) families, RISE reduces index storage by up to 112× compared to RapidIn and scales to a 32B-parameter LLM, where gradient-based baselines such as RapidIn and ZO-Inf become memory-infeasible. We evaluate RISE on two paradigms: (1) retrospective attribution, retrieving influential training examples for specific predictions, and (2) prospective valuation, scoring candidate data utility zero-shot. We validate RISE on three tasks: Howdy backdoor data detection, Finance-Medical domain separation, and Brain Rot high-quality data selection. In a closed-loop Brain Rot study, continued pretraining on RISE-selected data yields consistent downstream improvements. Overall, RISE provides a practical and scalable primitive for influence analysis and training-data selection in modern large language models.
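The CountSketch step the abstract refers to can be sketched generically in a few lines. This is a textbook CountSketch, not the paper's implementation; the sketch width `m`, the shared-seed hashing, and the channel vectors below are illustrative assumptions. Each coordinate is hashed to one of m buckets and accumulated with a random ±1 sign, so inner products between sketches estimate inner products between the original vectors, which is the quantity influence scores rely on:

```python
import numpy as np

def countsketch(x, m, seed=0):
    """CountSketch: hash each coordinate of x into one of m buckets with
    a random +/-1 sign; <sketch(a), sketch(b)> estimates <a, b>."""
    rng = np.random.default_rng(seed)   # same seed => same hash functions
    n = x.shape[0]
    buckets = rng.integers(0, m, size=n)     # bucket hash h: [n] -> [m]
    signs = rng.choice([-1.0, 1.0], size=n)  # sign hash s: [n] -> {+1, -1}
    s = np.zeros(m)
    np.add.at(s, buckets, signs * x)         # unbuffered scatter-add
    return s

# Compress two correlated 4096-dim "channel" vectors 8x, then compare
# the sketched inner product to the exact one.
rng = np.random.default_rng(1)
a = rng.normal(size=4096)
b = a + 0.1 * rng.normal(size=4096)
sa, sb = countsketch(a, 512), countsketch(b, 512)
# sa @ sb is an unbiased estimate of a @ b; accuracy improves with larger m.
```

Because the transform is linear and the hashes are fixed by the seed, sketches of many training examples can be precomputed once and queried later, which is what makes the index small relative to storing full gradients.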