Revisit, Extend, and Enhance Hessian-Free Influence Functions
arXiv stat.ML / 3/24/2026
Key Points
- Influence functions are reviewed as a way to estimate how individual training samples affect model behavior without expensive retraining, using first-order Taylor approximations.
- The paper explains why directly applying influence functions to deep, non-convex models is difficult (Hessian inversion can be costly or ill-defined) and revisits TracIn as a practical approximation that replaces the inverse Hessian with an identity matrix.
- It offers theoretical justification for why TracIn's simple identity-matrix approximation can work well in deep networks, where exact Hessian-based influence estimates are unreliable.
- The authors extend TracIn to new evaluation goals including fairness and robustness, and further improve it via an ensemble strategy.
- Experiments on synthetic data and large-scale evaluations show TracIn’s effectiveness for noisy label detection, selecting subsets for large language model fine-tuning, and defending against adversarial attacks.
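The TracIn idea summarized above can be sketched concretely: instead of inverting a Hessian, the influence of a training example on a test example is approximated as a sum, over saved training checkpoints, of the learning rate times the dot product of their loss gradients. The snippet below is a minimal illustrative sketch for a linear model with squared loss; the function names and setup are assumptions for this example, not the paper's API.

```python
import numpy as np

def loss_grad(w, x, y):
    """Gradient of the squared loss 0.5 * (w·x - y)^2 with respect to w."""
    return (w @ x - y) * x

def tracin_score(checkpoints, lrs, z_train, z_test):
    """TracIn influence of z_train on z_test.

    Sums lr_t * grad(loss(z_train); w_t) · grad(loss(z_test); w_t)
    over saved checkpoints w_t -- i.e. the inverse Hessian of the classic
    influence function is replaced by the identity matrix.
    """
    x_tr, y_tr = z_train
    x_te, y_te = z_test
    return sum(
        lr * (loss_grad(w, x_tr, y_tr) @ loss_grad(w, x_te, y_te))
        for w, lr in zip(checkpoints, lrs)
    )

# Toy usage: two checkpoints of a 2-parameter linear model.
checkpoints = [np.array([0.0, 0.0]), np.array([0.5, 0.5])]
lrs = [0.1, 0.1]
z = (np.array([1.0, 0.0]), 1.0)

# An example's influence on itself is non-negative (sum of squared norms).
score = tracin_score(checkpoints, lrs, z, z)
```

A positive score means the training example's gradients point in the same direction as the test example's (a "proponent"); a negative score marks an "opponent", which is what makes the method useful for flagging noisy labels.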