Revisit, Extend, and Enhance Hessian-Free Influence Functions
arXiv stat.ML / 2026/3/24
Key points
- Influence functions are reviewed as a way to estimate how individual training samples affect model behavior without expensive retraining, using first-order Taylor approximations.
- The paper explains why directly applying influence functions to deep, non-convex models is difficult (Hessian inversion can be costly or ill-defined) and revisits TracIn as a practical approximation that replaces the inverse Hessian with an identity matrix.
- It offers theoretical insight into why TracIn’s simple identity-matrix approximation can work well despite the known limitations of Hessian-based methods in deep networks.
- The authors extend TracIn to new evaluation goals including fairness and robustness, and further improve it via an ensemble strategy.
- Experiments on synthetic data and large-scale evaluations show TracIn’s effectiveness for noisy label detection, selecting subsets for large language model fine-tuning, and defending against adversarial attacks.
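To make the TracIn idea above concrete: instead of inverting a Hessian, TracIn scores a training example by summing learning-rate-weighted dot products of loss gradients at saved checkpoints. The sketch below is a minimal illustration using logistic-regression gradients; the function names (`grad_logistic`, `tracin_influence`) and the toy setup are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def grad_logistic(w, x, y):
    """Gradient of the logistic loss log(1 + exp(-y * w.x)) w.r.t. w.
    y is a label in {-1, +1}."""
    s = 1.0 / (1.0 + np.exp(y * (w @ x)))
    return -y * s * x

def tracin_influence(checkpoints, lrs, z_train, z_test):
    """TracIn-style score: sum over checkpoints t of
    lr_t * <grad L(z_train; w_t), grad L(z_test; w_t)>.
    Replacing the inverse Hessian with the identity reduces the
    influence estimate to these plain gradient dot products."""
    x_tr, y_tr = z_train
    x_te, y_te = z_test
    return sum(
        lr * (grad_logistic(w, x_tr, y_tr) @ grad_logistic(w, x_te, y_te))
        for w, lr in zip(checkpoints, lrs)
    )

# Toy usage: two saved checkpoints of a 2-D linear model.
checkpoints = [np.zeros(2), np.array([0.1, -0.2])]
lrs = [0.1, 0.1]
z = (np.array([1.0, 2.0]), 1)          # a training point
z_flip = (np.array([1.0, 2.0]), -1)    # same input, opposite label

print(tracin_influence(checkpoints, lrs, z, z))       # positive: proponent
print(tracin_influence(checkpoints, lrs, z, z_flip))  # negative: opponent
```

A point is a "proponent" of a test example when its gradients align with the test gradients across checkpoints (positive score) and an "opponent" when they point in opposite directions, which is what makes the score usable for noisy-label detection.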

