Shape of Memory: a Geometric Analysis of Machine Unlearning in Second-Order Optimizers
arXiv cs.LG · April 28, 2026
Key Points
- The paper argues that existing definitions of machine unlearning do not adequately account for how second-order optimizers behave during data deletion.
- It compares first-order and second-order learners in unlearning scenarios whose loss landscapes vary in eigen-decomposition structure, using this geometry to model the "memory" stored during training.
- While both can match the ideal retrain-from-scratch counterfactual in performance and gradients, second-order optimization exhibits large volatility in its optimizer state.
- The authors find that residual information from supposedly deleted data can persist in second-order optimizer state, and that it is not detected by first-order, gradient-based checks.
- Stability and effective erasure are recovered only when perturbations to the optimizer state are controlled so that the geometric information (the "memory") is explicitly removed.
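The gap between parameter-level and state-level erasure can be illustrated with a minimal sketch (not the paper's actual setup): a Newton-style optimizer on a quadratic loss stores a curvature estimate as optimizer state. Fine-tuning on the retained data after deletion can drive the parameters to match a full retrain, yet the stored Hessian, built from the full dataset, still encodes the deleted rows. All names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

def newton_fit(X, y, steps=20):
    """Newton's method on squared loss; the Hessian is kept as optimizer state."""
    w = np.zeros(X.shape[1])
    H = X.T @ X / len(X)  # curvature estimate stored by the optimizer
    for _ in range(steps):
        g = X.T @ (X @ w - y) / len(X)
        w -= np.linalg.solve(H, g)
    return w, H

# Train on everything, then "unlearn" the first 10 rows by fine-tuning
# on the remainder -- but keep the stale curvature state.
w_full, H_full = newton_fit(X, y)
X_keep, y_keep = X[10:], y[10:]
w_unlearn, H_state = w_full.copy(), H_full  # H_state still reflects deleted rows
for _ in range(20):
    g = X_keep.T @ (X_keep @ w_unlearn - y_keep) / len(X_keep)
    w_unlearn -= np.linalg.solve(H_state, g)

# Ideal counterfactual: retrain from scratch on the retained data only.
w_retrain, H_retrain = newton_fit(X_keep, y_keep)

param_gap = np.linalg.norm(w_unlearn - w_retrain)   # tiny: parameters match
state_gap = np.linalg.norm(H_state - H_retrain)     # not tiny: state remembers
print(param_gap, state_gap)
```

A gradient- or loss-based unlearning audit would pass here (`param_gap` is negligible), while the optimizer state retains a measurable imprint of the deleted rows (`state_gap` stays bounded away from zero), mirroring the paper's distinction between first-order checks and geometric memory.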