From Anchors to Supervision: Memory-Graph Guided Corpus-Free Unlearning for Large Language Models
arXiv cs.CL / 4/16/2026
Key Points
- The paper introduces MAGE, a Memory-Graph Guided Erasure framework for corpus-free unlearning that uses only a lightweight “anchor” to identify a target entity.
- Instead of relying on user-provided forget sets, MAGE probes the target LLM to recover target-related memorization, builds a weighted local memory graph, and generates scoped supervision to drive unlearning.
- MAGE is model-agnostic and can be integrated into standard unlearning methods without requiring access to the original training corpus.
- Experiments on TOFU and RWKU show that MAGE’s self-generated supervision achieves unlearning performance comparable to approaches using external reference supervision while maintaining overall utility.
- The authors argue this enables a more auditable and practical unlearning workflow that reduces reliance on user-supplied forget corpora and mitigates risks like secondary leakage and abuse.
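The pipeline described in the key points above — probe the model from an anchor, weight the recovered statements in a local memory graph, then keep only strongly anchor-linked statements as supervision — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the probing is faked with canned completions, and the token-overlap edge weighting is an assumed stand-in for MAGE's actual scoring.

```python
# Illustrative sketch of a MAGE-style corpus-free unlearning pipeline.
# All function names, prompts, and the weighting scheme are assumptions
# for demonstration; they are not taken from the paper.

from collections import defaultdict

def probe_model(anchor, prompts):
    """Stand-in for querying the target LLM with anchor-centered prompts.
    Here we return canned completions instead of real model output."""
    fake_memory = {
        "Who is {a}?": "{a} is a fictional novelist.",
        "Where does {a} live?": "{a} lives in Paris.",
    }
    return [(p.format(a=anchor), fake_memory[p].format(a=anchor))
            for p in prompts]

def build_memory_graph(anchor, completions):
    """Weight each recovered statement by naive token overlap with the
    anchor -- a toy stand-in for a learned edge-weighting scheme."""
    graph = defaultdict(float)
    anchor_tokens = anchor.split()
    for _prompt, completion in completions:
        overlap = sum(tok in completion for tok in anchor_tokens)
        graph[(anchor, completion)] = overlap / max(len(anchor_tokens), 1)
    return dict(graph)

def scoped_supervision(graph, threshold=0.5):
    """Keep only strongly anchor-linked statements as forget targets,
    scoping the supervision to the target entity."""
    return [stmt for (_, stmt), weight in graph.items() if weight >= threshold]

completions = probe_model("Jane Doe", ["Who is {a}?", "Where does {a} live?"])
graph = build_memory_graph("Jane Doe", completions)
forget_set = scoped_supervision(graph)
print(forget_set)
```

The resulting `forget_set` would then be handed to a standard unlearning method (e.g. gradient ascent on the forget targets) in place of a user-provided forget corpus, matching the model-agnostic integration the paper describes.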