RePAIR: Interactive Machine Unlearning through Prompt-Aware Model Repair
arXiv cs.AI / 4/15/2026
Key Points
- The paper proposes Interactive Machine Unlearning (IMU), enabling users to request targeted forgetting of harmful knowledge, misinformation, or personal data via natural-language instructions at inference time rather than relying on provider-run retraining pipelines.
- It introduces RePAIR, a prompt-aware model repair framework that uses a watchdog model to detect unlearning intent, a surgeon model to produce repair procedures, and a patient model that updates parameters autonomously.
- The core technique, STAMP, performs training-free, single-sample unlearning by redirecting MLP activations toward a refusal subspace using closed-form pseudo-inverse updates.
- A low-rank variant reduces the computational complexity from O(d^3) to O(r^3 + r^2·d) for rank-r updates, making on-device unlearning more feasible; the authors report a ~3× speedup over training-based baselines.
- Experiments across three unlearning targets report near-zero forget scores while preserving model utility, outperforming six state-of-the-art baselines, and the authors suggest the approach extends to multimodal foundation models.
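The closed-form pseudo-inverse idea behind STAMP can be illustrated in miniature. The sketch below is an assumption-laden toy, not the paper's actual implementation: all names (`W`, `X`, `A_refusal`), shapes, and the choice to edit a single MLP projection matrix are illustrative. It shows how a least-squares, training-free weight edit can redirect the activations of selected inputs toward a target "refusal" direction in one step.

```python
import numpy as np

# Hedged sketch, NOT the paper's exact method: a training-free,
# closed-form edit that redirects MLP activations for chosen inputs
# toward a refusal subspace via a Moore-Penrose pseudo-inverse.
rng = np.random.default_rng(0)
d = 64                                # hidden width (assumed)
W = rng.normal(size=(d, d))           # MLP projection to be "repaired"

X = rng.normal(size=(8, d))           # activations of the forget samples
A_refusal = rng.normal(size=(1, d))   # direction spanning the refusal subspace
target = np.tile(A_refusal, (8, 1))   # desired post-edit activations

# Closed-form least-squares update: pick dW so that X @ (W + dW) ≈ target.
# np.linalg.pinv costs O(d^3) here; the paper's low-rank variant would
# restrict the edit to a rank-r subspace to get O(r^3 + r^2·d) instead.
dW = np.linalg.pinv(X) @ (target - X @ W)
W_new = W + dW

err = float(np.linalg.norm(X @ W_new - target))
print(err)  # near zero: the forget samples now map into the refusal subspace
```

Because `X` has far fewer rows than columns, `X @ pinv(X)` acts as the identity on the edited samples, so the redirection is exact for them while the update stays a single rank-limited correction to `W`.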