Modeling LLM Unlearning as an Asymmetric Two-Task Learning Problem
arXiv cs.CL / 4/17/2026
📰 News · Models & Research
Key Points
- The paper reframes LLM “unlearning” as an asymmetric two-task learning setup where retaining general capability is the primary objective and forgetting targeted knowledge is an auxiliary objective.
- It proposes a retention-prioritized gradient synthesis framework that decouples task-specific gradient extraction from conflict-aware gradient combination.
- Using this framework, the authors adapt PCGrad for conflict resolution and introduce SAGO, a new retention-prioritized method built on constructive sign-constrained gradient synthesis (see the sketch after this list).
- Theoretical analysis shows that both methods' synthesized updates maintain non-negative cosine similarity with the retain gradient, with SAGO achieving strictly tighter alignment.
- Experiments on WMDP Bio/Cyber and RWKU demonstrate improved Pareto-optimal trade-offs, with WMDP Bio SimNPO+GD target-model MMLU recovery rising from 44.6% (naive) to 94.0% (+PCGrad) and 96.0% (+SAGO) while preserving comparable forgetting strength.
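The paper's SAGO construction is not reproduced here; as a rough illustration of the retention-prioritized idea, the following is a minimal PyTorch-style sketch of a PCGrad-like combination in which the forget gradient is projected away whenever it conflicts with the retain gradient. The function name, the `alpha` weight, and the use of flattened gradient vectors are illustrative assumptions, not details taken from the paper.

```python
import torch


def retention_prioritized_update(g_retain: torch.Tensor,
                                 g_forget: torch.Tensor,
                                 alpha: float = 1.0) -> torch.Tensor:
    """Combine retain and forget gradients so the synthesized update never
    opposes the retain direction (PCGrad-style projection, with retention
    treated as the primary task).

    g_retain, g_forget: flattened 1-D gradient vectors of equal length.
    alpha: weight on the (possibly projected) forget gradient -- an assumed
    knob for illustration, not a parameter named in the paper.
    """
    # Inner product between the forget and retain gradients.
    dot = torch.dot(g_forget, g_retain)
    if dot < 0:
        # Conflict: remove the component of g_forget that points against
        # g_retain, keeping only its non-conflicting (orthogonal) part.
        g_forget = g_forget - (dot / g_retain.pow(2).sum().clamp_min(1e-12)) * g_retain
    # By construction, <update, g_retain> = ||g_retain||^2 + alpha * max(dot, 0) >= 0,
    # i.e. the combined update has non-negative cosine similarity with g_retain.
    return g_retain + alpha * g_forget
```

In practice the two gradient vectors would come from separate backward passes over the retain and forget losses (e.g. concatenating `p.grad.flatten()` over model parameters after each pass); those loss and model names are placeholders, not identifiers from the paper.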