GAIN: Multiplicative Modulation for Domain Adaptation
arXiv cs.LG / 4/7/2026
Key Points
- LLM domain adaptation can cause catastrophic forgetting because common adaptation methods like full fine-tuning or LoRA introduce new directions in the model’s weight space.
- The paper proposes GAIN (Multiplicative Modulation), which re-emphasizes existing features via multiplicative scaling W_new = S * W using a learned diagonal matrix S applied to the attention output projection and optionally the FFN.
- Experiments across five model families (774M–70B parameters) and eight sequential domain adaptations show GAIN-FFN matches LoRA on in-domain validation perplexity (PPL).
- Critically, GAIN-FFN reduces forgetting: previously trained domains improve by 7–13% in validation PPL, while LoRA degrades them by 18–36%, with examples like BoolQ degrading far less under GAIN-FFN than LoRA after multiple adaptations.
- GAIN introduces a modest parameter overhead (46K–230K per model) and can be absorbed into pretrained weights, yielding zero additional inference cost.
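The multiplicative scaling and weight-absorption steps described above can be sketched in a few lines. This is a hypothetical minimal illustration (the array names, shapes, and per-row placement of the scale are assumptions, not taken from the paper's code): a learned diagonal matrix S rescales an existing projection weight W, and after training S is folded into W so inference uses a plain matmul with no extra cost.

```python
import numpy as np

# Hypothetical sketch of GAIN-style multiplicative modulation.
# Names, shapes, and initialization are illustrative assumptions.
rng = np.random.default_rng(0)

d_out, d_in = 4, 6
W = rng.normal(size=(d_out, d_in))   # frozen pretrained projection weight
x = rng.normal(size=(d_in,))         # an input activation

# Learned diagonal scale s (start at 1 so adaptation begins at identity).
s = np.ones(d_out)
s[1] = 1.2                           # pretend training re-emphasized row 1

# During adaptation: y = diag(s) @ W @ x, i.e. rows of W are rescaled,
# re-emphasizing existing feature directions rather than adding new ones.
y_modulated = (s[:, None] * W) @ x

# After training, absorb S into the weights: W_new = diag(s) @ W.
# Inference then runs a plain matmul -- zero additional cost.
W_new = s[:, None] * W
y_absorbed = W_new @ x

assert np.allclose(y_modulated, y_absorbed)
```

Note that the trainable parameter count is just one scalar per output row (here `d_out` values per modulated matrix), which is consistent with the modest 46K–230K overhead reported across whole models.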