Preconditioned Test-Time Adaptation for Out-of-Distribution Debiasing in Narrative Generation
arXiv cs.CL / 3/17/2026
Key Points
- The work validates that high-bias prompts constitute an out-of-distribution (OOD) shift, and that statically trained debiasing models degrade under it.
- It introduces CAP-TTA, a test-time adaptation framework that performs context-aware LoRA updates only when a bias-risk trigger exceeds a threshold, using a precomputed diagonal preconditioner for fast and stable updates.
- Across toxic-prompt benchmarks, CAP-TTA reduces bias (per human evaluation) while achieving substantially lower update latency than AdamW/SGD, and it mitigates catastrophic forgetting while improving narrative fluency compared with state-of-the-art debiasing baselines.
- The approach emphasizes practical deployment potential for narrative generation, balancing debiasing effectiveness, fluency, and efficiency under distribution shifts.
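The core mechanism described above — a gated, preconditioned parameter update at test time — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `cap_tta_update`, the threshold value, and the epsilon constant are all assumptions, and the precomputed diagonal preconditioner is modeled as a fixed per-parameter scaling vector.

```python
import numpy as np

def cap_tta_update(lora_params, grad, precond_diag, risk_score,
                   threshold=0.5, lr=1e-3):
    """One gated, preconditioned test-time update to LoRA parameters.

    Hypothetical sketch of the scheme summarized above:
    - the update fires only when the bias-risk trigger exceeds a
      threshold, so low-risk prompts incur zero adaptation cost;
    - a precomputed diagonal preconditioner rescales the gradient
      elementwise, avoiding the per-step running-moment bookkeeping
      of optimizers such as AdamW.
    """
    if risk_score <= threshold:
        # Low bias risk: skip adaptation entirely, params unchanged.
        return lora_params
    # Elementwise preconditioned gradient step; epsilon guards
    # against division by near-zero preconditioner entries.
    return lora_params - lr * grad / (precond_diag + 1e-8)

# Example: a 4-parameter LoRA update.
params = np.ones(4)
grad = np.ones(4)
precond = np.full(4, 2.0)

unchanged = cap_tta_update(params, grad, precond, risk_score=0.1)
updated = cap_tta_update(params, grad, precond, risk_score=0.9, lr=0.1)
```

Because the preconditioner is computed once offline, each triggered update costs only an elementwise multiply-subtract, which is consistent with the latency advantage over AdamW/SGD reported in the summary.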

