Lightweight Fairness for LLM-Based Recommendations via Kernelized Projection and Gated Adapters
arXiv cs.LG / 2026/3/26
Key Points
- The paper addresses social bias in LLM-based recommender systems, focusing on reducing attribute leakage when demographic cues are present in the input or representations.
- It introduces a lightweight, scalable fairness method that uses a closed-form kernelized Iterative Null-space Projection (INLP) to remove sensitive attributes from LLM representations without adding trainable parameters (see the projection sketch after this list).
- To avoid sacrificing recommendation quality, the method adds a two-level gated Mixture-of-Experts (MoE) adapter that selectively restores task-relevant signals while aiming not to reintroduce bias (see the adapter sketch after this list).
- Experiments on two public datasets show improved fairness (reduced leakage across multiple protected variables) alongside competitive recommendation accuracy.
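
The projection step can be illustrated with plain (linear) INLP. The sketch below is a minimal illustration, not the paper's closed-form kernelized formulation: it repeatedly fits a least-squares probe for the sensitive attribute and projects the representations onto the probe's null space. All function and variable names here are assumptions for illustration.

```python
import numpy as np

def nullspace_projection(W):
    """Projection matrix onto the null space of the rows of W (shape k x d)."""
    # Orthonormal basis of the row space via SVD, then remove that subspace.
    _, S, Vt = np.linalg.svd(W, full_matrices=False)
    rank = int(np.sum(S > 1e-10))
    basis = Vt[:rank]                              # (rank, d) row-space basis
    return np.eye(W.shape[1]) - basis.T @ basis    # (d, d)

def inlp_debias(X, z, n_iters=5):
    """Iteratively remove linear predictability of the sensitive attribute z
    from representations X (n x d). Plain linear INLP; the paper instead uses
    a closed-form kernelized variant."""
    P_total = np.eye(X.shape[1])
    X_cur = X.copy()
    for _ in range(n_iters):
        # Fit a least-squares "attribute probe" predicting z from X_cur.
        w, *_ = np.linalg.lstsq(X_cur, z, rcond=None)    # (d,) or (d, k)
        W = np.atleast_2d(w.T)                           # rows span probe directions
        P = nullspace_projection(W)
        X_cur = X_cur @ P            # remove the probe's subspace
        P_total = P_total @ P        # accumulate the overall projection
    return X_cur, P_total
```

At inference time only the accumulated projection matrix is needed, e.g. `h_fair = h @ P_total`, which is why the approach adds no trainable parameters.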

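The restoration step can be sketched as a gated mixture-of-experts adapter. The module below is a generic illustration, not the paper's exact architecture: it assumes a top-level scalar gate deciding how much signal to re-inject and an expert-level softmax gate distributing weight across experts; all class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class GatedMoEAdapter(nn.Module):
    """Illustrative two-level gated mixture-of-experts adapter applied on top
    of debiased representations."""
    def __init__(self, dim: int, n_experts: int = 4, hidden: int = 64):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )
        self.expert_gate = nn.Linear(dim, n_experts)  # level 2: which experts
        self.mix_gate = nn.Linear(dim, 1)             # level 1: how much to restore

    def forward(self, h_debiased: torch.Tensor) -> torch.Tensor:
        # Weight each expert's correction of the debiased representation.
        weights = torch.softmax(self.expert_gate(h_debiased), dim=-1)            # (B, E)
        expert_out = torch.stack([e(h_debiased) for e in self.experts], dim=1)   # (B, E, D)
        correction = (weights.unsqueeze(-1) * expert_out).sum(dim=1)             # (B, D)
        # Scalar gate controls how much task-relevant signal is re-injected.
        alpha = torch.sigmoid(self.mix_gate(h_debiased))                          # (B, 1)
        return h_debiased + alpha * correction
```

In use, such an adapter would sit after the projection (e.g. `h_out = adapter(h @ P_total)`) and be trained only on the recommendation objective, so that the gates recover task signal without relearning the removed attribute.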