KARMA: Knowledge-Action Regularized Multimodal Alignment for Personalized Search at Taobao

arXiv cs.LG / 3/26/2026


Key Points

  • The paper identifies a “Knowledge–Action Gap” in personalized search fine-tuning with LLMs, where optimizing for personalized actions can conflict with preserving pre-trained semantic knowledge.
  • It reports that action-only training objectives can cause “Semantic Collapse,” including attention “sinks,” which harms generalization for personalized search.
  • The authors propose KARMA (Knowledge–Action Regularized Multimodal Alignment), a framework that keeps semantic knowledge by using semantic reconstruction as a train-time regularizer while still optimizing retrieval-oriented next-interest embeddings.
  • KARMA uses two complementary constraints—history-conditioned semantic generation and embedding-conditioned semantic reconstruction—to maintain semantic decodability during training.
  • Experiments on Taobao show KARMA mitigates semantic collapse and improves ranking and retrieval metrics; in ablations the semantic-decodability objective contributes up to +22.5 HR@200, and an online deployment yields +0.5% Item Click with low inference overhead.

Abstract

Large Language Models (LLMs) are equipped with profound semantic knowledge, making them a natural choice for injecting semantic generalization into personalized search systems. However, in practice we find that directly fine-tuning LLMs on industrial personalized tasks (e.g., next-item prediction) often yields suboptimal results. We attribute this bottleneck to a critical Knowledge–Action Gap: the inherent conflict between preserving pre-trained semantic knowledge and aligning with specific personalized actions through discriminative objectives. Empirically, action-only training objectives induce Semantic Collapse, such as attention "sinks". This degradation severely cripples the LLM's generalization, so fine-tuning fails to improve the personalized search system. We propose KARMA (Knowledge–Action Regularized Multimodal Alignment), a unified framework that treats semantic reconstruction as a train-only regularizer. KARMA optimizes a next-interest embedding for retrieval (Action) while enforcing semantic decodability (Knowledge) through two complementary objectives: (i) history-conditioned semantic generation, which anchors optimization to the LLM's native next-token distribution, and (ii) embedding-conditioned semantic reconstruction, which constrains the interest embedding to remain semantically recoverable. On the Taobao search system, KARMA mitigates semantic collapse (attention-sink analysis) and improves both action metrics and semantic fidelity. In ablations, semantic decodability yields up to +22.5 HR@200. With KARMA, we achieve +0.25 CTR AUC in ranking, +1.86 HR in pre-ranking, and +2.51 HR in recall. Deployed online with low inference overhead at the ranking stage, KARMA drives a +0.5% increase in Item Click.
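The three-part objective described in the abstract (a retrieval "action" loss plus two semantic regularizers) can be sketched as a weighted sum. The following is a minimal numpy sketch under assumed loss forms — InfoNCE for the retrieval loss and token-level cross-entropy for both semantic objectives; the function names, loss choices, and weights `lambda_gen` / `lambda_rec` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab, seq = 64, 100, 8  # embedding dim, toy vocab size, description length

def log_softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def action_loss(interest, pos_item, neg_items, tau=0.07):
    # Assumed InfoNCE retrieval loss: the next-interest embedding should
    # score the clicked item above in-batch negatives (cosine similarity).
    def cos(a, b):
        return (b @ a) / (np.linalg.norm(a) * np.linalg.norm(b, axis=-1))
    sims = np.concatenate([cos(interest, pos_item[None]),
                           cos(interest, neg_items)]) / tau
    return -log_softmax(sims)[0]  # positive sits at index 0

def token_nll(logits, targets):
    # Token-level cross-entropy, used for both semantic objectives:
    # (i) generation conditioned on history, (ii) reconstruction
    # conditioned on the interest embedding.
    lp = log_softmax(logits)
    return -lp[np.arange(len(targets)), targets].mean()

interest  = rng.normal(size=d)         # next-interest embedding (Action head)
pos_item  = rng.normal(size=d)         # embedding of the clicked item
neg_items = rng.normal(size=(31, d))   # in-batch negative items

gen_logits = rng.normal(size=(seq, vocab))  # LLM logits given user history
rec_logits = rng.normal(size=(seq, vocab))  # logits decoded from `interest`
targets    = rng.integers(0, vocab, size=seq)  # item-description tokens

lambda_gen, lambda_rec = 0.5, 0.5  # assumed regularizer weights
total = (action_loss(interest, pos_item, neg_items)
         + lambda_gen * token_nll(gen_logits, targets)
         + lambda_rec * token_nll(rec_logits, targets))
print(float(total))
```

Because the two semantic terms act only on training-time decodability, they can be dropped at inference, which is consistent with the low serving overhead reported for the deployed system.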