KARMA: Knowledge-Action Regularized Multimodal Alignment for Personalized Search at Taobao
arXiv cs.LG · March 26, 2026
Key Points
- The paper identifies a “Knowledge–Action Gap” in personalized search fine-tuning with LLMs, where optimizing for personalized actions can conflict with preserving pre-trained semantic knowledge.
- It reports that action-only training objectives can cause “Semantic Collapse,” including attention “sinks,” which harms generalization for personalized search.
- The authors propose KARMA (Knowledge–Action Regularized Multimodal Alignment), a framework that keeps semantic knowledge by using semantic reconstruction as a train-time regularizer while still optimizing retrieval-oriented next-interest embeddings.
- KARMA uses two complementary constraints—history-conditioned semantic generation and embedding-conditioned semantic reconstruction—to maintain semantic decodability during training.
- Experiments on Taobao show that KARMA mitigates semantic collapse and improves both ranking and retrieval metrics, with reported gains of up to +22.5 HR@200 attributed to preserved semantic decodability, and an online deployment showing +0.5% Item Click at low inference overhead.
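The combined objective described above can be sketched as a retrieval loss on the next-interest embedding plus two weighted semantic regularizers. This is a minimal illustrative sketch, not the paper's implementation: the function name `karma_style_loss`, the loss weights `lam_gen`/`lam_recon`, and the reduction of each constraint to a single cross-entropy term are all assumptions for illustration.

```python
import numpy as np

def softmax_xent(logits, target):
    """Cross-entropy of a single target index under softmax(logits)."""
    z = logits - logits.max()  # stabilize before exponentiating
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

def karma_style_loss(interest_emb, item_embs, pos_idx,
                     gen_logits, gen_target,
                     recon_logits, recon_target,
                     lam_gen=0.1, lam_recon=0.1, temperature=0.07):
    """Hypothetical combined objective in the spirit of KARMA:
    - l_ret:  retrieval loss scoring the next-interest embedding
              against candidate item embeddings (softmax over similarities);
    - l_gen:  history-conditioned semantic generation loss (here a
              single-token cross-entropy stand-in);
    - l_rec:  embedding-conditioned semantic reconstruction loss,
              keeping the embedding semantically decodable.
    The weights lam_gen and lam_recon are illustrative hyperparameters.
    """
    sims = item_embs @ interest_emb / temperature
    l_ret = softmax_xent(sims, pos_idx)
    l_gen = softmax_xent(gen_logits, gen_target)
    l_rec = softmax_xent(recon_logits, recon_target)
    return l_ret + lam_gen * l_gen + lam_recon * l_rec
```

Because both regularizers are cross-entropies (nonnegative), turning them off can only lower the total loss; during training they act as a soft constraint pulling the model back toward semantically decodable representations while the retrieval term optimizes for personalized actions.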