AI Navigate

MoKus: Leveraging Cross-Modal Knowledge Transfer for Knowledge-Aware Concept Customization

arXiv cs.AI / 3/16/2026

💬 Opinion · Models & Research

Key Points

  • MoKus introduces a knowledge-aware concept customization task that binds diverse textual knowledge to target visual concepts to improve fidelity and stability when using rare tokens.
  • The core idea is cross-modal knowledge transfer: modifying knowledge within the text prompt naturally transfers to the visual generation.
  • The framework uses two stages: visual concept learning to create an anchor representation, and textual knowledge updating to align knowledge queries with the anchor.
  • The authors present KnowCusBench as the first benchmark for this task and show MoKus outperforms state-of-the-art methods on the benchmark and related world-knowledge tests.
  • The approach can extend to other knowledge-aware applications like virtual concept creation and concept erasure, indicating broader applicability across multimodal generation tasks.

Abstract

Concept customization typically binds rare tokens to a target concept. Unfortunately, these approaches often suffer from unstable performance because the pretraining data seldom contains such rare tokens, and the tokens themselves fail to convey the inherent knowledge of the target concept. We therefore introduce Knowledge-aware Concept Customization, a novel task that aims to bind diverse textual knowledge to target visual concepts. The task requires the model to identify the knowledge within the text prompt in order to perform high-fidelity customized generation, while efficiently binding all of that textual knowledge to the target concept. To this end, we propose MoKus, a novel framework for knowledge-aware concept customization. Our framework relies on a key observation, cross-modal knowledge transfer: modifying knowledge within the text modality naturally transfers to the visual modality during generation. Inspired by this observation, MoKus comprises two stages: (1) visual concept learning, in which we learn an anchor representation that stores the visual information of the target concept; and (2) textual knowledge updating, in which we update the answers to knowledge queries to point at the anchor representation, enabling high-fidelity customized generation. To comprehensively evaluate MoKus on this new task, we introduce KnowCusBench, the first benchmark for knowledge-aware concept customization. Extensive evaluations demonstrate that MoKus outperforms state-of-the-art methods. Moreover, cross-modal knowledge transfer allows MoKus to be easily extended to other knowledge-aware applications such as virtual concept creation and concept erasure. We also show that our method achieves improvements on world-knowledge benchmarks.
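The abstract describes the two stages only at a high level; the paper's actual objectives and architecture are not detailed here. As a rough intuition, the pipeline can be sketched as a toy in plain NumPy, where every name and mechanic (the anchor as a mean feature vector, the knowledge update as a simple nudge toward the anchor) is a hypothetical stand-in for the real method, which operates inside a text-to-image generator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding table: token -> vector. All entries are hypothetical
# stand-ins; the real method edits representations inside a T2I model.
DIM = 8
embeddings = {tok: rng.normal(size=DIM)
              for tok in ("<anchor>", "fluffy cat", "Siberia")}

def stage1_visual_concept_learning(concept_features):
    """Stage 1: learn an anchor representation storing the target
    concept's visual information. Faked here as the mean of per-image
    feature vectors."""
    embeddings["<anchor>"] = np.mean(concept_features, axis=0)
    return embeddings["<anchor>"]

def stage2_textual_knowledge_update(knowledge_pairs, lr=0.5, steps=100):
    """Stage 2: update the answers to knowledge queries so they align
    with the anchor. Each answer embedding is nudged toward the anchor,
    a stand-in for the paper's (unspecified) knowledge-editing objective."""
    anchor = embeddings["<anchor>"]
    for _ in range(steps):
        for _query, answer in knowledge_pairs:
            emb = embeddings[answer]
            emb += lr * (anchor - emb)  # in-place move toward the anchor
    return embeddings

# Simulated per-image feature vectors of the target concept.
images = rng.normal(loc=1.0, size=(4, DIM))
anchor = stage1_visual_concept_learning(images)

# Bind textual knowledge (e.g. "its breed is a fluffy cat",
# "its origin is Siberia") to the visual anchor.
stage2_textual_knowledge_update([("breed", "fluffy cat"),
                                 ("origin", "Siberia")])

# After updating, knowledge answers lie close to the visual anchor, so
# editing the text-side knowledge "transfers" to the visual concept.
for ans in ("fluffy cat", "Siberia"):
    print(f"{ans}: distance to anchor = "
          f"{np.linalg.norm(embeddings[ans] - anchor):.4f}")
```

The design mirrors the claimed observation: because the knowledge answers and the visual anchor share one representation space, changing what a query resolves to on the text side immediately changes what gets generated on the visual side.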