Joint Knowledge Base Completion and Question Answering by Combining Large Language Models and Small Language Models

arXiv cs.AI / 4/8/2026


Key Points

  • The paper addresses knowledge base completion (KBC) and knowledge base question answering (KBQA) jointly, demonstrating the value of a "joint" setting in which the two tasks reinforce each other, and notes that prior work has centered on small language models (SLMs), leaving the strong reasoning ability of LLMs unexploited.
  • The proposed framework, JCQL, combines the strengths of the LLM and the SLM, exploiting the complementarity of KBC and KBQA through a design in which the two tasks iteratively reinforce each other.
  • To make KBC enhance KBQA, an SLM-trained KBC model is incorporated as an "action" of the LLM agent-based KBQA model, augmenting its reasoning paths and mitigating hallucination and the high computational cost of KBQA.
  • To make KBQA enhance KBC, KBQA's reasoning paths are used as supplementary training data to incrementally fine-tune the KBC model, improving the SLM's KBC capability.
  • Experiments on two public benchmark datasets report that JCQL surpasses existing baselines on both KBC and KBQA.

Abstract

Knowledge Bases (KBs) play a key role in various applications. As two representative KB-related tasks, knowledge base completion (KBC) and knowledge base question answering (KBQA) are closely related and inherently complementary to each other. Thus, it is beneficial to solve the joint KBC and KBQA task so that the two reinforce each other. However, existing studies usually rely on the small language model (SLM) to enhance them jointly, and the large language model (LLM)'s strong reasoning ability is ignored. In this paper, by combining the strengths of the LLM with the SLM, we propose a novel framework, JCQL, which makes these two tasks enhance each other in an iterative manner. To make KBC enhance KBQA, we augment the LLM agent-based KBQA model's reasoning paths by incorporating an SLM-trained KBC model as an action of the agent, alleviating the LLM's hallucination and high computational cost issues in KBQA. To make KBQA enhance KBC, we incrementally fine-tune the KBC model by leveraging KBQA's reasoning paths as supplementary training data, improving the SLM's ability in KBC. Extensive experiments on two public benchmark datasets demonstrate that JCQL surpasses all baselines on both the KBC and KBQA tasks.
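The two enhancement directions described in the abstract can be illustrated with a minimal toy loop. All names and data structures below are hypothetical assumptions for illustration only; the actual JCQL system uses an LLM agent and a fine-tuned SLM rather than the dictionary-based stand-ins here.

```python
# Hypothetical sketch of the JCQL-style mutual-enhancement loop.
# kbc_predict stands in for the SLM-trained KBC model; kbqa_agent stands
# in for the LLM agent. Neither reflects the paper's real implementation.

def kbc_predict(kb, head, relation):
    """Stand-in KBC model: propose a tail entity for a missing fact.
    A toy fallback replaces the SLM's learned link prediction."""
    return kb.get((head, relation), f"PRED::{head}|{relation}")

def kbqa_agent(question, kb, max_steps=3):
    """Agent-style KBQA: walk the KB along the question's relations.
    When a fact is missing, invoke the KBC model as one of the
    agent's actions (direction 1: KBC enhances KBQA)."""
    entity, path = question["topic_entity"], []
    for relation in question["relations"][:max_steps]:
        if (entity, relation) in kb:
            nxt = kb[(entity, relation)]             # direct KB lookup
        else:
            nxt = kbc_predict(kb, entity, relation)  # KBC as an agent action
        path.append((entity, relation, nxt))
        entity = nxt
    return entity, path

def fine_tune_kbc(train_triples, reasoning_paths):
    """Stand-in for incremental fine-tuning: KBQA reasoning paths become
    supplementary KBC training triples (direction 2: KBQA enhances KBC)."""
    return train_triples + [t for p in reasoning_paths for t in p]

# Toy KB with one known fact, and a 2-hop question whose second hop is missing.
kb = {("Paris", "capital_of"): "France"}
q = {"topic_entity": "Paris", "relations": ["capital_of", "continent"]}

answer, path = kbqa_agent(q, kb)
training_data = fine_tune_kbc([], [path])  # paths feed the next KBC round
```

In this sketch the second hop is absent from the KB, so the agent falls back to the KBC action instead of hallucinating, and the traversed path is then recycled as KBC training data, closing the iterative loop.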