Dual-Enhancement Product Bundling: Bridging Interactive Graph and Large Language Model

arXiv cs.CL / 4/16/2026


Key Points

  • The paper addresses product bundling in e-commerce by combining collaborative filtering-style interactive graph learning with LLM-based semantic understanding to overcome cold-start and graph-encoding limitations.
  • It proposes a dual-enhancement approach using a graph-to-text paradigm where a Dynamic Concept Binding Mechanism (DCBM) converts graph structures into natural-language prompts aligned with LLM tokenization.
  • DCBM is designed to map domain-specific entities into LLM-friendly representations, helping the model capture combinatorial constraints implied by the interactive graph.
  • Experiments on three benchmarks (POG, POG_dense, Steam) show reported gains of 6.3%–26.5% over state-of-the-art baselines, indicating improved bundle recommendation quality.

Abstract

Product bundling boosts e-commerce revenue by recommending complementary item combinations. However, existing methods face two critical challenges: (1) collaborative filtering approaches struggle with cold-start items owing to their dependence on historical interactions, and (2) LLMs lack the inherent capability to model interactive graphs directly. To bridge this gap, we propose a dual-enhancement method that integrates interactive graph learning and LLM-based semantic understanding for product bundling. Our method introduces a graph-to-text paradigm, which leverages a Dynamic Concept Binding Mechanism (DCBM) to translate graph structures into natural-language prompts. The DCBM plays a critical role in aligning domain-specific entities with LLM tokenization, enabling effective comprehension of combinatorial constraints. Experiments on three benchmarks (POG, POG_dense, Steam) demonstrate 6.3%–26.5% improvements over state-of-the-art baselines.
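The core idea of the graph-to-text paradigm can be sketched in a few lines: serialize the edges of an item interaction graph into a natural-language prompt that an LLM can consume with its ordinary tokenizer. The sketch below is a minimal illustration under assumed inputs; the item names, prompt wording, and `graph_to_prompt` function are hypothetical and do not reproduce the paper's actual DCBM.

```python
def graph_to_prompt(items, edges):
    """Render a weighted item co-purchase graph as a natural-language prompt.

    `items` is a list of item names; `edges` is a list of
    (item_a, item_b, weight) tuples from the interaction graph.
    This is an illustrative stand-in for a graph-to-text step,
    not the paper's Dynamic Concept Binding Mechanism.
    """
    lines = [f"Items: {', '.join(items)}."]
    # Each graph edge becomes one sentence the LLM can tokenize normally.
    for a, b, w in edges:
        lines.append(f"'{a}' is frequently bought with '{b}' (score {w:.2f}).")
    lines.append("Suggest a bundle of complementary items.")
    return "\n".join(lines)

items = ["running shoes", "sports socks", "water bottle"]
edges = [
    ("running shoes", "sports socks", 0.82),
    ("running shoes", "water bottle", 0.41),
]
prompt = graph_to_prompt(items, edges)
print(prompt)
```

In this toy form, graph structure (which items co-occur, and how strongly) survives the conversion as explicit sentences, which is what lets a text-only model reason about combinatorial constraints it could not read from an adjacency matrix.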