AI Navigate

Federated Personal Knowledge Graph Completion with Lightweight Large Language Models for Personalized Recommendations

arXiv cs.LG · March 17, 2026


Key Points

  • Introduces FedTREK-LM, a framework that unifies lightweight large language models, evolving personal knowledge graphs, federated learning, and Kahneman-Tversky Optimization to enable scalable, decentralized personalization.
  • Demonstrates context-aware reasoning by prompting LLMs with structured PKGs for personalized recommendations, including movie and recipe suggestions, evaluated on three Qwen3 models (0.6B, 1.7B, 4B).
  • Reports more than a 4x improvement in F1-score over state-of-the-art baselines (HAKE, KBGAT, FedKGRec) on movie and food benchmarks.
  • Finds that real user data is critical for effective personalization, with synthetic data degrading performance by up to 46%, underscoring that the approach preserves privacy but remains dependent on genuine user data.
  • Suggests the approach generalizes across decentralized, evolving user PKGs, offering a practical paradigm for adaptive, LLM-powered personalization.
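The core mechanism in the second bullet — prompting an LLM with a structured PKG — can be sketched minimally. The triple format, template wording, and example data below are illustrative assumptions, not the paper's actual serialization:

```python
# Sketch: serializing a personal knowledge graph (PKG) into an LLM prompt
# for context-aware recommendation. Format and names are hypothetical.

def pkg_to_prompt(triples, task):
    """Render (head, relation, tail) triples as a textual context block."""
    lines = [f"{h} -- {r} --> {t}" for h, r, t in triples]
    return "User knowledge graph:\n" + "\n".join(lines) + f"\n\nTask: {task}"

# Hypothetical PKG for a single user.
pkg = [
    ("user", "watched", "Inception"),
    ("user", "rated_highly", "Interstellar"),
    ("Interstellar", "genre", "sci-fi"),
]
prompt = pkg_to_prompt(pkg, "Recommend one movie and explain why.")
```

The resulting string would then be passed to a lightweight model (e.g. a Qwen3 0.6B–4B checkpoint) as the recommendation prompt; as the PKG evolves, the prompt is regenerated from the updated triples.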

Abstract

Personalized recommendation increasingly relies on private user data, motivating approaches that can adapt to individuals without centralizing their information. We present Federated Targeted Recommendations with Evolving Knowledge graphs and Language Models (FedTREK-LM), a framework that unifies lightweight large language models (LLMs), evolving personal knowledge graphs (PKGs), federated learning (FL), and Kahneman-Tversky Optimization to enable scalable, decentralized personalization. By prompting LLMs with structured PKGs, FedTREK-LM performs context-aware reasoning for personalized recommendation tasks such as movie and recipe suggestions. Across three lightweight Qwen3 models (0.6B, 1.7B, 4B), FedTREK-LM consistently and substantially outperforms state-of-the-art KG completion and federated recommendation baselines (HAKE, KBGAT, and FedKGRec), achieving more than a 4x improvement in F1-score on the movie and food benchmarks. Our results further show that real user data is critical for effective personalization, as synthetic data degrades performance by up to 46%. Overall, FedTREK-LM offers a practical paradigm for adaptive, LLM-powered personalization that generalizes across decentralized, evolving user PKGs.
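The federated component in the abstract keeps user data on-device and aggregates only locally trained updates. One plausible instantiation of that aggregation step is FedAvg-style weighted averaging; the flat dict-of-floats weight representation and function names below are simplifications for illustration, not the paper's implementation:

```python
# Sketch: FedAvg-style aggregation of per-client model parameters,
# weighted by each client's local dataset size. Raw user data never
# leaves the client; only parameter dicts are shared.

def fedavg(client_weights, client_sizes):
    """Size-weighted average of per-client parameter dicts."""
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
        for k in keys
    }

# Two hypothetical clients with one shared parameter "w".
global_weights = fedavg([{"w": 1.0}, {"w": 3.0}], client_sizes=[3, 1])
```

In a full system each round would broadcast the aggregated weights back to clients for further local training on their private PKG-derived data.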