Hierarchical Long-Term Semantic Memory for LinkedIn's Hiring Agent

arXiv cs.LG · April 30, 2026


Key Points

  • The paper introduces a Hierarchical Long-Term Semantic Memory (HLTM) framework to help LLM agents build industrial-grade long-term semantic memory for personalized, context-aware interactions.
  • HLTM addresses key deployment challenges—scalability, low-latency retrieval, privacy constraints, cross-domain generalizability, and observability—by organizing text into a schema-aligned memory tree across multiple granularities.
  • It includes an adaptation mechanism to generalize the memory system across diverse use cases, improving robustness beyond a single domain.
  • Evaluations on LinkedIn’s Hiring Assistant indicate HLTM boosts answer correctness and retrieval F1 by more than 10%, while advancing the Pareto frontier between query latency and indexing latency.
  • HLTM is reported as already deployed in production within LinkedIn’s Hiring Agent to power core personalization features in hiring workflows.

Abstract

Large Language Model (LLM) agents are increasingly used in real-world products, where personalized and context-aware user interactions are essential. A central enabler of such capabilities is the agent's long-term semantic memory system, which extracts implicit and explicit signals from noisy longitudinal behavioral data, stores them in a structured form, and supports low-latency retrieval. Building industrial-grade long-term memory for LLM agents raises five challenges: scalability, low-latency retrieval, privacy constraints, cross-domain generalizability, and observability. We introduce the Hierarchical Long-Term Semantic Memory (HLTM) framework, which organizes textual data into a schema-aligned memory tree that captures semantic knowledge at multiple levels of granularity, enabling scalable ingestion, privacy-aware storage, low-latency retrieval, and transparent provenance; HLTM further incorporates an adaptation mechanism to generalize across diverse use cases. Extensive evaluations on LinkedIn's Hiring Assistant show that HLTM improves answer correctness and retrieval F1 by more than 10%, while significantly advancing the Pareto frontier between query and indexing latency. HLTM has been deployed in LinkedIn's Hiring Assistant to power core personalization features in production hiring workflows.
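The paper does not publish its schema, but the core idea of a schema-aligned memory tree with multi-granularity nodes, provenance links, and top-down retrieval can be sketched in miniature. The node levels (`profile`/`topic`/`fact`), field names, and keyword-overlap scoring below are illustrative assumptions, not HLTM's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryNode:
    """One node in a toy schema-aligned memory tree.

    Coarse summaries sit near the root, fine-grained facts at the
    leaves; `source_ids` keep provenance back to the raw events each
    node was distilled from (the levels here are illustrative, not
    the paper's schema).
    """
    level: str                                  # e.g. "profile" | "topic" | "fact"
    text: str                                   # distilled semantic content
    source_ids: list = field(default_factory=list)
    children: list = field(default_factory=list)

def retrieve(node: MemoryNode, query_terms: set, budget: int = 3) -> list:
    """Greedy top-down retrieval: descend into the children whose text
    best overlaps the query, collecting up to `budget` leaf facts.
    Stands in for whatever learned ranking a production system uses."""
    if not node.children:
        return [node]
    scored = sorted(
        node.children,
        key=lambda c: -len(query_terms & set(c.text.lower().split())),
    )
    hits = []
    for child in scored:
        if len(hits) >= budget:
            break
        hits.extend(retrieve(child, query_terms, budget - len(hits)))
    return hits

# Toy tree: one recruiter's memory about a backend hiring project.
tree = MemoryNode("profile", "recruiter preferences for backend role", children=[
    MemoryNode("topic", "preferred candidate skills", children=[
        MemoryNode("fact", "prefers candidates with Go and Kubernetes",
                   source_ids=["msg_12"]),
        MemoryNode("fact", "values open source contributions",
                   source_ids=["msg_40"]),
    ]),
    MemoryNode("topic", "interview logistics", children=[
        MemoryNode("fact", "schedules interviews on Tuesdays",
                   source_ids=["msg_7"]),
    ]),
])

facts = retrieve(tree, query_terms={"kubernetes", "skills"}, budget=2)
```

Descending the hierarchy instead of scanning a flat store is what lets this shape of index trade indexing cost for query latency: each query touches only the branches whose summaries match, and every returned fact carries `source_ids` for the kind of provenance/observability the paper emphasizes.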