LLMs Reading the Rhythms of Daily Life: Aligned Understanding for Behavior Prediction and Generation

arXiv cs.CL / 4/28/2026


Key Points

  • The paper addresses the challenge of modeling human daily behaviors as long sequence patterns influenced by intentions, preferences, and context, which is important for systems like assistants and recommendation engines.
  • It argues that while LLMs have strong semantic understanding and generation ability, they cannot be applied directly because behavioral data differs from natural language in both structure and modality.
  • The authors introduce Behavior Understanding Alignment (BUA), which aligns LLMs with behavior modeling by using sequence embeddings from pretrained behavior models as “alignment anchors.”
  • BUA trains the LLM via a three-stage curriculum and uses a multi-round dialogue setup to support both behavior prediction and behavior generation.
  • Experiments on two real-world datasets show BUA significantly improves performance over existing methods on both prediction and generation tasks.

Abstract

Human daily behavior unfolds as complex sequences shaped by intentions, preferences, and context. Effectively modeling these behaviors is crucial for intelligent systems such as personal assistants and recommendation engines. While recent advances in deep learning and behavior pre-training have improved behavior prediction, key challenges remain, particularly in handling long-tail behaviors, enhancing interpretability, and supporting multiple tasks within a unified framework. Large language models (LLMs) offer a promising direction due to their semantic richness, strong interpretability, and generative capabilities. However, the structural and modal differences between behavioral data and natural language limit the direct applicability of LLMs. To address this gap, we propose Behavior Understanding Alignment (BUA), a novel framework that integrates LLMs into human behavior modeling through a structured curriculum learning process. BUA employs sequence embeddings from pretrained behavior models as alignment anchors and guides the LLM through a three-stage curriculum, while a multi-round dialogue setting introduces prediction and generation capabilities. Experiments on two real-world datasets demonstrate that BUA significantly outperforms existing methods in both tasks, highlighting its effectiveness and flexibility in applying LLMs to complex human behavior modeling.
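The core alignment idea, using embeddings from a pretrained behavior model as anchors inside the LLM's input, can be sketched as a learned projection from the behavior embedding space into the LLM's token-embedding space, so each behavior sequence contributes pseudo-tokens to the prompt. The following is a minimal illustrative sketch, not the paper's implementation; all dimensions, names, and the linear-projection choice are assumptions.

```python
import numpy as np

# Illustrative dimensions (assumptions, not from the paper).
BEHAVIOR_DIM = 64   # embedding size of the pretrained behavior model
LLM_DIM = 256       # token-embedding size of the LLM

rng = np.random.default_rng(0)

# A pretrained behavior model would emit one embedding per behavior
# sequence; here we stand in a batch of 4 such sequence embeddings.
behavior_embeddings = rng.standard_normal((4, BEHAVIOR_DIM))

# A (normally trainable) projection mapping behavior embeddings into the
# LLM's embedding space, so each one can be spliced into the prompt as an
# "alignment anchor" pseudo-token.
W = rng.standard_normal((BEHAVIOR_DIM, LLM_DIM)) / np.sqrt(BEHAVIOR_DIM)
b = np.zeros(LLM_DIM)

def project_anchors(seq_emb: np.ndarray) -> np.ndarray:
    """Project behavior-sequence embeddings into LLM token-embedding space."""
    return seq_emb @ W + b

anchors = project_anchors(behavior_embeddings)
print(anchors.shape)  # (4, 256): one anchor pseudo-token per sequence
```

In a full system these anchor vectors would be concatenated with ordinary token embeddings of a dialogue prompt, and the projection trained jointly through the curriculum stages; the sketch only shows the modality-bridging step.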
