Action-Aware Generative Sequence Modeling for Short Video Recommendation

arXiv cs.AI / 4/29/2026

Key Points

  • The paper argues that conventional short-video recommender models, which treat each video as a single holistic item with a binary label, struggle to capture how users' attitudes differ across the diverse segments within a video as consumption unfolds over time.
  • It proposes Action-Aware Generative Sequence Network (A2Gen), modeling user consumption as a temporal process where the timing and patterns of user actions reveal different intentions.
  • A2Gen uses a Context-aware Attention Module (CAM) to incorporate item-specific contextual features, a Hierarchical Sequence Encoder (HSE) to learn temporal action patterns from a user's history, and an Action-seq Autoregressive Generator (AAG) to generate action sequences (a hedged sketch of the core idea follows this list).
  • Experiments on Kuaishou and Tmall datasets show the approach outperforms prior methods, and large-scale Kuaishou A/B tests report significant gains in watch time, interaction rate, and 7-day retention, leading to full-traffic deployment serving 400M+ daily users.
  • Overall, the work demonstrates that action-timing-aware generative sequential modeling can improve multi-task short-video recommendation in both offline and online settings.
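
The paper's code is not included in this digest, so the following PyTorch sketch is only an illustration of the idea behind CAM: attending over a user's timestamped action sequence with a query built from item-specific context. Every name, dimension, and the action vocabulary here are assumptions for exposition, not A2Gen's actual implementation.

```python
import torch
import torch.nn as nn

# Hedged sketch: the digest does not include code, so the class below
# (ContextAwareAttention) and all of its names/shapes are illustrative
# assumptions, not A2Gen's actual CAM implementation.

class ContextAwareAttention(nn.Module):
    """Toy stand-in for CAM: attention over a timestamped action sequence,
    with item-specific context injected into the query."""

    def __init__(self, d_model: int, num_action_types: int, n_heads: int = 4):
        super().__init__()
        self.action_emb = nn.Embedding(num_action_types, d_model)  # action type -> vector
        self.time_proj = nn.Linear(1, d_model)                     # action timing -> vector
        self.context_proj = nn.Linear(d_model, d_model)            # item context -> query
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, action_ids, action_times, item_context):
        # action_ids:   (B, T) integer action types (e.g. play, like, skip)
        # action_times: (B, T) float timestamps/offsets within the video
        # item_context: (B, d_model) item-specific contextual features
        tokens = self.action_emb(action_ids) + self.time_proj(action_times.unsqueeze(-1))
        query = self.context_proj(item_context).unsqueeze(1)       # (B, 1, d_model)
        out, _ = self.attn(query, tokens, tokens)
        return out.squeeze(1)  # (B, d_model) context-aware sequence summary


# Usage: a batch of 2 users with 5 timestamped actions each.
model = ContextAwareAttention(d_model=32, num_action_types=8)
ids = torch.randint(0, 8, (2, 5))
times = torch.rand(2, 5)
ctx = torch.randn(2, 32)
print(model(ids, times, ctx).shape)  # torch.Size([2, 32])
```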

Abstract

With the rapid development of the Internet, users have increasingly high expectations for the recommendation accuracy of online content-consumption platforms. However, short videos often contain diverse segments, and users may not hold the same attitude toward all of them. Traditional binary-classification recommendation models, which treat a video as a single holistic entity, are limited in capturing such nuanced preferences. Observing that user consumption is a temporal process, this paper demonstrates, through statistical analysis and examination of action patterns, that the timing of user actions can represent diverse intentions. Based on this insight, we propose a novel modeling paradigm: the Action-Aware Generative Sequence Network (A2Gen), which refines user actions along the temporal dimension and chains them into sequences for unified processing and prediction. First, we introduce the Context-aware Attention Module (CAM) to model action sequences enriched with item-specific contextual features. Building upon this, we develop the Hierarchical Sequence Encoder (HSE) to learn temporal action patterns from users' historical actions. Finally, leveraging CAM, we design a module for action-sequence generation: the Action-seq Autoregressive Generator (AAG). Extensive offline experiments on Kuaishou's dataset and the public Tmall dataset demonstrate the superiority of the proposed model. Furthermore, in large-scale online A/B tests on Kuaishou's platform, our model achieves significant improvements over baseline methods in multi-task prediction by leveraging sequential information: it yields increases of 0.34% in user watch time, 8.1% in interaction rate, and 0.162% in overall user retention (LifeTime-7). The model has been deployed across all traffic, serving over 400 million users every day.
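
To make the generative component concrete, below is a minimal, hedged sketch of autoregressive action-sequence generation in the spirit of the AAG: a small causal Transformer that predicts the next action token from the observed prefix. The BOS token, greedy decoding loop, and all hyperparameters are illustrative assumptions; the paper's generator builds on CAM, and its exact decoding procedure is not described in the abstract.

```python
import torch
import torch.nn as nn

# Hedged sketch of autoregressive action-sequence generation in the spirit
# of AAG. The BOS token, greedy decoding, and every hyperparameter are
# assumptions for illustration; they are not the paper's implementation.

class ActionSeqGenerator(nn.Module):
    def __init__(self, num_actions: int, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.emb = nn.Embedding(num_actions + 1, d_model)  # +1 for a BOS token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_actions)

    def forward(self, prefix):
        # prefix: (B, T) action-token ids; a causal mask keeps the model
        # autoregressive (each position only attends to earlier actions).
        T = prefix.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.decoder(self.emb(prefix), mask=causal)
        return self.head(h)  # (B, T, num_actions) next-action logits

    @torch.no_grad()
    def generate(self, prefix, steps: int):
        # Greedy decoding: repeatedly append the most likely next action.
        seq = prefix
        for _ in range(steps):
            next_action = self.forward(seq)[:, -1].argmax(-1, keepdim=True)
            seq = torch.cat([seq, next_action], dim=1)
        return seq


num_actions = 8
gen = ActionSeqGenerator(num_actions).eval()
bos = torch.full((2, 1), num_actions)    # start both sequences with BOS
print(gen.generate(bos, steps=5).shape)  # torch.Size([2, 6])
```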