AI Navigate

Collaborative Temporal Feature Generation via Critic-Free Reinforcement Learning for Cross-User Sensor-Based Activity Recognition

arXiv cs.LG / 3/18/2026


Key Points

  • The paper tackles cross-user variability in wearable-sensor Human Activity Recognition and proposes a collaborative temporal feature generation framework (CTFG) that uses a Transformer-based autoregressive generator.
  • It introduces a critic-free Group-Relative Policy Optimization algorithm to evaluate each generated feature sequence against alternatives sampled from the same input, avoiding critic-based value estimation.
  • A tri-objective reward comprising class discrimination, cross-user invariance, and temporal fidelity guides the feature space to be discriminative, user-agnostic, and temporally faithful.
  • On DSADS and PAMAP2 benchmarks, the approach achieves state-of-the-art cross-user accuracy (88.53% and 75.22%), reduces training variance, accelerates convergence, and generalizes across varying action-space dimensionalities.
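The critic-free idea in the second point can be sketched concretely: instead of a learned value network, each sampled sequence's reward is normalized against the other sequences drawn from the same input. A minimal illustration (function and variable names are ours, not the paper's):

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """Critic-free advantage estimation in the group-relative style:
    each sampled sequence's reward is standardized within its own cohort,
    so no learned value function is needed. Illustrative sketch only."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Cohort of 4 feature sequences sampled from the same sensor input:
adv = group_relative_advantages([0.8, 0.5, 0.9, 0.4])
```

Because the baseline is the cohort's own mean, the advantages are self-calibrating: they sum to zero within each group regardless of the reward scale, which is what makes the signal stable across heterogeneous user distributions.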

Abstract

Human Activity Recognition using wearable inertial sensors is foundational to healthcare monitoring, fitness analytics, and context-aware computing, yet its deployment is hindered by cross-user variability arising from heterogeneous physiological traits, motor habits, and sensor placements. Existing domain generalization approaches either neglect temporal dependencies in sensor streams or depend on impractical target-domain annotations. We propose a different paradigm: modeling generalizable feature extraction as a collaborative sequential generation process governed by reinforcement learning. Our framework, CTFG (Collaborative Temporal Feature Generation), employs a Transformer-based autoregressive generator that incrementally constructs feature token sequences, each conditioned on prior context and the encoded sensor input. The generator is optimized via Group-Relative Policy Optimization, a critic-free algorithm that evaluates each generated sequence against a cohort of alternatives sampled from the same input, deriving advantages through intra-group normalization rather than learned value estimation. This design eliminates the distribution-dependent bias inherent in critic-based methods and provides self-calibrating optimization signals that remain stable across heterogeneous user distributions. A tri-objective reward comprising class discrimination, cross-user invariance, and temporal fidelity jointly shapes the feature space to separate activities, align user distributions, and preserve fine-grained temporal content. Evaluations on the DSADS and PAMAP2 benchmarks demonstrate state-of-the-art cross-user accuracy (88.53% and 75.22%), substantial reduction in inter-task training variance, accelerated convergence, and robust generalization under varying action-space dimensionalities.
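The tri-objective reward the abstract describes can be pictured as a scalarized combination of three terms. The proxy quantities and equal default weights below are our assumptions for illustration, not the paper's exact formulation:

```python
def tri_objective_reward(class_logprob, user_feat_gap, recon_err,
                         weights=(1.0, 1.0, 1.0)):
    """Illustrative scalarization of the three reward terms the paper names.
    - class_logprob: higher when the features let a classifier identify the
      activity (class discrimination)
    - user_feat_gap: distance between per-user feature statistics; penalized
      to encourage cross-user invariance
    - recon_err: error reconstructing the sensor window from the feature
      tokens; penalized to preserve temporal fidelity
    Proxy terms and weights are assumptions, not the paper's definitions."""
    w_cls, w_inv, w_tmp = weights
    return w_cls * class_logprob - w_inv * user_feat_gap - w_tmp * recon_err
```

Rewarding discrimination alone would let the generator encode user identity as a shortcut; penalizing the cross-user gap and the reconstruction error pushes the feature space to stay user-agnostic and temporally faithful at the same time.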