Learning Cross-Joint Attention for Generalizable Video-Based Seizure Detection

arXiv cs.CV / 3/26/2026


Key Points

  • The paper addresses a key limitation in video-based seizure detection: models often fail to generalize to new subjects due to background bias and dependence on subject-specific appearance cues.
  • It proposes a joint-centric attention approach that detects body joints, extracts joint-centered video clips to suppress background context (a cropping sketch follows this list), and then tokenizes them with a Video Vision Transformer (ViViT).
  • The model learns cross-joint attention to capture spatial-temporal interactions among body parts, aiming to represent coordinated movement patterns linked to seizure semiology.
  • Cross-subject experiments indicate the method outperforms prior CNN-, graph-, and transformer-based approaches on unseen subjects, supporting improved generalizability.
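
The joint-centered cropping step is simple to prototype. Below is a minimal sketch assuming a pose estimator supplies per-frame (x, y) coordinates for J joints; the function name, crop size, and array layout are illustrative assumptions, not details from the paper.

```python
import numpy as np

def extract_joint_clips(frames: np.ndarray, joints: np.ndarray, crop: int = 64) -> np.ndarray:
    """Crop a fixed window around each detected joint in every frame.

    frames: (T, H, W, 3) video segment
    joints: (T, J, 2) per-frame (x, y) joint coordinates from a pose estimator
    Returns (J, T, crop, crop, 3) joint-centered clips; everything outside
    the windows (i.e., the background) is discarded.
    """
    T, H, W, _ = frames.shape
    J = joints.shape[1]
    half = crop // 2
    clips = np.zeros((J, T, crop, crop, 3), dtype=frames.dtype)
    for t in range(T):
        # Pad the frame so crops near the image border stay in bounds.
        padded = np.pad(frames[t], ((half, half), (half, half), (0, 0)))
        for j in range(J):
            x = int(np.clip(joints[t, j, 0], 0, W - 1))
            y = int(np.clip(joints[t, j, 1], 0, H - 1))
            # (y, x) in the original frame maps to (y + half, x + half)
            # after padding, so this slice is centered on the joint.
            clips[j, t] = padded[y : y + crop, x : x + crop]
    return clips
```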

Abstract

Automated seizure detection from long-term clinical videos can substantially reduce manual review time and enable real-time monitoring. However, existing video-based methods often struggle to generalize to unseen subjects due to background bias and reliance on subject-specific appearance cues. We propose a joint-centric attention model that focuses exclusively on body dynamics to improve cross-subject generalization. For each video segment, body joints are detected and joint-centered clips are extracted, suppressing background context. These joint-centered clips are tokenized using a Video Vision Transformer (ViViT), and cross-joint attention is learned to model spatial and temporal interactions between body parts, capturing coordinated movement patterns characteristic of seizure semiology. Extensive cross-subject experiments show that the proposed method consistently outperforms state-of-the-art CNN-, graph-, and transformer-based approaches on unseen subjects.
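
To make the cross-joint attention idea concrete, here is a hedged PyTorch sketch: each joint-centered clip is assumed to have already been tokenized (e.g., by a ViViT encoder), and a standard transformer encoder then attends across all joint tokens jointly. The class name, dimensions, and classification head are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CrossJointAttention(nn.Module):
    """Attention over per-joint clip tokens of shape (B, J, N, D):
    batch, joints, tokens per joint clip, embedding dimension."""

    def __init__(self, dim: int = 256, heads: int = 8, depth: int = 2, num_joints: int = 17):
        super().__init__()
        # Learned joint-identity embedding so attention knows which
        # body part each token came from.
        self.joint_pos = nn.Parameter(torch.zeros(1, num_joints, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, 2)  # seizure / non-seizure logits

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        B, J, N, D = tokens.shape
        tokens = tokens + self.joint_pos
        # Flatten joints and time into one sequence so self-attention
        # spans both cross-joint and temporal interactions.
        seq = self.encoder(tokens.reshape(B, J * N, D))
        return self.head(seq.mean(dim=1))  # mean-pool tokens, then classify


# Example: 17 joints, 8 tokens per joint clip, 256-d embeddings.
model = CrossJointAttention()
logits = model(torch.randn(4, 17, 8, 256))  # -> (4, 2)
```

Flattening the joint and token axes into a single sequence lets one attention stack capture both cross-joint (spatial) and within-clip (temporal) interactions, which matches the coordinated-movement modeling the abstract describes.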