PPG-Based Affect Recognition with Long-Range Deep Models: A Measurement-Driven Comparison of CNN, Transformer, and Mamba Architectures
arXiv cs.LG / 4/30/2026
Key Points
- The paper evaluates four deep learning architectures—CNN, CNN-LSTM, Transformers, and Mamba—for classifying affect states (arousal, valence, relaxation) from wrist-based photoplethysmography (PPG) signals.
- Using identical preprocessing, segmentation, and training pipelines with subject-independent 5-fold cross-validation, the study directly compares whether long-range sequence models bring advantages over CNN/LSTM baselines on small, noisy datasets.
- Results show that Transformers and Mamba reach performance comparable to the CNN baseline, but neither consistently outperforms the CNN across the affect recognition tasks.
- Overall, the CNN is the most effective model, achieving the highest accuracy with the smallest parameter count, while Transformers offer the best balance of F1 scores on arousal and relaxation.
- The work is positioned as the first evaluation of Transformer and Mamba for PPG-based affect recognition, providing guidance for selecting models in wearable affective monitoring systems.
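The subject-independent 5-fold protocol mentioned above can be sketched as follows. This is a minimal illustration, not the paper's code: the subject IDs and segment counts are hypothetical placeholders, and the round-robin fold assignment is one simple way to guarantee that no subject's PPG segments leak between train and test splits.

```python
def subject_independent_folds(subject_ids, n_folds=5):
    """Split sample indices into folds so that all segments from a
    given subject land in exactly one test fold (no identity leakage)."""
    subjects = sorted(set(subject_ids))
    # Round-robin assignment of whole subjects to folds.
    fold_of_subject = {s: i % n_folds for i, s in enumerate(subjects)}
    folds = [[] for _ in range(n_folds)]
    for idx, s in enumerate(subject_ids):
        folds[fold_of_subject[s]].append(idx)
    return folds

# Example: 15 PPG segments from 5 hypothetical subjects (3 segments each).
labels = ["s1"] * 3 + ["s2"] * 3 + ["s3"] * 3 + ["s4"] * 3 + ["s5"] * 3
folds = subject_independent_folds(labels)
for test_idx in folds:
    train_idx = [i for i in range(len(labels)) if i not in set(test_idx)]
    # No subject may appear on both sides of the split.
    assert not {labels[i] for i in test_idx} & {labels[i] for i in train_idx}
```

Grouping by subject rather than by segment is what makes the evaluation a test of generalization to unseen wearers, which is the relevant setting for deployed wearable affect monitoring.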