Better Models, Faster Training: Sigmoid Attention for single-cell Foundation Models

arXiv cs.LG / 5/1/2026


Key Points

  • The paper argues that sigmoid attention can replace softmax attention in single-cell biological foundation model training to yield better learned representations, including ~25% higher cell-type separation and improved cohesion metrics across six datasets (a minimal sketch of the drop-in swap follows this list).
  • It reports faster and more stable training with sigmoid attention, attributing stability to bounded derivatives (≤ 0.25) and a more favorable diagonal-Jacobian structure versus softmax’s dense coupling.
  • In large-scale stress tests on 160M-parameter bidirectional attention models trained without gradient clipping on 8K-token sequences, softmax diverged catastrophically with gradients exploding by four orders of magnitude, while sigmoid stayed stable.
  • The authors release TritonSigmoid, an open-source, efficient GPU implementation that they report reaches 515 TFLOPS on H100 GPUs, supports padding natively, and outperforms both FlashAttention-2 and FlashSigmoid.
  • Overall, the work presents sigmoid attention as both theoretically motivated and empirically superior for biological foundation models, with code publicly available on GitHub.
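
To make the "drop-in replacement" concrete, the sketch below contrasts the two variants in plain PyTorch: softmax normalizes each query's scores into a probability distribution over all keys, whereas sigmoid attention squashes each score independently through an element-wise sigmoid. This is only an illustrative sketch, not the paper's TritonSigmoid kernel; the tensor shapes, function names, and the constant bias b = -log(seq_len) (a stabilizer used in earlier sigmoid-attention work such as FlashSigmoid) are assumptions for the example.

```python
# Minimal sketch of softmax vs. sigmoid attention (assumed shapes and bias;
# NOT the paper's TritonSigmoid kernel).
import math
import torch

def softmax_attention(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)   # each row sums to 1: weights are coupled across keys
    return weights @ v

def sigmoid_attention(q, k, v, bias=None):
    # Drop-in replacement: element-wise sigmoid instead of a row-wise softmax.
    if bias is None:
        bias = -math.log(k.size(-2))          # -log(seq_len) bias, an assumption borrowed from prior sigmoid-attention work
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.sigmoid(scores + bias)    # each entry squashed independently of the others
    return weights @ v

q = k = v = torch.randn(1, 4, 128, 32)
out_soft = softmax_attention(q, k, v)
out_sig = sigmoid_attention(q, k, v)
print(out_soft.shape, out_sig.shape)          # both: torch.Size([1, 4, 128, 32])
```

Because the only change is the normalization of the score matrix, the surrounding transformer code (projections, masking, output projection) can stay untouched, which is what makes the swap attractive for existing training pipelines.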

Abstract

Training stable biological foundation models requires rethinking attention mechanisms: we find that using sigmoid attention as a drop-in replacement for softmax attention (a) produces better learned representations: on six diverse single-cell datasets, sigmoid achieves 25% higher cell-type separation, better cell-type cohesion metrics, and lower validation loss; (b) trains faster: models with sigmoid attention train up to 10% faster than their softmax counterparts; and (c) trains more stably by eliminating inherent sources of instability in softmax attention. We establish that sigmoid attention has globally bounded derivatives (≤ 0.25), unlike softmax, and a diagonal Jacobian structure in contrast with softmax's dense coupling, which together help alleviate training instabilities. In stress tests on 160M-parameter bidirectional attention models trained without gradient clipping on 8K-token sequences, softmax diverges catastrophically, with gradients exploding by four orders of magnitude, while sigmoid remains stable. Finally, we implement and open-source TritonSigmoid, an efficient GPU kernel that achieves 515 TFLOPS on H100 GPUs, outperforming both FlashAttention-2 and FlashSigmoid, with native padding support, which is essential for biological sequences. Our results establish sigmoid attention as both theoretically grounded and empirically superior for biological foundation models. Code is available at https://github.com/MSDLLCpapers/triton-sigmoid
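
The derivative bound and Jacobian contrast cited in the abstract follow from standard properties of the two functions; a short derivation is sketched below (the notation is chosen for this summary, not taken from the paper).

```latex
% Element-wise sigmoid: each attention weight depends only on its own score s_{ij},
% and the derivative is globally bounded by 1/4.
\[
  \sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
  \sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr) \le \tfrac{1}{4}, \qquad
  \frac{\partial\,\sigma(s_{ij})}{\partial s_{kl}} = \sigma'(s_{ij})\,\delta_{ik}\,\delta_{jl}.
\]
% Row-wise softmax couples every weight in a row to every score in that row,
% giving a dense (non-diagonal) Jacobian whose entries can jointly amplify gradients:
\[
  p_i = \frac{e^{s_i}}{\sum_j e^{s_j}}, \qquad
  \frac{\partial p_i}{\partial s_j} = p_i\,(\delta_{ij} - p_j).
\]
```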