Higher-Order Modular Attention: Fusing Pairwise and Triadic Interactions for Protein Sequences
arXiv cs.LG / 3/13/2026
📰 News · Models & Research
Key Points
- The paper proposes Higher-Order Modular Attention (HOMA), a unified attention operator that fuses pairwise attention with a triadic interaction pathway for protein sequences.
- To maintain scalability on long sequences, HOMA uses block-structured, windowed triadic attention (see the sketch after this list).
- It is evaluated on three TAPE benchmarks (Secondary Structure, Fluorescence, and Stability) and shows consistent improvements over standard self-attention and other efficient variants.
- The results suggest that explicit triadic terms provide complementary representations for protein sequence prediction at a controllable additional computational cost.
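
The digest does not reproduce the paper's equations, so the following is a minimal PyTorch sketch of the general pattern the key points describe: a standard pairwise attention pathway summed with a triadic pathway in which each query scores pairs of keys inside a local window. The function names, the elementwise-product triadic score, the averaged pair values, and the fixed mixing weight `alpha` are all assumptions for illustration, not the paper's actual operator.

```python
import torch
import torch.nn.functional as F

def pairwise_attention(q, k, v):
    """Standard scaled dot-product attention over (B, T, D) tensors."""
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5     # (B, T, T)
    return F.softmax(scores, dim=-1) @ v                     # (B, T, D)

def windowed_triadic_attention(q, k, v, w=4):
    """Each query attends to PAIRS of keys inside a local window of
    radius w, bounding cost at O(T * w^2) instead of O(T^3)."""
    B, T, D = q.shape
    W = 2 * w + 1
    kp = F.pad(k, (0, 0, w, w))                              # (B, T+2w, D)
    vp = F.pad(v, (0, 0, w, w))
    # window indices: position i sees padded positions i .. i+2w
    idx = torch.arange(T, device=q.device).unsqueeze(1) \
        + torch.arange(W, device=q.device)                   # (T, W)
    kw, vw = kp[:, idx], vp[:, idx]                          # (B, T, W, D)
    # assumed triadic score: q_i . (k_j * k_l), elementwise key product
    kpair = kw.unsqueeze(3) * kw.unsqueeze(2)                # (B, T, W, W, D)
    scores = torch.einsum('btd,btjld->btjl', q, kpair) / D ** 0.5
    # mask key pairs that touch padding so they get zero attention weight
    valid = F.pad(torch.ones(T, device=q.device), (w, w))[idx]      # (T, W)
    pair_ok = (valid.unsqueeze(2) * valid.unsqueeze(1)).bool()      # (T, W, W)
    scores = scores.masked_fill(~pair_ok, float('-inf'))
    attn = F.softmax(scores.flatten(2), dim=-1).view(B, T, W, W)
    # assumed pair value: average of the two member values
    vpair = 0.5 * (vw.unsqueeze(3) + vw.unsqueeze(2))        # (B, T, W, W, D)
    return torch.einsum('btjl,btjld->btd', attn, vpair)

def homa_block(q, k, v, w=4, alpha=0.5):
    """Fuse the two pathways; a fixed mixing weight stands in for
    whatever learned gate or fusion the paper actually uses."""
    return pairwise_attention(q, k, v) + alpha * windowed_triadic_attention(q, k, v, w)

# usage
q = k = v = torch.randn(2, 64, 32)                           # (batch, length, dim)
print(homa_block(q, k, v).shape)                             # torch.Size([2, 64, 32])
```

The windowing is what makes the triadic term tractable: full triadic attention would score all O(T^3) position triples, while restricting each query to pairs drawn from a local window caps the extra work at O(T · w^2) and makes the added cost controllable via w.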
Related Articles

Rapidus: semiconductor design AI agent "in use by two domestic companies and one overseas company"
Nikkei XTECH

Superposition and the Capsule: Quantum State Collapse Meets AI Identity
Dev.to

The Basilisk Inversion: Why Coercive AI Futures Are Thermodynamically Unlikely
Dev.to

The Loop as Laboratory: What 3,190 Cycles of Autonomous AI Operation Reveal
Dev.to

MiMo-V2-Pro & Omni & TTS: "We will open-source — when the models are stable enough to deserve it."
Reddit r/LocalLLaMA