Higher-Order Modular Attention: Fusing Pairwise and Triadic Interactions for Protein Sequences
arXiv cs.LG / 3/13/2026
📰 News · Models & Research
Key Points
- The paper proposes Higher-Order Modular Attention (HOMA), a unified attention operator that fuses pairwise attention with a triadic interaction pathway for protein sequences.
- To keep the cost tractable on long sequences, HOMA restricts the triadic pathway to block-structured, windowed attention (see the sketch after this list).
- It is evaluated on three TAPE benchmarks—Secondary Structure, Fluorescence, and Stability—and shows consistent improvements over standard self-attention and other efficient variants.
- The results suggest that explicit triadic terms provide complementary representations for protein sequence prediction with controllable additional computational cost.
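The paper's exact formulation is not spelled out in this summary, so the following is a minimal sketch of what a layer along these lines could look like, assuming a standard pairwise self-attention pathway fused by a learned gate with a windowed triadic pathway that scores (query, key, key) triples inside fixed-size blocks via an elementwise trilinear form. The module and parameter names (`HOMABlockSketch`, `window_size`, the gate) are illustrative, not taken from the paper.

```python
# A hypothetical sketch of a HOMA-style layer: pairwise attention plus a
# block-windowed triadic pathway, fused by a learned gate. Assumptions, not
# the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HOMABlockSketch(nn.Module):
    def __init__(self, d_model: int, n_heads: int, window_size: int = 16):
        super().__init__()
        self.pairwise = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.gate = nn.Linear(2 * d_model, d_model)
        self.window_size = window_size

    def triadic(self, x: torch.Tensor) -> torch.Tensor:
        # Block-structured triadic attention: each query attends to pairs of
        # positions (j, k) inside its own window, so the cost is O(n * w^2 * d)
        # rather than the O(n^3 * d) of an unrestricted third-order operator.
        b, n, d = x.shape
        w = self.window_size
        pad = (-n) % w
        xp = F.pad(x, (0, 0, 0, pad))                      # pad length to a multiple of w
        q = self.q(xp).view(b, -1, w, d)                   # (batch, blocks, w, d)
        k = self.k(xp).view(b, -1, w, d)
        v = self.v(xp).view(b, -1, w, d)
        # Trilinear score for a triple (i, j, k): <q_i, k_j * k_k> (elementwise product).
        kk = torch.einsum("bgjd,bgkd->bgjkd", k, k)        # (batch, blocks, w, w, d)
        scores = torch.einsum("bgid,bgjkd->bgijk", q, kk) / d ** 0.5
        attn = scores.flatten(-2).softmax(dim=-1).view_as(scores)  # normalize over (j, k) pairs
        # Aggregate a symmetric pair value, here the elementwise product v_j * v_k.
        vv = torch.einsum("bgjd,bgkd->bgjkd", v, v)
        out = torch.einsum("bgijk,bgjkd->bgid", attn, vv)
        return out.reshape(b, -1, d)[:, :n]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pair_out, _ = self.pairwise(x, x, x)               # standard pairwise self-attention
        tri_out = self.triadic(x)                          # windowed triadic pathway
        g = torch.sigmoid(self.gate(torch.cat([pair_out, tri_out], dim=-1)))
        return g * pair_out + (1 - g) * tri_out            # gated fusion of the two pathways


# Usage: a batch of 2 protein embeddings of length 50 with d_model = 64.
layer = HOMABlockSketch(d_model=64, n_heads=4, window_size=10)
print(layer(torch.randn(2, 50, 64)).shape)                 # torch.Size([2, 50, 64])
```

With window size w, the triadic pathway in this sketch costs O(n · w² · d) per sequence instead of the O(n³ · d) of a full third-order operator, which is the sense in which the extra compute stays controllable; the actual windowing and fusion scheme in the paper may differ.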