
Higher-Order Modular Attention: Fusing Pairwise and Triadic Interactions for Protein Sequences

arXiv cs.LG / 3/13/2026


Key Points

  • The paper proposes Higher-Order Modular Attention (HOMA), a unified attention operator that fuses pairwise attention with a triadic interaction pathway for protein sequences.
  • To maintain scalability on long sequences, HOMA uses block-structured, windowed triadic attention.
  • It is evaluated on three TAPE benchmarks (Secondary Structure, Fluorescence, and Stability) and shows consistent improvements over standard self-attention and other efficient variants.
  • The results suggest that explicit triadic terms provide complementary representations for protein sequence prediction with controllable additional computational cost.

Abstract

Transformer self-attention computes pairwise token interactions, yet protein sequence-to-phenotype relationships often involve cooperative dependencies among three or more residues that dot-product attention does not capture explicitly. We introduce Higher-Order Modular Attention (HOMA), a unified attention operator that fuses pairwise attention with an explicit triadic interaction pathway. To make triadic attention practical on long sequences, HOMA employs block-structured, windowed triadic attention. We evaluate on three TAPE benchmarks: Secondary Structure, Fluorescence, and Stability. Our attention mechanism yields consistent improvements across all tasks compared with standard self-attention and efficient variants including block-wise attention and Linformer. These results suggest that explicit triadic terms provide complementary representational capacity for protein sequence prediction at controllable additional computational cost.
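The abstract describes fusing a standard pairwise pathway with a windowed triadic pathway but does not spell out the operator. The sketch below is a minimal, single-head illustration of the general idea, not the paper's actual formulation: the triadic score `q_i · (k_j ⊙ k_k)` over pairs inside a local window, the elementwise-product value `v_j ⊙ v_k`, the fixed mixing weight `alpha`, and the function name `homa_window` are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def homa_window(q, k, v, window=4, alpha=0.5):
    """Hypothetical sketch of pairwise attention fused with windowed triadic attention.

    q, k, v: (L, d) arrays for a single head.
    window: half-width of the local window for the triadic pathway.
    alpha: fixed mixing weight (would be learned in a real model).
    """
    L, d = q.shape

    # Pairwise pathway: ordinary scaled dot-product attention over the full sequence.
    pair = softmax(q @ k.T / np.sqrt(d)) @ v

    # Triadic pathway: each query i attends jointly to key pairs (j, k)
    # restricted to its local window, keeping the cost O(L * w^2) instead of O(L^3).
    tri = np.zeros_like(v)
    for i in range(L):
        lo, hi = max(0, i - window), min(L, i + window + 1)
        kw, vw = k[lo:hi], v[lo:hi]                      # (w, d) each
        # One plausible triple score: q_i · (k_j * k_k), normalized by d.
        scores = np.einsum('d,jd,kd->jk', q[i], kw, kw) / d
        w = softmax(scores.reshape(-1)).reshape(scores.shape)
        # Value for a pair (j, k): elementwise product v_j * v_k.
        tri[i] = np.einsum('jk,jd,kd->d', w, vw, vw)

    # Fuse the two pathways.
    return pair + alpha * tri
```

Setting `window` small keeps the triadic term's quadratic-in-window cost controllable, which is the scalability point the paper's block-structured variant is addressing; with `alpha=0` the operator reduces to standard self-attention.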