FEAT: A Linear-Complexity Foundation Model for Extremely Large Structured Data

arXiv cs.LG / March 18, 2026


Key Points

  • FEAT is a new linear-complexity foundation model designed for extremely large structured data across domains such as healthcare, finance, e-commerce, and scientific data management.
  • It replaces quadratic self-attention with a hybrid linear encoding in a multi-layer dual-axis architecture, combining adaptive-fusion bi-Mamba-2 for local dependencies and convolutional gated linear attention for global memory.
  • The model uses a hybrid structural causal model pipeline and a stable reconstruction objective to improve robustness beyond synthetic-only pre-training.
  • In experiments on 11 real-world datasets, FEAT outperforms baselines in zero-shot performance, scales linearly, and delivers up to 40x faster inference.

Abstract

Structured data is foundational to healthcare, finance, e-commerce, and scientific data management. Large structured-data models (LDMs) extend the foundation model paradigm to unify heterogeneous datasets for tasks such as classification, regression, and decision support. However, existing LDMs face major limitations. First, most rely on sample-wise self-attention, whose O(N^2) complexity limits the sample count. Second, linear sequence models often degrade representations due to hidden-state compression and artificial causal bias. Third, synthetic-only pre-training often fails to match real-world distributions. We propose FEAT, a linear-complexity foundation model for extremely large structured data. FEAT introduces a multi-layer dual-axis architecture that replaces quadratic attention with hybrid linear encoding. The architecture combines adaptive-fusion bi-Mamba-2 (AFBM) for local sample dependencies and convolutional gated linear attention (Conv-GLA) for global memory. This design enables linear-complexity cross-sample modeling while preserving expressive representations. To improve robustness, FEAT adopts a hybrid structural causal model pipeline and a stable reconstruction objective. Experiments on 11 real-world datasets show that FEAT consistently outperforms baselines in zero-shot performance, while scaling linearly and achieving up to 40x faster inference.
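To see why replacing softmax attention with a gated linear recurrence yields O(N) cost, consider a toy version of gated linear attention: instead of materializing the N x N score matrix, a single d x d key-value memory is updated step by step with a decay gate. This is a minimal sketch of the general technique, not FEAT's actual Conv-GLA module; the function name, gating scheme, and dimensions are illustrative assumptions.

```python
import numpy as np

def gated_linear_attention(Q, K, V, gates):
    """Toy gated linear attention: O(N * d^2) instead of O(N^2 * d).

    Rather than computing softmax(Q K^T) V, each step folds the
    current key-value pair into a d x d running state S with a
    scalar decay gate, then reads the state with the query.
    Hypothetical sketch -- not the architecture from the paper.
    """
    N, d = Q.shape
    S = np.zeros((d, d))                          # running key-value memory
    out = np.empty_like(V)
    for t in range(N):
        S = gates[t] * S + np.outer(K[t], V[t])   # decayed state update
        out[t] = Q[t] @ S                         # read global memory
    return out

rng = np.random.default_rng(0)
N, d = 1024, 16
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
gates = rng.uniform(0.9, 1.0, size=N)             # per-step forgetting factor
O = gated_linear_attention(Q, K, V, gates)
print(O.shape)  # (1024, 16)
```

Doubling N doubles the loop length but leaves the per-step cost fixed at O(d^2), which is the linear-scaling property the abstract claims; with all gates set to 1 the recurrence reduces to plain (unnormalized) linear attention over the prefix.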