On associative neural networks for sparse patterns with huge capacities

arXiv cs.LG / March 30, 2026


Key Points

  • The paper studies higher-order generalizations of Hopfield-style associative memories, which use interaction terms beyond the classical quadratic form to boost storage capacity (a minimal retrieval sketch follows this list).
  • It combines this higher-order mechanism with associative memories for sparse patterns (e.g., the Willshaw and Amari models), analyzing how capacity scales with system size for both fixed and growing interaction order.
  • For a fixed interaction order n, the authors derive storage capacities that grow polynomially with the number of neurons, improving on the classical quadratic model.
  • Allowing the interaction order to grow logarithmically with the number of neurons yields super-polynomial storage capacities.
  • The work also provides an analogue for the Gripon–Berrou architecture, showing that the capacity gains from higher-order interactions persist in the sparse setting, with the precise scaling depending on the underlying architecture.
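
The paper's concrete sparse models are not reproduced here, but a minimal NumPy sketch of a generic higher-order (dense) associative memory illustrates the mechanism the first bullet describes. All names are ours, and the energy choice F(z) = max(z, 0)^n is one standard rectified-polynomial option (in the spirit of Krotov–Hopfield dense associative memories), not necessarily the form used in the paper.

```python
import numpy as np

def dam_recall(patterns, probe, n=3, sweeps=5, seed=None):
    """Retrieve a stored pattern from a noisy probe in an order-n
    associative memory with energy E(x) = -sum_mu F(<xi_mu, x>),
    where F(z) = max(z, 0)**n (a rectified polynomial; illustrative choice).

    patterns : (P, N) array of +/-1 stored patterns
    probe    : (N,) array of +/-1, a corrupted cue
    """
    rng = np.random.default_rng(seed)
    F = lambda z: np.maximum(z, 0.0) ** n
    x = probe.astype(float).copy()
    N = x.size
    for _ in range(sweeps):
        for i in rng.permutation(N):
            # Overlap of every pattern with x, with neuron i's term removed,
            # then compare energies of the two candidate states x_i = +/-1
            # and keep whichever sign lowers the energy.
            partial = patterns @ x - patterns[:, i] * x[i]
            drive = np.sum(F(partial + patterns[:, i])
                           - F(partial - patterns[:, i]))
            x[i] = 1.0 if drive >= 0 else -1.0
    return x.astype(int)

# Toy usage: store random patterns, corrupt one, try to recover it.
rng = np.random.default_rng(0)
P, N = 20, 100
patterns = rng.choice([-1, 1], size=(P, N))
cue = patterns[0].copy()
cue[:15] *= -1                      # flip 15 of the 100 bits
out = dam_recall(patterns, cue, n=3)
print((out == patterns[0]).mean())  # fraction of correctly recovered bits
```

With n = 2 this reduces (up to the rectification) to the classical quadratic Hopfield update; raising n sharpens the energy landscape around each stored pattern, which is what permits storing far more patterns.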

Abstract

Generalized Hopfield models with higher-order or exponential interaction terms are known to have substantially larger storage capacities than the classical quadratic model. On the other hand, associative memories for sparse patterns, such as the Willshaw and Amari models, already outperform the classical Hopfield model in the sparse regime. In this paper we combine these two mechanisms. We introduce higher-order versions of sparse associative memory models and study their storage capacities. For fixed interaction order n, we obtain storage capacities of polynomial order in the system size. When the interaction order is allowed to grow logarithmically with the number of neurons, this yields super-polynomial capacities. We also discuss an analogue in the Gripon–Berrou architecture, which was originally formulated for non-sparse messages (see Gripon and Berrou). Our results show that the capacity increase caused by higher-order interactions persists in the sparse setting, although the precise storage scale depends on the underlying architecture.
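
The jump from polynomial to super-polynomial capacity can be made concrete with a back-of-the-envelope computation. As a stylized illustration, take the standard scaling for order-n Hopfield-type models, where capacity grows like C(N) ~ N^{n-1} up to logarithmic factors (the paper's exact exponents for the sparse models will differ), and let the order grow as n = c log N:

```latex
C(N) \sim N^{\,n-1}
     = \exp\bigl((n-1)\log N\bigr)
     \;\xrightarrow{\,n \,=\, c\log N\,}\;
     \exp\bigl(c(\log N)^2 - \log N\bigr).
```

Since c(log N)^2 eventually dominates k log N for any fixed k, this expression outgrows every fixed power N^k, which is exactly the sense in which a logarithmically growing interaction order produces super-polynomial storage.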