Causal Learning with Neural Assemblies

arXiv cs.LG · April 30, 2026


Key Points

  • The paper investigates whether neural assemblies can learn the direction of causal influence between variables, extending their established role in classification, parsing, and planning.
  • It proposes DIRECT (DIRectional Edge Coupling/Training), which co-activates source and target assemblies using an adaptive gain schedule to internalize directed relations.
  • The method depends only on local mechanisms within neural assemblies (projection, local plasticity control, and sparse winner selection) rather than backpropagation-based training.
  • The authors validate directional causal learning using dual readouts: measuring synaptic-strength asymmetry and quantifying functional propagation overlap.
  • Results across multiple domains show perfect structural recovery in a supervised setting with known ground-truth structure, positioning neural assemblies as an “explainable by design” bridge to formal causal models.
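The co-activation mechanism the key points describe can be sketched in a few lines. Everything concrete below is an illustrative assumption rather than the paper's actual DIRECT implementation: the multiplicative Hebbian update, the linear gain ramp standing in for the adaptive gain schedule, the weaker reverse coupling, and the sizes `n` and `k` are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 10                        # neurons per area, assembly size (assumed)
W_fwd = rng.random((n, n)) * 0.01     # source -> target synapses
W_rev = rng.random((n, n)) * 0.01     # target -> source synapses

src = rng.choice(n, k, replace=False)  # source assembly (cause)
tgt = rng.choice(n, k, replace=False)  # target assembly (effect)

for t in range(40):
    beta = 0.05 * (1 + t / 20)                 # placeholder adaptive gain ramp
    # Local Hebbian rule: co-activation multiplicatively strengthens the
    # forward links; the reverse links see weaker co-activation (assumption),
    # so a directional weight gap emerges without any backpropagation.
    W_fwd[np.ix_(tgt, src)] *= (1 + beta)
    W_rev[np.ix_(src, tgt)] *= (1 + beta / 4)

# First readout: synaptic-strength asymmetry between forward and reverse links
gap = W_fwd[np.ix_(tgt, src)].mean() - W_rev[np.ix_(src, tgt)].mean()
print(f"forward-reverse weight gap: {gap:.4f}")
```

A positive gap is the emergent signature of directionality: only local, per-synapse updates were applied, so the direction claim is traceable to which specific weights grew.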

Abstract

Can Neural Assemblies -- groups of neurons that fire together and strengthen through co-activation -- learn the direction of causal influence between variables? While established as a computationally general substrate for classification, parsing, and planning, neural assemblies have not yet been shown to internalize causal directionality. We demonstrate that the inherent operations of neural assemblies -- projection, local plasticity control, and sparse winner selection -- are sufficient for directional learning. We introduce DIRECT (DIRectional Edge Coupling/Training), a mechanism that co-activates source and target assemblies under an adaptive gain schedule to internalize directed relations. Unlike backpropagation-based methods, DIRECT relies solely on local plasticity, making the resulting causal claims auditable at the mechanism level. Our findings are verified through a dual-readout validation strategy: (i) synaptic-strength asymmetry, measuring the emergent weight gap between forward and reverse links, and (ii) functional propagation overlap, quantifying the reliability of directional signal flow. Across multiple domains, the framework achieves perfect structural recovery under a supervised, known-structure setting. These results establish neural assemblies as an auditable bridge between biologically plausible dynamics and formal causal models, offering an "explainable by design" framework where causal claims are traceable to specific neural winners and synaptic asymmetries.
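The second readout in the abstract's dual-readout strategy, functional propagation overlap, can be illustrated with a single projection step: stimulate the source assembly, let activity flow through the weights under sparse winner selection, and measure how much of the target assembly fires. The pre-strengthened weights (a hand-applied boost standing in for training), the top-k winner rule, and all sizes are assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 10                           # neurons per area, assembly size (assumed)
W = rng.random((n, n)) * 0.01            # source-area -> target-area synapses

src = rng.choice(n, k, replace=False)    # source assembly
tgt = rng.choice(n, k, replace=False)    # target assembly

# Stand-in for a trained network: forward links src -> tgt already strengthened
W[np.ix_(tgt, src)] *= 50.0

def project(W, active, k):
    """One projection step with sparse winner selection (top-k fire)."""
    x = np.zeros(n)
    x[active] = 1.0                      # fire the given assembly
    drive = W @ x                        # summed synaptic input per neuron
    return set(np.argsort(drive)[-k:])   # only the k most-driven neurons fire

fired = project(W, src, k)
overlap = len(fired & set(tgt)) / k      # functional propagation overlap
print(f"propagation overlap: {overlap:.2f}")
```

High overlap means stimulating the cause reliably recruits the effect's assembly; running the same probe in the reverse direction would score low, giving a functional (not just anatomical) certificate of the learned direction.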