Causal Learning with Neural Assemblies
arXiv cs.LG, April 30, 2026
Key Points
- The paper investigates whether neural assemblies can learn the direction of causal influence between variables, extending their known role in classification, planning, and other tasks.
- It proposes DIRECT (DIRectional Edge Coupling/Training), which co-activates source and target assemblies using an adaptive gain schedule to internalize directed relations.
- The method depends only on local mechanisms within neural assemblies (projection, local plasticity control, and sparse winner selection) rather than backpropagation-based training.
- The authors validate directional causal learning using dual readouts: measuring synaptic-strength asymmetry and quantifying functional propagation overlap.
- Results across multiple domains show perfect structural recovery in a supervised setting with known ground-truth structure, positioning neural assemblies as an “explainable by design” bridge to formal causal models.
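The mechanism the key points describe can be illustrated with a toy sketch. This is not the authors' implementation: the adaptive gain schedule is collapsed to a constant Hebbian rate `BETA`, areas are random weight matrices, and all names (`top_k`, `cause`, `effect`) are hypothetical. It shows how directed co-activation plus purely local updates (projection, Hebbian plasticity, sparse winner selection) produces a measurable asymmetry between forward and backward synapses, which the dual readouts then pick up.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 200, 20     # neurons per area, winners per step (sparse cap)
BETA = 0.1         # Hebbian rate; stand-in for the paper's adaptive gain schedule

# Random initial inter-area weights in both directions.
W_xy = rng.uniform(0.0, 1.0, size=(N, N))   # area X -> area Y
W_yx = rng.uniform(0.0, 1.0, size=(N, N))   # area Y -> area X

def top_k(drive, k=K):
    """Winner-take-all: indices of the k most strongly driven neurons."""
    return np.argsort(drive)[-k:]

cause = rng.choice(N, size=K, replace=False)  # fixed 'cause' assembly in X

# Directed co-activation: repeatedly drive the cause assembly, project into Y,
# and apply local Hebbian updates only to the forward (X -> Y) synapses.
for _ in range(30):
    drive = W_xy[cause].sum(axis=0)               # input each Y neuron receives
    winners = top_k(drive)                        # sparse winner selection in Y
    W_xy[np.ix_(cause, winners)] *= (1.0 + BETA)  # local plasticity on co-active pairs

effect = top_k(W_xy[cause].sum(axis=0))           # learned 'effect' assembly in Y

# Readout 1: synaptic-strength asymmetry between the coupled assemblies.
fwd = W_xy[np.ix_(cause, effect)].mean()
bwd = W_yx[np.ix_(effect, cause)].mean()
asymmetry = fwd - bwd
direction = "X->Y" if asymmetry > 0 else "Y->X"

# Readout 2: functional propagation overlap. Forward projection recovers the
# effect assembly (trivially 1.0 in this toy, since 'effect' is defined by that
# projection); backward projection recovers the cause only at chance level.
fwd_overlap = len(set(top_k(W_xy[cause].sum(axis=0))) & set(effect)) / K
bwd_overlap = len(set(top_k(W_yx[effect].sum(axis=0))) & set(cause)) / K

print(direction, round(asymmetry, 3), fwd_overlap, bwd_overlap)
```

Both readouts agree on the X→Y direction: the trained forward weights dominate the untouched backward weights, and forward propagation reproduces the target assembly far better than the reverse. The real method additionally handles unknown structure and schedules the gain adaptively, which this sketch omits.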