Sequential Learning and Catastrophic Forgetting in Differentiable Resistor Networks
arXiv cs.LG / May 5, 2026
Key Points
- The paper studies sequential learning in differentiable resistor networks where trainable edge conductances are optimized under Kirchhoff’s-law equilibrium constraints using gradient-based methods.
- It finds that training on conflicting tasks leads to catastrophic forgetting even though each individual input–output mapping can be learned successfully.
- The authors show that forgetting depends on both the degree of task conflict and how strongly the system adapts to the new task, and that anchoring methods reduce forgetting only at the cost of worse final performance on the new task.
- Forgetting is linked to localized changes in conductance on high-current edges, suggesting a physical reconfiguration of the network’s dominant transport pathways.
- Results across random-task setups and multiple graph-topology ensembles indicate that the strongest forgetting occurs when the second task reverses the output ordering from the first task, and that network topology shifts the forgetting–adaptation trade-off.
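The setup in the key points above can be sketched in a few dozen lines. The following is a hypothetical minimal demo, not the paper's code: a small linear resistor network whose node voltages are fixed by Kirchhoff's current law (a weighted graph-Laplacian solve), with edge conductances trained by gradient descent via the adjoint method. The graph, node roles, target "divider ratios" for the two conflicting tasks, and the quadratic anchoring strength are all illustrative assumptions.

```python
import numpy as np

# Illustrative 5-node network: node 0 is the driven input terminal,
# node 4 is ground, node 2 is where the output voltage is read.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
n_nodes = 5
boundary = [0, 4]      # voltages imposed externally
free = [1, 2, 3]       # voltages set by Kirchhoff's current law
out_node = 2

def laplacian(g):
    """Weighted graph Laplacian built from edge conductances g."""
    L = np.zeros((n_nodes, n_nodes))
    for ge, (i, j) in zip(g, edges):
        L[i, i] += ge; L[j, j] += ge
        L[i, j] -= ge; L[j, i] -= ge
    return L

def forward(g, v_in):
    """Solve for the equilibrium free-node voltages."""
    L = laplacian(g)
    u = np.zeros(n_nodes)
    u[0] = v_in                                   # u[4] stays at 0 V
    A = L[np.ix_(free, free)]
    b = -L[np.ix_(free, boundary)] @ u[boundary]
    u[free] = np.linalg.solve(A, b)
    return u

def loss_and_grad(g, v_in, target):
    """Squared output error and its gradient w.r.t. g via an adjoint solve."""
    u = forward(g, v_in)
    err = u[out_node] - target
    dJ_dv = np.zeros(len(free))
    dJ_dv[free.index(out_node)] = 2.0 * err
    lam = np.linalg.solve(laplacian(g)[np.ix_(free, free)], dJ_dv)
    mu = np.zeros(n_nodes)
    mu[free] = lam                                # adjoint is 0 at fixed nodes
    # Per-edge sensitivity: dJ/dg_e = -(u_i - u_j) * (mu_i - mu_j)
    grad = np.array([-(u[i] - u[j]) * (mu[i] - mu[j]) for (i, j) in edges])
    return err ** 2, grad

def train(g, target, steps=800, lr=0.5, anchor=None, strength=0.0):
    """Gradient descent on conductances, optionally anchored to a reference."""
    for _ in range(steps):
        _, grad = loss_and_grad(g, 1.0, target)
        if anchor is not None:                    # quadratic anchoring penalty
            grad = grad + strength * (g - anchor)
        g = np.clip(g - lr * grad, 1e-3, None)    # conductances stay positive
    return g

rng = np.random.default_rng(0)
g0 = rng.uniform(0.5, 1.5, size=len(edges))

g_A = train(g0.copy(), target=0.7)     # task A: learn divider ratio 0.7
g_B = train(g_A.copy(), target=0.3)    # conflicting task B overwrites A
g_anch = train(g_A.copy(), target=0.3, anchor=g_A, strength=0.5)

results = {}
for name, g in [("after A", g_A), ("after B", g_B), ("after B, anchored", g_anch)]:
    lA, _ = loss_and_grad(g, 1.0, 0.7)
    lB, _ = loss_and_grad(g, 1.0, 0.3)
    results[name] = (lA, lB)
    print(f"{name:>18s}: task-A loss {lA:.4f}, task-B loss {lB:.4f}")
```

Running the sketch shows the qualitative pattern the paper reports: task-A loss jumps after training on the conflicting task B, and anchoring keeps it lower only by leaving a larger residual loss on task B.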