A Synthesizable RTL Implementation of Predictive Coding Networks
arXiv cs.AI · March 20, 2026
Key Points
- The paper presents a digitally synthesizable RTL architecture that implements discrete-time predictive coding learning dynamics directly in hardware, addressing the difficulties that backpropagation poses for online, distributed learning.
- Each neural core maintains its own activity, prediction error, and synaptic weights and communicates only with adjacent layers via hardwired connections.
- Both supervised learning and inference are realized through a per-neuron clamping primitive that enforces boundary conditions without altering the local update schedule.
- The design is deterministic and synthesizable, built around a sequential MAC datapath and a fixed finite-state schedule, avoiding task-specific instruction sequences.
- The contribution is a complete hardware substrate for predictive coding rather than a new learning rule: the dynamics are realized in hardware and configured solely through connectivity, parameters, and boundary conditions.