TsetlinWiSARD: On-Chip Training of Weightless Neural Networks using Tsetlin Automata on FPGAs
arXiv cs.LG / 2026/3/26
Key Points
- The paper introduces TsetlinWiSARD, an on-chip training method for weightless neural networks (WNNs) that uses Tsetlin Automata for iterative, probabilistic feedback-driven learning.
- It addresses a key limitation of prior WiSARD-style WNNs by mitigating overfitting from one-shot memorization-based training and reducing the need for tedious post-training tuning.
- The authors present an FPGA-based training architecture designed for efficient learning with continuous binary feedback, targeting edge requirements like low latency and improved privacy/security.
- Reported results show over 1000x faster training than traditional WiSARD, along with 22% lower FPGA resource usage, 93.3% lower latency, and 64.2% lower power compared with other FPGA-based ML training accelerators.
- Overall, the work positions WNN training on FPGAs as a hardware-efficient alternative to multiply-accumulate-heavy deep learning approaches for edge ML scenarios.
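
To make the "iterative, probabilistic feedback-driven learning" concrete: a Tsetlin Automaton is a small finite-state machine that chooses between two actions (e.g., include or exclude a memory entry) and is nudged by reward/penalty feedback. The sketch below is a minimal, generic illustration of that mechanism, not the paper's actual architecture; the class name, state count, and action encoding are all assumptions for illustration.

```python
import random


class TsetlinAutomaton:
    """Minimal two-action Tsetlin Automaton with 2*n states.

    States 1..n select action 0 ("exclude"); states n+1..2n select
    action 1 ("include"). Reward pushes the state deeper into the
    current action's half; penalty pushes it toward the other half.
    """

    def __init__(self, n_states_per_action=100):
        self.n = n_states_per_action
        # Start at the boundary, on a random side.
        self.state = random.choice([self.n, self.n + 1])

    def action(self):
        return 1 if self.state > self.n else 0

    def reward(self):
        # Reinforce the current action (saturates at the extremes).
        if self.state > self.n:
            self.state = min(self.state + 1, 2 * self.n)
        else:
            self.state = max(self.state - 1, 1)

    def penalize(self):
        # Weaken the current action by moving toward the boundary,
        # eventually flipping to the opposite action.
        if self.state > self.n:
            self.state -= 1
        else:
            self.state += 1
```

Because each automaton needs only a saturating counter and a comparison, an array of them maps naturally onto FPGA fabric without multiply-accumulate units, which is the hardware-efficiency angle the paper emphasizes.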
