Resource Utilization of Differentiable Logic Gate Networks Deployed on FPGAs

arXiv cs.AI / 5/7/2026


Key Points

  • The paper evaluates how the depth and width of differentiable Logic Gate Networks (LGNs) synthesized onto FPGAs affect power, resource utilization, inference speed, and model accuracy.
  • It finds that the LGN’s final layer is especially critical because it dictates the logic size of the summation operations; keeping it narrow cuts timing and resource usage by roughly 28% (see the sketch after this list).
  • The study shows that, under timing and routing constraints, deeper and wider LGNs are feasible on FPGAs when the final layer is kept narrow.
  • It provides practical tradeoff guidance to help ML engineers choose baseline LGN architectures for a target FPGA with a fixed number of LUTs.
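
The final layer's outsized role is easiest to see in the standard LGN readout, where the final layer's output bits are partitioned into one group per class and counted (a "GroupSum"): the per-class popcount adder trees grow with the final layer's width. The Python sketch below is an illustrative stand-in for the synthesized circuit, not the paper's implementation; the gate choices, wiring, and layer sizes are arbitrary placeholders.

```python
import random

# A small subset of the 16 possible 2-input Boolean gates an LGN can learn;
# the selection and wiring here are random placeholders, not learned values.
GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
}

def make_layer(in_width, out_width, rng):
    """Fixed random wiring: each output gate reads two of the previous layer's bits."""
    return [(rng.choice(list(GATES)), rng.randrange(in_width), rng.randrange(in_width))
            for _ in range(out_width)]

def run_layer(layer, bits):
    return [GATES[gate](bits[i], bits[j]) for gate, i, j in layer]

def group_sum(bits, num_classes):
    """GroupSum readout: count the 1s in each class's slice of the final layer.
    The slice width sets the size of each per-class popcount adder tree,
    which is why a narrow final layer shrinks the summation logic."""
    group = len(bits) // num_classes
    return [sum(bits[c * group:(c + 1) * group]) for c in range(num_classes)]

rng = random.Random(0)
hidden = make_layer(64, 64, rng)
final = make_layer(64, 20, rng)   # narrow final layer: only 2 bits per class

x = [rng.randrange(2) for _ in range(64)]   # binarized input
scores = group_sum(run_layer(final, run_layer(hidden, x)), num_classes=10)
print(scores)                               # 10 small class scores; argmax is the prediction
```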

Abstract

On-edge machine learning (ML) often strives to maximize the intelligence of small models while miniaturizing the circuit size and power needed to perform inference. Meeting these needs, differentiable Logic Gate Networks (LGNs) have demonstrated nanosecond-scale prediction speeds while requiring fewer resources than traditional binary neural networks. Despite these benefits, the trade-offs between LGN parameters and the resulting hardware synthesis characteristics are not well characterized. This paper therefore studies the tradeoffs between power, resource utilization, inference speed, and model accuracy when varying the depth and width of LGNs synthesized for Field Programmable Gate Arrays (FPGAs). Results reveal that the final layer of an LGN is critical to minimizing timing and resource usage (a 28% decrease), as this layer dictates the logic size of the summing operations. Subject to timing and routing constraints, deeper and wider LGNs can be synthesized for FPGAs when the final layer is narrow. Further tradeoffs are presented to help ML engineers select baseline LGN architectures for FPGAs with a set number of Look-Up Tables (LUTs).
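
A back-of-envelope LUT estimate makes the architecture-selection tradeoff concrete. The estimator below is our own assumption, not the paper's cost model: it pessimistically charges one LUT per 2-input gate and roughly one LUT per final-layer bit for the per-class popcount trees.

```python
def estimate_luts(depth, width, final_width):
    """Crude upper bound, assuming one LUT per 2-input gate plus roughly one
    LUT per final-layer bit for the per-class popcount adder trees. Real
    synthesis packs several gates into each 6-input LUT, so treat this as a ceiling."""
    gate_luts = (depth - 1) * width + final_width
    summation_luts = final_width
    return gate_luts + summation_luts

# Under this assumed model, narrowing only the final layer frees LUTs
# that can instead go toward a deeper or wider network:
for final_width in (1024, 128):
    print(final_width, estimate_luts(depth=8, width=1024, final_width=final_width))
```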