Loom: A Scalable Analytical Neural Computer Architecture

arXiv cs.LG / 4/13/2026


Key Points

  • Loom is a neural computer architecture that runs programs compiled from C using a looped transformer with analytically derived, program-independent weights.
  • The design uses a 22-opcode instruction set implemented across 8 transformer layers, where each forward pass executes one instruction and iterates until the program counter reaches zero.
  • Loom maintains the entire machine state in a single fixed-size tensor, ensuring each execution step has fixed computational cost regardless of program length or execution history.
  • The default configuration (d=155, n=1024) has 4.7M parameters and supports 928 instruction slots, while a compact setup (d=146, n=512) can solve a 9×9 Sudoku in 284 instructions.
  • The authors release Loom source code publicly, aiming to enable replication and further experimentation with this scalable analytical neural computing approach.
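The looped-execution idea in the points above can be sketched in a few lines of Python. Everything here is an illustrative assumption, not the paper's implementation: the state layout (program counter stored at `X[0, 0]`) and the dummy `transformer_step` are stand-ins; only the tensor shape and the termination rule come from the summary.

```python
import numpy as np

D, N = 155, 1024          # default configuration reported for Loom

def transformer_step(X):
    """Stand-in for the 8-layer fixed-weight transformer.

    The real model decodes and executes one of 22 opcodes per forward
    pass; here we only decrement a scalar 'program counter' stored at
    X[0, 0] so the loop terminates.
    """
    X = X.copy()
    X[0, 0] -= 1.0
    return X

def run(X, max_steps=10_000):
    """Apply the same fixed-weight step until the program counter is zero.

    Per-step cost is constant for fixed D and N, independent of how long
    the program is or how many steps have already been executed.
    """
    steps = 0
    while X[0, 0] != 0 and steps < max_steps:
        X = transformer_step(X)
        steps += 1
    return X, steps

X0 = np.zeros((D, N))     # entire machine state: one fixed-size tensor
X0[0, 0] = 5.0            # pretend the compiled program runs 5 instructions
_, n_steps = run(X0)
print(n_steps)            # -> 5
```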

Abstract

We present Loom, a computer architecture that executes programs compiled from C inside a looped transformer whose weights are derived analytically. The architecture implements a 22-opcode instruction set in 8 transformer layers. Each forward pass executes one instruction; the model is applied iteratively until the program counter reaches zero. The full machine state resides in a single tensor $X \in \mathbb{R}^{d \times n}$ of fixed size, and every step has fixed cost for fixed $d$ and $n$, independent of program length or execution history. The default configuration uses $d = 155$ and $n = 1024$, yielding 4.7 million parameters and 928 instruction slots. A compact configuration at $d = 146$ and $n = 512$ suffices for a $9 \times 9$ Sudoku solver (284 instructions). The weights are program-independent: programs live in the state tensor, and the same fixed-weight model executes any compiled program. We make Loom source code publicly available at https://github.com/mkturkcan/Loom.
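The abstract's claim that programs live in the state tensor while the weights stay fixed can be illustrated with a toy register machine. The 3-opcode ISA, the flat state layout, and the 1-based program counter below are invented for this sketch; Loom's actual 22-opcode encoding and transformer layers differ.

```python
import numpy as np

N = 16                        # instruction slots in this toy machine
HALT, INC, DEC = 0, 1, 2      # toy 3-opcode ISA (illustrative only)
# Assumed state layout: state[0] = program counter (1-based),
# state[1] = accumulator, state[2:] = the program itself.

def step(state):
    """One fixed 'forward pass': fetch the current opcode, execute it,
    and advance the program counter. HALT drives the counter to zero."""
    pc = int(state[0])
    op = int(state[1 + pc])   # fetch: program starts at state[2]
    if op == INC:
        state[1] += 1
    elif op == DEC:
        state[1] -= 1
    state[0] = 0 if op == HALT else pc + 1
    return state

def run(program):
    """Load a program into the state array and iterate the SAME fixed
    step function until the program counter reaches zero."""
    state = np.zeros(2 + N)
    state[2:2 + len(program)] = program
    state[0] = 1              # point the counter at the first instruction
    while state[0] != 0:
        state = step(state)
    return state[1]           # accumulator holds the result

print(run([INC, INC, INC, HALT]))   # -> 3.0
print(run([INC, DEC, HALT]))        # -> 0.0
```

The point of the sketch: `step` never changes between programs, just as Loom's analytically derived weights are shared by every compiled program.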