I have created a biologically based AI model

Reddit r/artificial / 3/28/2026


Key Points

  • The author describes NIMCP, a biologically inspired AI “artificial brain” in C that trains six neural network types simultaneously (spiking, liquid, convolutional, Fourier, Hamiltonian, adaptive) using learnable bridges and gradient flow between them.
  • The spiking network reportedly develops mammalian-like firing rates (~26 Hz) with 67% sparsity emerging from cross-network training pressure rather than explicit regularization.
  • The project claims “structural” safety: an ethics module implemented as a function call in the inference code path that is not represented as a tunable weight and therefore cannot be fine-tuned away or jailbroken (verifiable via the source).
  • Learning is presented as curiosity-driven rather than reward-driven, using prediction error → dopamine → STDP gating with no explicit reward function.
  • Training follows a four-stage developmental curriculum (sensory → naming → feedback → reasoning), and the system can reportedly be watched training live on the project website. The codebase spans ~2,600 source files with multiple language bindings, runs on a single RTX 4000 (20 GB VRAM), and is accompanied by eight technical papers and code released on GitHub.

I've spent the last year building NIMCP — a biologically-inspired artificial brain in C that trains six different neural network types simultaneously (spiking, liquid, convolutional, Fourier, Hamiltonian, adaptive) with gradient flow between them through learnable bridges.

Some things that might be interesting to this crowd:

- The SNN developed 26 Hz firing rates with 67% sparsity — within mammalian cortical range — without any regularization targeting those values; they emerged from cross-network training pressure.

- Safety is structural, not behavioral. The ethics module is a function call in the inference code path, not a learned weight. It can't be fine-tuned away or jailbroken. The governance rules can only get stricter. You can verify this by reading the source.

- The brain learns through curiosity: prediction error → dopamine → STDP gating. No reward function.

- Training follows a 4-stage developmental curriculum (sensory → naming → feedback → reasoning). The training is currently in Stage 2. You can watch it train live on the website — metrics update every 60 seconds.

- 2,600 source files, 240 Python API methods, 8 language bindings. The system runs on a single RTX 4000 (20 GB VRAM).

Eight technical papers on the site cover the math, training methodology, safety architecture, and emergent dynamics.

Code: https://github.com/redmage123/nimcp

I am happy to answer questions about the architecture, training dynamics, or why I think growing intelligence through developmental stages might work differently from scaling transformers.

submitted by /u/redmage123