Training Language Models via Neural Cellular Automata
arXiv cs.AI / 3/12/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The authors propose using neural cellular automata (NCA) to generate synthetic, non-linguistic data for pre-training LLMs, enabling a synthetic-then-natural-language pre-training approach (see the sketch after this list).
- NCA data exhibit rich spatiotemporal structure similar to natural language while remaining controllable and cheap to produce at scale.
- Pre-training on just 164M NCA tokens improves downstream language modeling by up to 6% and accelerates convergence by up to 1.6x, even outperforming pre-training on 1.6B natural-language tokens in some settings.
- The gains transfer to reasoning benchmarks (GSM8K, HumanEval, BigBench-Lite), with findings that attention layers are highly transferable and that optimal NCA complexity varies by domain, enabling targeted synthetic distributions.
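The key points summarize the mechanism only at a high level, so here is a minimal, purely illustrative sketch of what serializing NCA rollouts into pre-training tokens could look like. The paper's actual automaton, grid shape, state vocabulary, and tokenization scheme are not specified here; every constant and function name below (NUM_STATES, GRID_SIZE, step, sample_sequence) is an assumption for illustration, not the authors' implementation.

```python
# Hypothetical sketch: roll out a toy 1-D neural cellular automaton and
# flatten its trajectory into an integer token stream for pre-training.
# All sizes and the rule parameterization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

NUM_STATES = 16   # assumed size of the discrete cell-state vocabulary
GRID_SIZE = 32    # assumed width of the 1-D grid
STEPS = 8         # assumed rollout length per sample

# A tiny random "neural" update rule: each cell's next state is scored by a
# random linear layer over the one-hot states of its 3-cell neighborhood.
W = rng.normal(size=(3 * NUM_STATES, NUM_STATES))

def one_hot(states: np.ndarray) -> np.ndarray:
    return np.eye(NUM_STATES)[states]

def step(states: np.ndarray) -> np.ndarray:
    left = np.roll(states, 1)
    right = np.roll(states, -1)
    feats = np.concatenate(
        [one_hot(left), one_hot(states), one_hot(right)], axis=-1
    )                              # (GRID_SIZE, 3 * NUM_STATES)
    logits = feats @ W             # (GRID_SIZE, NUM_STATES)
    return logits.argmax(axis=-1)  # deterministic update for simplicity

def sample_sequence() -> list[int]:
    """Roll out the automaton and serialize the trajectory step by step."""
    states = rng.integers(0, NUM_STATES, size=GRID_SIZE)
    tokens = []
    for _ in range(STEPS):
        tokens.extend(states.tolist())  # append this time step's row
        states = step(states)
    return tokens

if __name__ == "__main__":
    seq = sample_sequence()
    print(len(seq), seq[:16])  # 256 synthetic tokens from one rollout
```

In this toy setup one rollout yields GRID_SIZE × STEPS = 256 integer tokens; a corpus in the spirit of the paper would mix many such rollouts, drawn from automata of varying rule complexity, into the data seen before (or alongside) natural-language text.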
Related Articles
I Was Wrong About AI Coding Assistants. Here's What Changed My Mind (and What I Built About It).
Dev.to

Interesting loop
Reddit r/LocalLLaMA
Qwen3.5-122B-A10B Uncensored (Aggressive) — GGUF Release + new K_P Quants
Reddit r/LocalLLaMA
A supervisor or "manager" AI agent is the wrong way to control AI
Reddit r/artificial
FeatherOps: Fast fp8 matmul on RDNA3 without native fp8
Reddit r/LocalLLaMA