The Cognitive Penalty: Ablating System 1 and System 2 Reasoning in Edge-Native SLMs for Decentralized Consensus
arXiv cs.AI / 4/21/2026
Key Points
- The paper studies how “System 1” (direct autoregressive generation) versus “System 2” (inference-time reasoning) affects robustness and consensus in edge-native small language models used for DAO proposal vetting.
- It introduces Sentinel-Bench, an 840-inference evaluation that performs intra-model ablations on Qwen-3.5-9B with frozen weights, varying only the latent reasoning mode, against an adversarial Optimism DAO dataset (a minimal harness sketch follows this list).
- Results show a compute–accuracy inversion: the System 1 baseline achieved 100% adversarial robustness and juridical consistency, reaching state finality in under 13 seconds, while System 2 reasoning caused catastrophic instability.
- The instability is attributed to a 26.7% reasoning non-convergence (“cognitive collapse”) rate, which reduced trial-to-trial consensus stability to 72.6% and added a 17× latency overhead (see the metric sketch after the list).
- The study also observes rare (1.5%) “reasoning-induced sycophancy,” in which the model produces very long internal monologues (about 25,750 characters) to rationalize failures, creating additional governance vulnerabilities; the heavier compute such reasoning demands also risks hardware centralization.
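The paper's evaluation harness is not public; the sketch below only illustrates the shape of such an intra-model ablation. Here `query_model` is a hypothetical stand-in for a call to a frozen local Qwen-3.5-9B instance, with the reasoning mode as the single varied parameter.

```python
import time
from collections import Counter

# Hypothetical interface (not from the paper): stands in for a call to a
# frozen local model; enable_reasoning toggles the "System 2" mode.
def query_model(proposal: str, enable_reasoning: bool) -> str:
    # Placeholder verdict; a real harness would hit the inference endpoint
    # and parse the model's approve/reject decision from its output.
    return "reject"

def run_ablation(proposal: str, trials: int = 10) -> dict:
    """Run the same frozen model on one proposal in both reasoning modes,
    collecting verdict distributions and mean latency per mode."""
    results = {}
    for mode, reasoning in (("system1", False), ("system2", True)):
        verdicts, latencies = [], []
        for _ in range(trials):
            t0 = time.perf_counter()
            verdicts.append(query_model(proposal, enable_reasoning=reasoning))
            latencies.append(time.perf_counter() - t0)
        results[mode] = {
            "verdicts": Counter(verdicts),
            "mean_latency_s": sum(latencies) / trials,
        }
    return results
```

Because the weights are frozen and only the reasoning mode changes, any divergence between the two result sets can be credited to inference-time reasoning rather than to model differences.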
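The paper does not spell out how trial-to-trial consensus stability is computed; one common convention, assumed here, is pairwise agreement across repeated trials of the same proposal, with non-convergent trials counted as disagreements.

```python
from itertools import combinations

def nonconvergence_rate(verdicts: list[str | None]) -> float:
    """Fraction of trials where reasoning never converged to a verdict.
    A trial that ends without a parseable verdict is recorded as None."""
    return sum(v is None for v in verdicts) / len(verdicts)

def pairwise_stability(verdicts: list[str | None]) -> float:
    """Trial-to-trial consensus stability as the fraction of trial pairs
    that produced the same verdict (non-convergent trials never match)."""
    pairs = list(combinations(verdicts, 2))
    agree = sum(a == b and a is not None for a, b in pairs)
    return agree / len(pairs)

# Illustrative only: 8 of 30 trials fail to converge (~26.7%), mirroring
# the paper's reported non-convergence rate.
trials = ["reject"] * 22 + [None] * 8
print(nonconvergence_rate(trials))  # 0.266...
print(pairwise_stability(trials))   # ~0.531 for this synthetic pattern
```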