Structural Rigidity and the 57-Token Predictive Window: A Physical Framework for Inference-Layer Governability in Large Language Models
arXiv cs.AI / 4/7/2026
Key Points
- The paper argues that common AI-safety methods based on behavioral monitoring and post-training alignment may fail to surface detectable pre-commitment signals in most of the instruction-tuned LLMs tested.
- It proposes an energy-based governance framework that links transformer inference dynamics to constraint-satisfaction views of neural computation.
- Using a “trajectory tension” metric (rho = ||a|| / ||v||), the authors identify a model- and setting-specific 57-token predictive window in Phi-3-mini-4k-instruct under greedy decoding on arithmetic constraint probes (a minimal sketch of the metric follows this list).
- They introduce a five-regime taxonomy of inference behavior (Authority Band, Late Signal, Inverted, Flat, Scaffold-Selective) and use energy asymmetry to quantify “structural rigidity” across regimes and models.
- The study finds that hallucination shows no predictive signal across 72 test conditions, suggesting that hallucination and rule violation are distinct failure modes requiring different detection approaches (internal-geometry monitoring vs. external verification).
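
The summary gives only the ratio rho = ||a|| / ||v||, so the sketch below assumes that v and a are the first and second finite differences of the final-layer hidden-state trajectory — one plausible reading, not the paper's confirmed definition. The model name matches the paper; the prompt, layer choice, and function names are illustrative.

```python
# Hedged sketch of a per-token "trajectory tension" probe.
# Assumption (not stated in the summary): v_t and a_t are the first and
# second finite differences of the hidden-state trajectory h_1..h_T.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def trajectory_tension(hidden: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """rho_t = ||a_t|| / ||v_t|| for a [T, d] hidden-state trajectory.

    Returns a [T-2] tensor; rho is undefined for the first two tokens.
    """
    v = hidden[1:] - hidden[:-1]          # velocity: first difference
    a = v[1:] - v[:-1]                    # acceleration: second difference
    return a.norm(dim=-1) / (v[1:].norm(dim=-1) + eps)

model_id = "microsoft/Phi-3-mini-4k-instruct"   # model named in the paper
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # older transformers versions may need trust_remote_code=True

prompt = "Compute 17 * 23 and show each step."  # illustrative arithmetic constraint probe
inputs = tok(prompt, return_tensors="pt")
gen_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)  # greedy decoding

with torch.no_grad():
    out = model(gen_ids, output_hidden_states=True)
final_layer = out.hidden_states[-1][0]          # [T, d] last-layer states
rho = trajectory_tension(final_layer)
print(rho[-57:])                                # inspect the trailing 57-token window
```

Under this reading, a sustained shift in rho within the final 57 tokens before a constrained answer would be the kind of pre-commitment signal the paper describes; the finite-difference definition, layer choice, and window placement are all assumptions to be replaced with the paper's actual formulation.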
Related Articles
- Black Hat Asia (AI Business)
- v0.20.5 (Ollama Releases)
- Inside Anthropic's Project Glasswing: The AI Model That Found Zero-Days in Every Major OS (Dev.to)
- Gemma 4 26B fabricated an entire code audit. I have the forensic evidence from the database. (Reddit r/LocalLLaMA)
- SoloEngine: Low-Code Agentic AI Development Platform with Native Support for Multi-Agent Collaboration, MCP, and Skill System (Dev.to)