Entropy and Attention Dynamics in Small Language Models: A Trace-Level Structural Analysis on the TruthfulQA Benchmark

arXiv cs.AI / 4/7/2026


Key Points

  • The paper argues that evaluating small language models (SLMs) only by final accuracy or hallucination rates misses how internal behaviors drive confident mispredictions and unstable outputs.
  • It introduces a trace-level structural analysis on the TruthfulQA benchmark, measuring token output entropy, attention entropy, head dispersion, and hidden-state representation dynamics.
  • Across four 1B–1.7B-parameter models, the study finds three distinct entropy-pattern categories: deterministic (entropy decreases), exploratory (entropy increases), and balanced (moderate/stable entropy).
  • The authors report that each entropy group also shows different hidden-state movement and attention dispersion patterns, linking “truthfulness” to structured entropy/attention dynamics rather than output metrics alone.
  • The findings suggest monitoring and optimizing internal uncertainty patterns could improve the reliability and hallucination-awareness of application-specific edge SLMs.
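The three-way grouping above can be illustrated with a minimal NumPy sketch that fits a line to a model's per-step entropy trace and buckets it by slope sign. The function name and the slope threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

def classify_entropy_trend(entropies, slope_tol=0.01):
    """Bucket a per-step entropy trace by the sign of its fitted slope.

    entropies: 1-D array of output entropy at each decoding step.
    slope_tol: hypothetical tolerance separating "stable" from trending.
    """
    steps = np.arange(len(entropies))
    slope = np.polyfit(steps, entropies, 1)[0]  # least-squares linear fit
    if slope < -slope_tol:
        return "deterministic"   # entropy decreases over decoding
    if slope > slope_tol:
        return "exploratory"     # entropy increases over decoding
    return "balanced"            # moderate / stable entropy
```

For example, a trace that falls from 2.0 to 0.5 nats over 20 steps would be labeled "deterministic", while a flat trace would be labeled "balanced".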

Abstract

Small language models (SLMs) are increasingly deployed on edge devices and in other resource-constrained settings. However, these models make confident mispredictions and produce unstable outputs, making them risky for factual and decision-critical tasks. Current evaluation methodology relies on final accuracy or hallucination rates without explaining how internal model behavior affects outputs. In particular, how entropy evolves during decoding, how attention is distributed across layers, and how hidden representations contribute to uncertainty, logical inconsistencies, and misinformation propagation are often overlooked. This study therefore introduces a trace-level analysis of entropy and attention dynamics in SLMs evaluated on the TruthfulQA dataset. Four models with 1B-1.7B parameters were examined via token-level output entropy, attention entropy, head dispersion, and hidden-state representations. The results group the models into three classes by entropy pattern: deterministic models (DeepSeek-1.5B and LLaMA-1B), whose output entropy decreases over time; exploratory models (Gemma-1B), whose entropy increases; and balanced models (Qwen-1.7B), whose entropy remains moderate and stable. Each group also exhibits distinctly different hidden-state movement and attention dispersion patterns. The analysis demonstrates that truthfulness in SLMs emerges from structured entropy and attention dynamics. Monitoring and optimizing these internal uncertainty patterns can guide the design of more reliable, hallucination-aware, and application-specific edge SLMs.
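The trace-level metrics named in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under assumed tensor shapes (logits as steps × vocab, attention as layers × heads × query × key); the function names and the use of standard deviation across heads as a "dispersion" proxy are assumptions, not the paper's exact definitions.

```python
import numpy as np

def shannon_entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy (in nats) of probability distributions along `axis`."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

def token_output_entropy(logits):
    """Per-step output entropy: softmax over the vocab at each decoding step.

    logits: array of shape (steps, vocab) -> returns shape (steps,).
    """
    z = logits - logits.max(axis=-1, keepdims=True)  # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return shannon_entropy(probs)

def attention_stats(attn):
    """Attention entropy and a head-dispersion proxy per layer.

    attn: attention weights of shape (layers, heads, query, key),
    each row summing to 1 over the key axis.
    """
    h = shannon_entropy(attn, axis=-1)       # (layers, heads, query)
    mean_entropy = h.mean(axis=(1, 2))       # per-layer mean attention entropy
    dispersion = h.mean(axis=2).std(axis=1)  # spread of entropy across heads
    return mean_entropy, dispersion
```

As a sanity check, uniform logits over a vocabulary of size V yield entropy log V, and identical uniform attention heads yield zero dispersion.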