Self-Awareness before Action: Mitigating Logical Inertia via Proactive Cognitive Awareness

arXiv cs.AI / 4/23/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that large language models can make unreliable decisions when they cannot tell whether their knowledge or reasoning state is complete, especially in fixed, hidden-structure puzzle settings.
  • It introduces SABA, a reasoning framework that adds explicit self-awareness of missing premises before producing the final decision.
  • SABA alternates between building a structured, verifiable base state via Information Fusion and resolving obstacles by converting missing or underspecified premises into queries for progressive state refinement.
  • Evaluations on the non-interactive Detective Puzzle benchmark show SABA delivers the best performance across all three difficulty splits, while also maintaining leading results on several public benchmarks.
  • The work suggests that proactive cognitive awareness—checking what is missing rather than committing to early hypotheses—can improve reasoning stability in non-interactive tasks.

Abstract

Large language models perform well on many reasoning tasks, yet they often lack awareness of whether their current knowledge or reasoning state is complete. In non-interactive puzzle settings, the narrative is fixed and the underlying structure is hidden; once a model forms an early hypothesis under incomplete premises, it can propagate that error throughout the reasoning process, leading to unstable conclusions. To address this issue, we propose SABA, a reasoning framework that explicitly introduces self-awareness of missing premises before making the final decision. SABA formulates reasoning as a recursive process that alternates between structured state construction and obstacle resolution: it first applies Information Fusion to consolidate the narrative into a verifiable base state, and then uses Query-driven Structured Reasoning to identify and resolve missing or underspecified premises by turning them into queries and progressively completing the reasoning state through hypothesis construction and state refinement. Across multiple evaluation metrics, SABA achieves the best performance on all three difficulty splits of the non-interactive Detective Puzzle benchmark, and it also maintains leading results on multiple public benchmarks.
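The alternation the abstract describes—consolidate the narrative into a base state, detect missing premises, resolve them as queries, refine, repeat—can be sketched as a toy loop. Everything below (function names, the dict-based state, the inference rule) is an illustrative assumption for exposition, not the paper's actual implementation.

```python
# Toy sketch of a SABA-style loop, per the abstract's description.
# All names and data structures here are hypothetical, not from the paper.

def information_fusion(narrative):
    """Consolidate narrative facts into a verifiable base state (toy: a dict)."""
    return dict(narrative)

def find_missing_premises(state, required):
    """Self-awareness step: which required premises are absent or underspecified?"""
    return [p for p in required if state.get(p) is None]

def resolve_query(premise, state):
    """Toy obstacle resolution: try to derive a missing premise from known facts."""
    # Illustrative rule: commit to a culprit only once motive and
    # opportunity are both established in the state.
    if premise == "culprit" and state.get("motive") and state.get("opportunity"):
        return state["opportunity"]
    return None  # premise stays open; no early hypothesis is forced

def saba_loop(narrative, required, max_rounds=5):
    """Alternate state construction and obstacle resolution until complete."""
    state = information_fusion(narrative)
    for _ in range(max_rounds):
        missing = find_missing_premises(state, required)
        if not missing:
            break  # state is complete: safe to make the final decision
        for premise in missing:
            state[premise] = resolve_query(premise, state)  # hypothesis + refinement
    return state

narrative = [("motive", "inheritance"), ("opportunity", "the butler"), ("culprit", None)]
final_state = saba_loop(narrative, required=["motive", "opportunity", "culprit"])
```

The point of the sketch is the ordering: the decision-relevant premise (`culprit`) is only filled after the completeness check confirms its supporting premises exist, rather than being guessed up front and propagated.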