The Kerimov-Alekberli Model: An Information-Geometric Framework for Real-Time System Stability

arXiv cs.AI / 4/28/2026


Key Points

  • The paper proposes the Kerimov-Alekberli model, an information-geometric framework that recasts AI safety as a physical stability problem for the alignment of autonomous systems.
  • It establishes a formal isomorphism between non-equilibrium thermodynamics and stochastic control, treating systemic anomalies as deviations on a Riemannian manifold, measured by the Kullback–Leibler divergence against a dynamic threshold derived from the Fisher Information Metric.
  • Grounding the approach in the Landauer Principle, the study argues that adversarial perturbations can be interpreted as performing measurable physical work by increasing a system’s informational entropy.
  • Validation on the NSL-KDD dataset and unmanned aerial vehicle trajectory simulations suggests the model can detect anomalies in real time via a first-passage-time (FPT) trigger, achieving high accuracy and low false positive rates on the benchmarks.
  • Overall, the authors position the framework as a shift from heuristic, rule-based ethical checks toward a physics- and entropy-quantified stability paradigm for AI safety.
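The detection rule summarized in the points above — flag a state whose Kullback–Leibler divergence from a baseline distribution exceeds a dynamically derived threshold — can be sketched in a few lines. This is an illustrative approximation, not the paper's implementation: the `dynamic_threshold` heuristic (a chi-square-style bound that tightens with sample size, which is one common way to tie a KL threshold to the local Fisher-metric scale) and all distributions below are hypothetical.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) in nats for discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def dynamic_threshold(n_samples, dof, z=3.0):
    # Under the null, 2 * n * D_KL is asymptotically chi-square with
    # `dof` degrees of freedom (a consequence of the Fisher-metric
    # expansion of KL), so a mean-plus-z-sigma quantile scaled by
    # 1/(2n) gives a sample-size-aware threshold.
    return (dof + z * np.sqrt(2.0 * dof)) / (2.0 * n_samples)

baseline = np.array([0.70, 0.20, 0.10])  # hypothetical nominal state
nominal  = np.array([0.68, 0.21, 0.11])  # small benign drift
attack   = np.array([0.30, 0.30, 0.40])  # large adversarial deviation

tau = dynamic_threshold(n_samples=500, dof=2)
print(kl_divergence(nominal, baseline) > tau)  # False: below threshold
print(kl_divergence(attack, baseline) > tau)   # True: anomaly flagged
```

A real system would track this divergence as a stochastic process and fire on its first passage above the threshold, rather than testing isolated snapshots.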

Abstract

This study introduces the Kerimov-Alekberli model, a novel information-geometric framework that redefines AI safety by formally linking non-equilibrium thermodynamics to stochastic control for the ethical alignment of autonomous systems. By establishing a formal isomorphism between these two domains, we define systemic anomalies as deviations on a Riemannian manifold. The model uses the Kullback-Leibler divergence as its primary metric, governed by a dynamic threshold derived from the Fisher Information Metric. We further ground this framework in the Landauer Principle, proving that adversarial perturbations perform measurable physical work by increasing the system's informational entropy. Validation on the NSL-KDD dataset and unmanned aerial vehicle trajectory simulations demonstrated that the model achieves effective real-time detection via a first-passage-time (FPT) trigger, with high accuracy and a low false positive rate (FPR) on benchmark datasets. This study provides a rigorous physical foundation for AI safety, moving from heuristic, rule-based ethical frameworks to a thermodynamics-based stability paradigm that grounds ethical violations in quantifiable physical work and entropic information.
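The Landauer argument in the abstract — that an adversarial perturbation which raises the system's informational entropy implies a corresponding minimum of physical work — can be made concrete with a minimal sketch. The distributions, temperature, and function names here are hypothetical illustrations; only the bound itself, W ≥ k_B · T · ln 2 · ΔH with ΔH in bits, comes from the Landauer Principle.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def shannon_entropy_bits(p, eps=1e-12):
    """Shannon entropy H(p) in bits for a discrete distribution."""
    p = np.asarray(p, dtype=float) + eps
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))

def landauer_work_bound(p_before, p_after, temperature=300.0):
    # Minimum physical work (joules) associated with the entropy
    # increase: W >= k_B * T * ln(2) * dH, clipped at zero since the
    # bound only constrains entropy *increases*.
    d_h = shannon_entropy_bits(p_after) - shannon_entropy_bits(p_before)
    return K_B * temperature * np.log(2.0) * max(d_h, 0.0)

clean    = np.array([0.90, 0.05, 0.05])  # low-entropy nominal state
attacked = np.array([0.40, 0.30, 0.30])  # perturbed, higher-entropy state

print(f"{landauer_work_bound(clean, attacked):.2e} J")  # ≈ 2.88e-21 J
```

The number is tiny but nonzero, which is the point of the argument: the entropy injected by an attack has a measurable physical cost, so "ethical violations" can be expressed in the same units as thermodynamic work.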