Probabilistic Verification of Neural Networks via Efficient Probabilistic Hull Generation

arXiv cs.AI / 4/25/2026


Key Points

  • The paper addresses probabilistic verification of neural networks by estimating the probability of satisfying safety constraints when inputs follow a probability distribution.
  • It introduces a framework that computes a guaranteed safe-probability range by efficiently generating safe and unsafe probabilistic “hulls.”
  • The method uses three key components: state-space subdivision with regression trees, boundary-aware sampling to locate the safety boundary, and iterative refinement with probabilistic prioritization.
  • Experiments on benchmarks such as ACAS Xu and a rocket lander controller show clear accuracy and efficiency improvements over prior work.
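To make the interplay of these components concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of the core idea: subdivide the input space into cells, accumulate the probability mass of provably safe cells into a lower bound and of provably unsafe cells into an upper bound, and always refine the undecided cell carrying the most probability mass, a simple stand-in for probabilistic prioritization. A monotone one-dimensional toy function replaces the neural network so that each cell can be classified exactly from its endpoints:

```python
import heapq

# Toy stand-in for the network: monotone increasing on [0, 1], so a
# cell [a, b] maps exactly onto [f(a), f(b)] and can be classified
# from its endpoints alone.
def f(x):
    return x * x

SAFE_THRESHOLD = 0.5  # safety constraint: output f(x) <= 0.5

def safe_probability_range(lo=0.0, hi=1.0, budget=40):
    """Guaranteed bounds (p_low, p_high) on P(f(X) is safe) for
    X ~ Uniform(lo, hi): repeatedly pop the undecided cell with the
    largest probability mass, classify it, and split it if it
    straddles the safety boundary."""
    p_safe = 0.0
    p_unsafe = 0.0
    heap = [(-(hi - lo), lo, hi)]  # max-heap on cell probability mass
    for _ in range(budget):
        if not heap:
            break
        neg_mass, a, b = heapq.heappop(heap)
        mass = -neg_mass
        fa, fb = f(a), f(b)
        if max(fa, fb) <= SAFE_THRESHOLD:    # entire cell is safe
            p_safe += mass
        elif min(fa, fb) > SAFE_THRESHOLD:   # entire cell is unsafe
            p_unsafe += mass
        else:                                # mixed cell: split and requeue
            m = 0.5 * (a + b)
            heapq.heappush(heap, (-(m - a), a, m))
            heapq.heappush(heap, (-(b - m), m, b))
    # Mass still undecided widens the gap between the two bounds.
    return p_safe, 1.0 - p_unsafe

p_low, p_high = safe_probability_range()
# The interval [p_low, p_high] brackets the true safe probability sqrt(0.5).
```

The paper's framework generalizes this picture to high-dimensional inputs, with regression trees (rather than bisection) supplying the subdivision that yields the safe and unsafe probabilistic hulls.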

Abstract

Probabilistic verification of a neural network asks for the probability that the network's output satisfies given safety constraints when the input follows a probability distribution. Answering this question is important when the input is subject to disturbances, which are often modeled as probabilistic variables. In this paper, we propose a novel probabilistic verification framework for neural networks that computes a guaranteed range for the safe probability by efficiently finding safe and unsafe probabilistic hulls. Our approach consists of three main innovations: (1) a state-space subdivision strategy that uses regression trees to produce probabilistic hulls; (2) a boundary-aware sampling method that identifies the safety boundary in the input space and supplies the samples used to build the regression trees; and (3) iterative refinement with probabilistic prioritization to tighten the guaranteed range for the safe probability. We evaluate the accuracy and efficiency of our approach on benchmarks including ACAS Xu and a rocket lander controller; the results show a clear advantage over the state of the art.
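As an illustration of the boundary-aware sampling idea in the simplest possible setting, the sketch below (hypothetical and one-dimensional, not the paper's algorithm) bisects between a known-safe and a known-unsafe input to pin down a point on the safety boundary; points found this way could then seed the sample set from which regression trees are fitted:

```python
def locate_boundary(f, is_safe, x_safe, x_unsafe, tol=1e-8):
    """Bisect between a safe input and an unsafe input until the
    bracket around the safety boundary is narrower than tol, then
    return the bracket midpoint."""
    assert is_safe(f(x_safe)) and not is_safe(f(x_unsafe))
    while abs(x_unsafe - x_safe) > tol:
        mid = 0.5 * (x_safe + x_unsafe)
        if is_safe(f(mid)):
            x_safe = mid      # boundary lies between mid and x_unsafe
        else:
            x_unsafe = mid    # boundary lies between x_safe and mid
    return 0.5 * (x_safe + x_unsafe)

# Toy network f(x) = x^2 with safety constraint f(x) <= 0.5: the
# crossing converges to sqrt(0.5).
crossing = locate_boundary(lambda x: x * x, lambda y: y <= 0.5, 0.0, 1.0)
```

Concentrating samples near such crossings, rather than spreading them uniformly, is what lets the subdivision resolve the safe/unsafe frontier with far fewer network evaluations.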