CORA: Conformal Risk-Controlled Agents for Safeguarded Mobile GUI Automation

arXiv cs.LG / 4/13/2026


Key Points

  • CORA is proposed as a post-policy, pre-action safeguarding framework for VLM-powered autonomous mobile GUI agents, focusing on statistically guaranteed reduction of harmful executed actions.
  • The method trains a Guardian model to estimate action-conditional risk and uses Conformal Risk Control to create a calibrated execute/abstain decision boundary aligned with a user-specified risk budget.
  • Rejected (high-risk) actions are routed to a trainable Diagnostician that performs multimodal reasoning to recommend interventions such as confirm, reflect, or abort, aiming to reduce user burden.
  • A Goal-Lock mechanism is introduced to anchor risk assessment to clarified, frozen user intent, helping resist visual injection attacks.
  • The paper also introduces the Phone-Harm benchmark with step-level harm labels under real-world mobile settings and reports experimental results showing improved safety–helpfulness–interruption trade-offs, with code and benchmarks published online.

Abstract

Graphical user interface (GUI) agents powered by vision language models (VLMs) are rapidly moving from passive assistance to autonomous operation. However, this unrestricted action space exposes users to severe and irreversible financial, privacy, or social harm. Existing safeguards rely on prompt engineering, brittle heuristics, and VLM-as-critic judgments, which lack formal verification and user-tunable guarantees. We propose CORA (COnformal Risk-controlled GUI Agent), a post-policy, pre-action safeguarding framework that provides statistical guarantees on the rate of harmful executed actions. CORA reformulates safety as selective action execution: we train a Guardian model to estimate action-conditional risk for each proposed step. Rather than thresholding raw scores, we leverage Conformal Risk Control to calibrate an execute/abstain boundary that satisfies a user-specified risk budget, and route rejected actions to a trainable Diagnostician model, which performs multimodal reasoning over them to recommend interventions (e.g., confirm, reflect, or abort) that minimize user burden. A Goal-Lock mechanism anchors risk assessment to a clarified, frozen user intent to resist visual injection attacks. To rigorously evaluate this paradigm, we introduce Phone-Harm, a new benchmark of mobile safety violations with step-level harm labels under real-world settings. Experiments on Phone-Harm and public benchmarks against diverse baselines validate that CORA improves the safety–helpfulness–interruption Pareto frontier, offering a practical, statistically grounded safety paradigm for autonomous GUI execution. Code and benchmark are available at cora-agent.github.io.
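To make the calibration step concrete, here is a minimal sketch of how a Conformal Risk Control execute/abstain threshold could be fitted to Guardian risk scores. This is an illustration of the general CRC recipe (pick the most permissive threshold whose finite-sample risk bound stays within the budget), not the paper's actual implementation; the function name `calibrate_threshold`, the 0/1 harm loss, and the specific bound form are assumptions for the sketch.

```python
def calibrate_threshold(scores, harmful, alpha, loss_bound=1.0):
    """Largest execute/abstain threshold tau whose conformal risk bound
    on executed-action harm stays within the user budget alpha.

    scores:     Guardian risk scores on a held-out calibration set
    harmful:    1 if the corresponding action is truly harmful, else 0
    alpha:      user-specified risk budget, e.g. 0.05
    loss_bound: upper bound B on the per-step loss (1 for 0/1 harm)
    """
    n = len(scores)
    best = float("-inf")  # default: abstain on every action
    # Candidate thresholds: abstain-on-all plus every observed score.
    # "Execute" means score <= tau, so empirical risk is monotone in tau,
    # as CRC requires.
    for tau in [float("-inf")] + sorted(set(scores)):
        # Empirical risk: fraction of calibration steps on which a
        # harmful action would have been executed at this threshold.
        risk_hat = sum(h for s, h in zip(scores, harmful) if s <= tau) / n
        # Finite-sample CRC bound: (n * R_hat + B) / (n + 1) <= alpha
        if (n * risk_hat + loss_bound) / (n + 1) <= alpha:
            best = max(best, tau)
    return best
```

At deployment, a proposed action would be executed when its Guardian score falls at or below the calibrated threshold and routed to the Diagnostician otherwise; tightening `alpha` monotonically shrinks the set of auto-executed actions.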