Human-Guided Harm Recovery for Computer Use Agents

arXiv cs.AI · April 22, 2026


Key Points

  • The paper addresses an overlooked gap in the safety of LLM computer-use agents: how to recover from harmful actions after they occur, not just how to prevent them.
  • It defines “harm recovery” as steering an agent from a harmful state back to a safe one in a way that matches human preferences, supported by a user study and a natural-language rubric.
  • Using 1,150 pairwise judgments, the authors find that what users value in recovery is context-dependent (e.g., preferring pragmatic, targeted steps over broad long-term plans).
  • They implement these insights via a reward model that re-ranks candidate recovery plans, and introduce BackBench, a 50-task benchmark for testing recovery from harmful states.
  • Human evaluation indicates that the reward-model-based scaffold produces higher-quality recovery trajectories than baseline agents and rubric-only scaffolds.
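The re-ranking step described above follows a familiar best-of-N pattern: the agent scaffold proposes several candidate recovery plans, a learned reward model scores each, and the highest-scoring plan is executed. The sketch below illustrates the pattern only; `RecoveryPlan`, `rerank_plans`, and the toy reward function are hypothetical stand-ins, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class RecoveryPlan:
    """A candidate plan for steering the agent back to a safe state."""
    steps: List[str] = field(default_factory=list)

def rerank_plans(plans: List[RecoveryPlan],
                 reward_fn: Callable[[RecoveryPlan], float]) -> List[RecoveryPlan]:
    """Score each candidate with the reward model and sort best-first."""
    return sorted(plans, key=reward_fn, reverse=True)

# Toy reward standing in for the learned model: it prefers short, targeted
# plans, loosely mirroring the paper's finding that users favor pragmatic,
# targeted steps over broad long-term approaches.
def toy_reward(plan: RecoveryPlan) -> float:
    return -float(len(plan.steps))

candidates = [
    RecoveryPlan(steps=["audit every file on disk", "rebuild the system", "draft a report"]),
    RecoveryPlan(steps=["restore the deleted file from the trash"]),
]
best = rerank_plans(candidates, toy_reward)[0]
print(best.steps)  # the single-step, targeted plan wins under this toy reward
```

At test time the agent would execute `best` rather than committing to its first proposal, so the reward model acts as a preference-aligned filter over the scaffold's own generations.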

Abstract

As LM agents gain the ability to execute actions on real computer systems, we need ways to not only prevent harmful actions at scale but also effectively remediate harm when prevention fails. We formalize a solution to this neglected challenge in post-execution safeguards as harm recovery: the problem of optimally steering an agent from a harmful state back to a safe one in alignment with human preferences. We ground preference-aligned recovery through a formative user study that identifies valued recovery dimensions and produces a natural language rubric. Our dataset of 1,150 pairwise judgments reveals context-dependent shifts in attribute importance, such as preferences for pragmatic, targeted strategies over comprehensive long-term approaches. We operationalize these learned insights in a reward model, re-ranking multiple candidate recovery plans generated by an agent scaffold at test time. To evaluate recovery capabilities systematically, we introduce BackBench, a benchmark of 50 computer-use tasks that test an agent's ability to recover from harmful states. Human evaluation shows our reward model scaffold yields higher-quality recovery trajectories than base agents and rubric-based scaffolds. Together, these contributions lay the foundation for a new class of agent safety methods -- ones that confront harm not only by preventing it, but by navigating its aftermath with alignment and intent.