AI Navigate

A Framework for Formalizing LLM Agent Security

arXiv cs.AI / 3/23/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces a contextual security framework for LLM agents with four properties—task alignment, action alignment, source authorization, and data isolation—to capture how security depends on context.
  • It introduces oracle functions that check, as an agent executes a user task, whether each of these properties is violated, enabling contextual detection of security violations (see the sketch after this list).
  • It reformulates attacks such as indirect prompt injection, direct prompt injection, jailbreaks, task drift, and memory poisoning as violations of one or more security properties, yielding precise, contextual definitions.
  • Defenses are described as mechanisms that strengthen oracle checks or perform security property verifications, addressing the utility-security tradeoff in a contextual setting.
  • It also discusses several important future research directions enabled by the framework.
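To make the four properties concrete, here is a minimal Python sketch of oracle-style checks over a single agent step. All names, fields, and signatures are illustrative assumptions for this digest, not the paper's actual formalization.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical data model for one agent step (illustrative only).
@dataclass
class Action:
    tool: str                 # e.g. "send_email"
    args: dict                # tool arguments
    instruction_source: str   # who issued the instruction ("user", "webpage", ...)
    data_sources: set         # privilege labels of the data this action reads

@dataclass
class TaskContext:
    authorized_task: str      # the user's stated objective
    trusted_sources: set      # sources allowed to issue commands
    allowed_flows: set        # permitted (source_label, sink_tool) pairs

# Each oracle returns True when the corresponding property holds for this step.
def task_alignment(current_task: str, ctx: TaskContext) -> bool:
    # The agent is still pursuing the authorized objective.
    return current_task == ctx.authorized_task

def action_alignment(action: Action, ctx: TaskContext,
                     serves: Callable[[Action, str], bool]) -> bool:
    # The individual action serves that objective (judged by a "serves" predicate).
    return serves(action, ctx.authorized_task)

def source_authorization(action: Action, ctx: TaskContext) -> bool:
    # The command behind the action came from an authenticated, trusted source.
    return action.instruction_source in ctx.trusted_sources

def data_isolation(action: Action, ctx: TaskContext) -> bool:
    # Every information flow into this tool respects the privilege boundaries.
    sink = action.tool
    return all((src, sink) in ctx.allowed_flows for src in action.data_sources)

def secure_step(current_task: str, action: Action, ctx: TaskContext,
                serves: Callable[[Action, str], bool]) -> bool:
    """A step is contextually secure only if all four properties hold."""
    return (task_alignment(current_task, ctx)
            and action_alignment(action, ctx, serves)
            and source_authorization(action, ctx)
            and data_isolation(action, ctx))
```

In this reading, attacks correspond to making `secure_step` return False through a specific property, and defenses correspond to strengthening how each check is evaluated.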

Abstract

Security in LLM agents is inherently contextual. For example, the same action taken by an agent may represent legitimate behavior or a security violation depending on whose instruction led to the action, what objective is being pursued, and whether the action serves that objective. However, existing definitions of security attacks against LLM agents often fail to capture this contextual nature. As a result, defenses face a fundamental utility-security tradeoff: applying defenses uniformly across all contexts can lead to significant utility loss, while applying defenses in insufficient or inappropriate contexts can result in security vulnerabilities. In this work, we present a framework that systematizes existing attacks and defenses from the perspective of contextual security. To this end, we propose four security properties that capture contextual security for LLM agents: task alignment (pursuing authorized objectives), action alignment (individual actions serving those objectives), source authorization (executing commands from authenticated sources), and data isolation (ensuring information flows respect privilege boundaries). We further introduce a set of oracle functions that enable verification of whether these security properties are violated as an agent executes a user task. Using this framework, we reformalize existing attacks, such as indirect prompt injection, direct prompt injection, jailbreak, task drift, and memory poisoning, as violations of one or more security properties, thereby providing precise and contextual definitions of these attacks. Similarly, we reformalize defenses as mechanisms that strengthen oracle functions or perform security property checks. Finally, we discuss several important future research directions enabled by our framework.
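The abstract's point that the same action can be legitimate or malicious depending on context can be illustrated with a toy source-authorization check. The scenario and names below are assumptions for illustration, not taken from the paper.

```python
# Toy illustration: the same "send_email" action is secure or insecure
# depending on which source instructed it.
TRUSTED_SOURCES = {"user"}

def source_authorization(instruction_source: str) -> bool:
    """Oracle-style check: did the command come from an authenticated, trusted source?"""
    return instruction_source in TRUSTED_SOURCES

# Legitimate context: the user asked the agent to send the email.
assert source_authorization("user") is True

# Indirect prompt injection: the same action, but the instruction was embedded
# in a retrieved webpage, so source authorization is violated.
assert source_authorization("webpage") is False
```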