Anthropic’s Claude Code gets ‘safer’ auto mode

The Verge / 3/25/2026

📰 News · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage

Key Points

  • Anthropic launched an “auto mode” for Claude Code that enables the agent to make permissions-level decisions on a user’s behalf with less hands-on supervision.
  • The feature is positioned as a safer middle ground, aiming to avoid the risks that come with fully autonomous agent actions.
  • Auto mode is designed to flag and block potentially dangerous steps (e.g., deleting files, sending sensitive data, executing malicious code, or following hidden instructions) before they run.
  • By intercepting risky actions and prompting for confirmation, Anthropic intends to reduce the likelihood of unwanted or harmful outcomes while preserving practical autonomy.

Anthropic has launched an "auto mode" for Claude Code, a new feature that lets the AI agent make permissions-level decisions on users' behalf. The company says the feature offers vibe coders a safer middle ground between constant handholding and granting the model dangerous levels of autonomy.

Claude Code is capable of acting independently on users' behalf, a capability that is useful but risky: it can also do things users don't want, such as deleting files, sending out sensitive data, or executing malicious code or hidden instructions. Auto mode is designed to prevent this, flagging and blocking potentially risky actions before they run and offering the agent a chan …

Read the full story at The Verge.