Claude Code source leak reveals how much info Anthropic can hoover up about you and your system
If you loved the data retention of Microsoft Recall, you'll be thrilled with Claude Code
Anthropic's Claude Code lacks the persistent kernel access of a rootkit. But an analysis of its code shows that the agent can exercise far more control over people's computers than even the most clear-eyed reader of contractual terms might suspect. It retains lots of your data and is even willing to hide its authorship from open-source projects that reject AI.
The leak of the company's client source code – details of which have been circulating for many months among those who reverse-engineered the binary – reveals that Claude Code pretty much has the run of any device where it's installed.
Concerns about that came up in court recently in Anthropic's lawsuit against the US Defense Department (Anthropic PBC v. U.S. Department of War et al) for banning the company's AI services following the company's refusal to compromise model safeguards.
As part of its justification for declaring Anthropic a supply chain threat, the US government argued [PDF], there was "substantial risk that Anthropic could attempt to disable its technology or preemptively and surreptitiously alter the behavior of the model in advance or in the middle of ongoing warfighting operations..."
Anthropic disputed that claim in a court filing. "That assertion is unmoored from technical reality: 'Anthropic does not have the access required to disable [its] technology or alter [its] model's behavior before or during ongoing operations,'" it wrote, quoting Thiyagu Ramasamy, head of public sector at Anthropic, in a deposition. "Once deployed in classified environments, Anthropic has no access to (or control over) the model."
In a classified environment, that's credible under certain conditions. For everyone else, Claude has vast powers.
What Claude Code could do in a classified environment
The Register consulted a security researcher who asked to be referred to by the pseudonym "Antlers" to analyze the source for Claude Code.
It appears a government agency like the Defense Department could prevent Claude Code from phoning home or taking remote action by making sure all of the following are true:
- Ensure inference traffic flows via Amazon Bedrock GovCloud or Google AI for Public Sector (Vertex).
- Block data-gathering endpoints (Statsig/GrowthBook/Sentry) with a firewall.
- Block system prompt fingerprinting (via Bedrock, etc).
- Prevent automatic updates via version pinning and blocking update endpoints.
- Disable autoDream, an unreleased background agent being tested that's capable of reading all session transcripts.
We found no specific setting for operating in a classified environment, but Claude Code supports several flags that limit remote communication.
These include:
- CLAUDE_CODE_DISABLE_AUTO_MEMORY=1, which disables all memory and telemetry write operations.
- CLAUDE_CODE_SIMPLE (--bare mode), which strips memory and autoDream entirely.
- ANTHROPIC_BASE_URL, which can be used to reroute API calls to a private endpoint.
- ANTHROPIC_UNIX_SOCKET, which routes authentication through a forwarded socket (the SSH tunnel mode).
- Remote managed settings (policySettings), which can lock down behavior for enterprise deployments, though not entirely.
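Taken together, those flags could be assembled into a hardened launch environment. A minimal sketch, assuming the flags behave as the leaked source suggests; the private endpoint hostname is a placeholder, not a real service:

```typescript
// Sketch: assembling a lockdown environment from the flags reported above.
// Flag names come from the article; their exact semantics are assumptions.
const lockdownEnv: Record<string, string> = {
  ...(process.env as Record<string, string>),
  CLAUDE_CODE_DISABLE_AUTO_MEMORY: "1", // disable all memory/telemetry writes
  CLAUDE_CODE_SIMPLE: "1",              // --bare mode: strips memory and autoDream
  ANTHROPIC_BASE_URL: "https://inference.gov.internal", // placeholder private endpoint
};

// The CLI could then be launched with this environment, e.g. via
// child_process.spawnSync("claude", ["--bare"], { env: lockdownEnv, stdio: "inherit" }).
```

Blocking the telemetry and update endpoints would still have to happen at the firewall, since environment flags only govern the client's own behavior.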
According to Ramasamy, Anthropic hands off model administration to a government customer like the Defense Department. Model updates, with new or removed capabilities, would have to be negotiated.
"Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way," he said in a March 20, 2026 declaration. "In these deployments, only the government and its authorized cloud provider have access to the running system. Anthropic's role is limited to providing the model itself and delivering updates only if and when requested or approved by the customer."
Even so, Anthropic can exert some degree of control based on the usage terms in the applicable contract.
What Claude Code could do to everybody else
For everyone not using a version of Claude Code that's tied to a firewalled public-sector cloud or somehow air-gapped, Anthropic has far more access.
Just as a starting point, Claude users should know that Anthropic receives user prompts and responses that pass through its API, conversations that can reveal not only what was said but file contents and system details.
Yet there are many more ways that the company can potentially receive or collect information, based on the Claude Code source. These include:
- KAIROS (src/bootstrap/state.ts:72), a daemon (background process) gated by the kairosActive flag. It appears to be an unreleased headless "assistant mode" for when the user is not watching the terminal user interface (TUI). It removes the status bar (StatusLine.tsx:33), disables planning mode, and silently suppresses the AskUserQuestion tool (AskUserQuestionTool.tsx:141). It also auto-backgrounds long-running bash commands without notice (BashTool.tsx:976).
- CHICAGO is the codename for computer use and desktop control. It enables the Claude agent to carry out mouse clicks, perform keyboard input, access the clipboard, and capture screenshots. It's publicly launched and available to Pro/Max subscribers and Anthropic employees (designated by the "ant" flag). There's also a separate publicly launched Claude in Chrome service that supports browser automation, with all the system access that entails.
- Persistent telemetry. Initially this was done via Statsig, which was acquired by rival OpenAI last September, presumably triggering the switch to GrowthBook, a platform that supports A/B testing and analytics. When Claude is launched, the analytics service (firstPartyEventLoggingExporter.ts) phones home with the following data, or saves it to ~/.claude/telemetry/ if the network is down: user ID, session ID, app version, platform, terminal type, organization UUID, account UUID, email address if defined, and which feature gates are currently enabled. Anthropic can activate these feature gates midsession, including enabling or disabling analytics.
- Remotely managed settings (remoteManagedSettings/index.ts). For enterprise customers, Anthropic maintains a server that can push a policySettings object that overrides other items in the merge chain, is polled hourly without user interaction, and can set .env variables (e.g. ANTHROPIC_BASE_URL, LD_PRELOAD, PATH); these settings take effect immediately via hot reload (settingsChangeDetector.notifyChange). Users are prompted when there's a "dangerous setting change," but the definition of that term follows from Anthropic's code and thus could be revised. Routine changes (permissions, .env variables, feature flags) appear to happen without notification.
- Auto-updater. The auto-updater (autoUpdater.ts:assertMinVersion()) runs at every launch and pulls the configuration version from Statsig/GrowthBook, so Anthropic can remove or disable specific versions at will.
- Error reporting. When there's an unhandled exception, the error reporting script (sentry.ts) captures the current working directory, potentially showing project names, paths, and other system information. It also reports feature gates active, user ID, email, session ID, and platform information.
- Payload size telemetry. The API call tengu_api_query transmits messageLength, the JSON-serialized byte length of the system prompt, messages, and tool schemas.
- autoDream. Publicly discussed but not officially released, the autoDream service spawns a background subagent that searches (greps) through all JSONL session transcripts to consolidate memories (stored data Claude uses as context for queries). The agent runs in the same process as Claude (under the same API key, with the same network access) and its scan is local. But whatever it writes to MEMORY.md gets injected back into future system prompts and would thus be sent to the API.
- Team memory sync. There's a bidirectional sync service (src/services/teamMemorySync/index.ts) that connects local memory files to api.anthropic.com/api/claude_code/team_memory. It provides a way to share memories with other team members within an organization. The service includes a secret scanner (secretSanner.ts) that uses regex patterns for around 40 known token and API key formats (AWS, Azure, GCP, etc). But sensitive data that doesn't match these regexes might be exposed to other team members through memory sync.
- Experimental skill search (src/tools/SkillTool/SkillTool.ts:108) is a feature flag available only to Anthropic employees. It provides a way to download skill definitions from a remote server (remoteSkillLoader.js); track which remote skills have been used in a session (remoteSkillState.js); execute remotely downloaded skills (executeRemoteSkill() at line 969); and register skills so they persist after a compact operation. If enabled for non-employee accounts (via a GrowthBook feature flag flip, for example), this would be a theoretical remote code execution pathway. Anthropic, or whoever controls the skill search backend, could serve arbitrary prompt injections or instruction overrides in the form of "skills" that get loaded and run in a session.
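To make the launch-time telemetry concrete, here is a sketch of what that record could look like as a typed object. The field names are illustrative guesses at the schema, not lifted from firstPartyEventLoggingExporter.ts itself:

```typescript
// Illustrative shape of the launch telemetry event described above.
// Field names are assumptions; the article lists the data categories.
interface LaunchTelemetry {
  userId: string;
  sessionId: string;
  appVersion: string;
  platform: string;
  terminalType: string;
  organizationUuid: string;
  accountUuid: string;
  email?: string;                // only sent if defined
  enabledFeatureGates: string[]; // gates Anthropic can flip midsession
}

// When the network is down, events are reportedly spooled to
// ~/.claude/telemetry/ rather than dropped; this helper computes that path.
function telemetrySpoolDir(home: string): string {
  return `${home}/.claude/telemetry/`;
}

const sampleEvent: LaunchTelemetry = {
  userId: "u-123",
  sessionId: "s-456",
  appVersion: "1.0.0",
  platform: "linux",
  terminalType: "xterm-256color",
  organizationUuid: "org-uuid",
  accountUuid: "acct-uuid",
  enabledFeatureGates: ["autoDream"],
};
```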
Other capabilities have been documented at ccleaks.com.
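The regex-based redaction the team memory sync scanner is described as doing can be sketched as follows. These patterns are well-known public token formats, not the actual roughly 40 patterns in the leaked source, and the function name is hypothetical:

```typescript
// Sketch of regex-based secret redaction, in the style attributed to the
// team memory sync secret scanner. Patterns and names here are illustrative.
const SECRET_PATTERNS: Record<string, RegExp> = {
  awsAccessKeyId: /\bAKIA[0-9A-Z]{16}\b/, // AWS access key ID format
  githubToken: /\bghp_[A-Za-z0-9]{36}\b/, // GitHub personal access token
  slackToken: /\bxox[baprs]-[A-Za-z0-9-]{10,}\b/, // Slack token family
};

function redact(text: string): string {
  let out = text;
  for (const [name, re] of Object.entries(SECRET_PATTERNS)) {
    out = out.replace(new RegExp(re.source, "g"), `[REDACTED:${name}]`);
  }
  return out;
}
```

The article's caveat falls out directly: anything that doesn't match one of the known patterns (an internal hostname, a homegrown token format) passes through unredacted and syncs to teammates.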
"I don't think people realize that every single file Claude looks at gets saved and uploaded to Anthropic," the researcher "Antlers" told us. "If it's seen a file on your device, Anthropic has a copy."
For Free/Pro/Max customers, Anthropic retains this data either for five years, if the user has chosen to share data for model training, or for 30 days if not. Commercial users (Team, Enterprise, and API) have a standard 30-day retention period and a zero-data-retention option.
For those who recall the debate surrounding Microsoft Recall not long ago, Claude Code's capture of activity is similar. Every read tool call, every Bash tool call, every search (grep) result, and every edit/write of old and new content gets stored locally in plaintext as a JSONL file.
Claude's autoDream agent, once officially released, will search through those and extract data to store in MEMORY.md, which then gets injected into future system prompts and thus hits the API.
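That transcript-to-memory flow can be sketched in a few lines. The JSONL record shape and the filtering rule here are assumptions based on the article's description, not the actual autoDream logic:

```typescript
// Sketch: scanning a plaintext JSONL session transcript for memory
// extraction, as autoDream is described as doing. Record shape is assumed.
interface TranscriptLine {
  type: "read" | "bash" | "grep" | "edit";
  content: string;
}

function extractMemories(jsonl: string): string[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as TranscriptLine)
    .filter((rec) => rec.type === "edit") // e.g. consolidate only edits
    .map((rec) => rec.content);
}

const sample =
  '{"type":"read","content":"src/index.ts"}\n' +
  '{"type":"edit","content":"renamed main()"}';
```

The key point is the last hop: whatever the local scan writes into MEMORY.md is injected into future system prompts, so it leaves the machine with the next API call even though the scan itself runs locally.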
One of the more curious details to emerge from the publication of Claude Code's source is that Anthropic tries to hide AI authorship from contributions to public code repositories – possibly a response to the open source projects that have disallowed AI code contributions. Prompt instructions in a file called undercover.ts state, "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."
Mysterious Melon Mode
There's also a mystery: The current source code lacks a feature called "Melon Mode" that was present in prior reverse engineered versions of the software.
This was behind an Anthropic employee feature flag and only ran internally, not on production builds. A comment attached to the associated code check read, "Enable melon mode for ants if --melon is passed."
"Antlers" speculated that "Melon Mode" might be the code name for a headless agent mode.
Anthropic declined to provide comment for this story. When asked specifically about the function of "Melon Mode," it only noted that the company regularly tests various prototype services, not all of which make it into production. ®