Git identity spoof fools Claude into giving bad code the nod

The Register / 4/16/2026


Key Points

  • The article reports that a forged Git identity/metadata trick can cause Claude to approve code changes that are actually hostile or low quality.
  • It highlights that the AI reviewer appears to treat the submitted patch as trustworthy when it looks like it came from a known maintainer.
  • The issue underscores a security gap where LLM-based code review can be influenced by provenance signals rather than independently validating the underlying code.
  • The incident suggests teams should not rely on identity metadata alone when using AI systems for code review and should strengthen verification and review workflows.

Forged metadata made AI reviewer treat hostile changes as though they came from known maintainer

Thu 16 Apr 2026 // 12:57 UTC

Security boffins say Anthropic's Claude can be tricked into approving malicious code with just two Git commands by spoofing a trusted developer's identity.

In a blog post published this week, Manifold Security showed how an AI-powered code reviewer built on Claude accepted changes that appeared to come from a legitimate maintainer. By setting a fake author name and email in Git, the team made a commit appear to originate from a trusted source, then passed it through an automated review flow where the model approved it.
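The post doesn't reproduce the exact commands, but the obvious pair is `git config user.name` and `git config user.email` – Git records whatever identity it's given, with no verification. A minimal sketch, using placeholder repo and maintainer details rather than anything from Manifold's test:

```shell
# Create a throwaway repo and claim someone else's identity.
# "Trusted Maintainer" and the email are illustrative placeholders.
git init -q demo
git -C demo config user.name "Trusted Maintainer"
git -C demo config user.email "maintainer@example.com"

# Any commit made now carries the spoofed author metadata.
git -C demo commit -q --allow-empty -m "innocuous-looking change"

# The log shows the claimed identity as if it were genuine.
git -C demo log -1 --format='%an <%ae>'
```

Nothing here exploits Git itself; the log simply reports the configured name and email, which is all an automated reviewer sees unless it checks signatures.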

This is not a Git vulnerability – commit metadata has always been relatively easy to fake unless additional controls like signing are enforced. The problem arises when that metadata is treated as a signal of trust. In this case, the model appeared to give weight to the author's claimed identity rather than independently assessing whether the change itself was sound.
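As a sketch of the signing control the article alludes to: a review gate keyed on `git verify-commit` rejects commits whose claimed authorship isn't backed by a valid signature, which spoofed name/email metadata alone cannot produce. The repo name and identity below are placeholders:

```shell
# Unsigned commit under an arbitrary claimed identity (placeholders).
git init -q sigdemo
git -C sigdemo config user.name "Anyone"
git -C sigdemo config user.email "anyone@example.com"
git -C sigdemo commit -q --allow-empty -m "unsigned change"

# verify-commit exits non-zero when HEAD lacks a valid signature,
# so a gate built on it refuses the commit regardless of its metadata.
git -C sigdemo verify-commit HEAD 2>/dev/null || echo "REJECT: no valid signature"
```

In a real pipeline the check would run against the submitted head commit before any AI review, so identity is established cryptographically rather than read off the commit header.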

In Manifold's test, the workflow was set to auto-approve pull requests from "recognized industry legends," so the trust rule was obvious. In the real world, it's usually less explicit – checks against org membership, past contributions, or a maintainer list – but it's the same problem underneath. None of that proves who actually made the change.

"The motivation behind such configurations is understandable. Maintainers of popular open source projects are drowning in PRs," Manifold said. "Automating review for contributions from known, trusted figures reduces the bottleneck. But it creates an assumption that authorship can be trusted at face value."

Manifold compares the setup to the recent OpenClaw Cline package compromise, where a poisoned package slipped into a trusted environment and was treated as legitimate long enough to cause damage. In both cases, something that appeared to come from a reliable source was given a level of trust it hadn't earned.

What changes with systems like Claude is how that trust is applied. A human reviewer might question why a particular maintainer is making an unexpected change or take a closer look at the diff. An automated reviewer is more likely to follow its internal signals consistently, and if those signals include author identity, spoofing that identity becomes a way in.

"Open source libraries are increasingly relying on AI-powered workflow tools to auto-review and approve pull requests, yet these agents are easily fooled, creating opportunities for threat actors to bypass security controls and poison popular code repositories," Manifold warned.

Manifold's takeaway is that the guardrails can't live in the model alone. If nothing else in the pipeline is verifying who actually did what, bad code won't just be suggested – it'll get pushed. ®
