Google unleashes Gemini AI agents on the dark web

The Register / 3/24/2026


Key Points

  • The article reports Google deploying Gemini AI agents to operate on the dark web, positioning them as capable of analyzing large volumes of activity.
  • It claims the system can process millions of daily events and achieve “98 percent” accuracy, suggesting automated threat-monitoring or investigative capabilities.
  • The focus is framed as a security development, implying new tactics for detecting or tracking illicit activity at scale.
  • The piece highlights the significance of AI agents moving beyond traditional analytics into continuous, operational engagement in high-risk environments.
  • Overall, it signals an escalation in how major AI providers may use agentic models for cybersecurity intelligence and enforcement support.


Claims it can analyze millions of daily events with 98 percent accuracy

Mon 23 Mar 2026 // 15:05 UTC

Google's Gemini AI agents are crawling the dark web, sifting through upward of 10 million posts a day to find a handful of threats relevant to a particular organization.

Available now in public preview, the dark web intelligence service built into Google Threat Intelligence uses Gemini's models to build a profile of a user's organization. It then scours the dark web to determine the security risks it faces.

Google threat hunters told The Register that their internal tests show it can analyze millions of daily external events with 98 percent accuracy.

"We are now processing every post from the dark web using Gemini, and from there distilling down what threats actually matter," Google Threat Intelligence product manager Brandon Wood told us, adding that this includes initial access broker activity, data leaks, insider threats, and other intel.

"We're seeing anywhere from eight to 10 million events a day, and we're able to distill that down in very short throughput," he said.

For comparison, traditional dark-web monitoring tools mostly scrape for key terms and use regex to match those terms, generating between 80 percent and 90 percent false positives, according to Wood. "It mostly just creates noise for the threat intel team," he said.
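To see why keyword scraping is so noisy, consider a minimal sketch of the traditional approach Wood describes. The watchlist terms and posts below are hypothetical; the point is that a regex match fires on any mention of a term, with no sense of context:

```python
import re

# Hypothetical watchlist a traditional dark-web monitor might scan for.
WATCH_TERMS = [r"\bacme\b", r"\bacme\s*bank\b"]
PATTERN = re.compile("|".join(WATCH_TERMS), re.IGNORECASE)

posts = [
    "Selling VPN access to Acme Bank internal network",  # real threat
    "Anyone know if Acme water filters are any good?",   # noise
    "acme bank mobile app keeps crashing lol",           # noise
]

# Every post mentioning a watched term is flagged; the threat intel
# team still has to separate the one real threat from the noise.
hits = [p for p in posts if PATTERN.search(p)]
print(len(hits))  # all three posts match
```

With two of three hits being irrelevant, this toy run lands in the same 80-90 percent false-positive range Wood cites for regex-based tools.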

Here's how the new service works. A customer – let's say Acme Bank – opens the dark web monitoring module for the first time. They confirm they are Acme Bank, and Gemini builds a customer profile.

"Within a couple of minutes, we return a profile with a deep understanding of the customer, their environment, their business operations, VIPs, brands, technology," Wood said. "They are things that are open source, publicly available, and we provide citations of all of that content as well, trying to shrink the black boxes of AI and LLMs."

Google's tool next automatically generates alerts, going back seven days to classify potential threats. The AI agents tag dark web data and then perform a vector comparison to detect stolen data or malicious activity that may affect the organization.
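The tag-then-compare step can be sketched roughly as follows. The embedding here is a toy bag-of-words stand-in for whatever model Google actually uses, and the relevance threshold is invented; only the shape of the comparison (profile vector vs. post vectors, ranked by cosine similarity) reflects the description above:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a learned model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical org profile and dark-web posts.
profile = embed("acme bank north american bank 50000 employees retail banking")
posts = {
    "access": "selling admin access to north american bank 50000 employees",
    "noise":  "fresh batch of streaming accounts for sale cheap",
}

THRESHOLD = 0.3  # invented cut-off for "relevant to this organization"
scores = {name: cosine(profile, embed(text)) for name, text in posts.items()}
relevant = [name for name, s in scores.items() if s >= THRESHOLD]
print(relevant)  # only the access-broker post clears the threshold
```

Unlike keyword matching, the similarity score lets unrelated posts fall away even when they share the odd word with the profile.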

"Within a couple of minutes, alerts are flowing in over the last week, and we prioritize each of those alerts in really, really simple terms," Wood said. "We look at the relevance of each of these alerts. Is the threat actor specifically talking about elements in my organization profile? And then could they be talking about elements in my profile? That's a little bit more ambiguous."

So, for example, if a criminal on the dark web claims they are selling access to a large North American bank with more than 50,000 employees and $50 billion of assets under management, Gemini will draw connections between Acme Bank's profile and the attacker's claims, and identify this as a high-severity threat.
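Wood's two-tier relevance question (is the actor specifically naming profile elements, or only plausibly describing them?) could be modeled along these lines. The profile fields, claim schema, and tier names are illustrative, not Google's actual design:

```python
# Hypothetical org profile derived from open-source data.
profile = {
    "name": "acme bank",
    "region": "north america",
    "employees": 52_000,
    "aum_usd_bn": 55,
}

def relevance(claim: dict) -> str:
    """Classify a dark-web claim against the profile (illustrative tiers)."""
    # Specific: the actor names the organization directly.
    if profile["name"] in claim.get("text", "").lower():
        return "specific"
    # Ambiguous: the described victim is consistent with the profile.
    consistent = (
        claim.get("region") == profile["region"]
        and claim.get("min_employees", 0) <= profile["employees"]
        and claim.get("min_aum_usd_bn", 0) <= profile["aum_usd_bn"]
    )
    return "ambiguous" if consistent else "unrelated"

claim = {
    "text": "selling access to a large NA bank, 50k+ staff, $50B AUM",
    "region": "north america",
    "min_employees": 50_000,
    "min_aum_usd_bn": 50,
}
print(relevance(claim))  # fits the profile without naming it
```

In the Acme Bank scenario from the article, the claim never names the bank, yet every stated attribute is consistent with the profile, which is exactly the "could be talking about my organization" case that gets escalated.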

Gemini also pulls in knowledge from Google Threat Intelligence Group's human analysts, who track 627 threat groups.

"We're looking at how severe is this initial access builder? How severe is this data leak? And using Gemini to read the context that we put into the background and then generate that alert," Wood says. "Our goal is to move away from hundreds and thousands of mostly false positives."

Google hopes its customers will come to trust AI-generated recommendations that describe critical threats.

Depending on the level of access given to Gemini's dark web intel agents, however, it does seem that the AI tool could create yet another attack vector for cybercriminals to exploit.

"We're mostly focused on publicly available information and context that the user chooses to put into the platform," Wood said. "Google deeply cares about protecting user information. We're looking carefully at how we integrate more and more insights and capabilities into it, but we really do work with our users and customers to make sure there's a ton of transparency on how they want to exchange information."

But wait, there's more (AI agents)

In addition to the dark web intelligence tool, Google also added AI agents (in preview) to Google Security Operations to automate threat responses. Customers can embed agents, including Google's triage and investigation agent, directly into workflows, allowing it to autonomously investigate alerts, gather evidence for analysis, and provide verdicts – along with explanations of its reasoning.

Further, Google Security Operations customers can now build their own enterprise security agents with remote model context protocol (MCP) server support. This feature, now generally available, means customers do not need to host their own security operations MCP server client. It also enables unified governance and controls within Google Security Operations for the security agents they build. ®
