Govern your bots carefully or chaos could ensue

The Register / 5/1/2026

💬 Opinion

Key Points

  • The article argues that bot deployments can quickly become unmanageable without strong governance, leading to operational and compliance “chaos.”
  • It emphasizes the need to control the growth of bot ecosystems (“stop the sprawl”) through clear policies, ownership, and oversight mechanisms.
  • The piece highlights that as more autonomous or semi-autonomous bots are introduced, risks increase unless organizations implement guardrails and accountability.
  • It calls for practical governance practices to keep bot behavior consistent, safe, and auditable as usage expands.


Stop the sprawl!

Thu 30 Apr 2026 // 21:27 UTC

With the average Global Fortune 500 enterprise expected to run more than 150,000 AI agents by 2028, up from fewer than 15 today, there’s plenty of room for chaos. Analyst firm Gartner says that, without proper governance, those agents will multiply and run amok.

Governance is key to deriving value from AI deployments, according to Gartner researchers who delved into the problem for the company’s Digital Workplace Summit in London this week.

"As CIOs and IT leaders see an explosion of AI agents across their organizations, many are contending with an ungoverned sprawl of agents," Max Goss, senior director analyst at Gartner, said at the summit.

Half of the organizations Gartner surveyed said they limited internal AI rollouts to low-risk or trusted users. But those cautious organizations were less likely to report high returns from their generative AI tools: companies that expanded access more broadly under strong governance were 3.3 times more likely to report higher value.

Limiting access is not governance, the analyst firm posited.

Organizations that did invest in third-party governance tools were nearly twice as likely to report higher value from their AI deployments, the data showed.

Gartner's governance model calls for a two-tier structure: a centralized AI governance committee at the top — staffed by the CIO, CISO, chief AI officer, enterprise architects, legal and business leaders — sets strategy and policy. Below that, operational governance teams embedded in each application domain translate those policies into specific controls for their platforms.

AI agents are spreading rapidly across enterprise software, from CRM and ERP platforms to digital workplace tools like Microsoft 365 Copilot. That spread is creating what analysts call "agent sprawl": a tangle of autonomous AI tools that exposes organizations to misinformation, data loss, and ballooning IT complexity, Gartner said.

Only 15 percent of respondents said they were considering, piloting, or deploying fully autonomous AI agents, according to Gartner’s 2025 survey of 360 IT application leaders. And just 13 percent of organizations believe that they have the right governance in place overall.

Recent announcements from Google and ServiceNow have boasted of both creating and containing agents. Okta and Commvault have both introduced ways to track agents and, in Commvault's case, roll back their actions.

Gartner laid out a framework for getting AI agent proliferation under control.

Organizations need to establish clear governance policies that define when and how agents get built, who can create and share them, and which data they can access. They should build a centralized inventory of every agent in the enterprise. The analyst firm recommends using AI trust, risk and security management tools, a category it calls AI TRiSM, to discover and catalog agents across both sanctioned platforms and shadow AI deployments.
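A centralized inventory of that kind is, at bottom, a registry keyed by agent identity, with an owner, a platform, and a record of whether the agent was sanctioned or discovered as shadow AI. A minimal sketch in Python (all names and fields here are hypothetical, not from Gartner's framework or any AI TRiSM product):

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    """One entry in a centralized agent inventory (hypothetical schema)."""
    agent_id: str
    owner: str               # accountable team or person
    platform: str            # e.g. a CRM suite or digital workplace tool
    data_scopes: list[str]   # datasets the agent is allowed to touch
    sanctioned: bool         # False for agents found via shadow-AI discovery

class AgentInventory:
    """Minimal registry: every agent must be recorded before it runs."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        # Duplicate IDs usually mean two teams built the same agent twice,
        # which is exactly the sprawl the inventory exists to catch.
        if record.agent_id in self._agents:
            raise ValueError(f"duplicate agent id: {record.agent_id}")
        self._agents[record.agent_id] = record

    def shadow_agents(self) -> list[AgentRecord]:
        """Agents discovered outside sanctioned platforms."""
        return [a for a in self._agents.values() if not a.sanctioned]
```

In practice the "discovery" half would be fed by scanning tools rather than manual registration; the point of the sketch is only that one record per agent, with a named owner, is the precondition for everything that follows.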

Once an organization knows what it has, it can start applying adaptive controls based on each agent's risk level.

Gartner says that every agent needs a defined identity, a clear set of permissions and a lifecycle plan. That means enforcing least-privilege access and retiring redundant agents before they pile up.
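Two of those controls are easy to make concrete: least-privilege access (an agent gets the intersection of what it asks for and what its risk tier permits) and a lifecycle rule that flags idle agents for retirement. A minimal sketch, with entirely hypothetical tier names and a made-up 90-day idle threshold:

```python
from datetime import date, timedelta

# Hypothetical risk tiers mapped to the widest permission set each tier allows.
TIER_PERMISSIONS: dict[str, set[str]] = {
    "low":    {"read_public"},
    "medium": {"read_public", "read_internal"},
    "high":   {"read_public", "read_internal", "write_internal"},
}

def effective_permissions(requested: set[str], risk_tier: str) -> set[str]:
    """Least privilege: grant only what the tier allows AND the agent asked for."""
    return requested & TIER_PERMISSIONS[risk_tier]

def due_for_retirement(last_used: date, ttl_days: int = 90) -> bool:
    """Lifecycle check: flag agents idle past their TTL before they pile up."""
    return date.today() - last_used > timedelta(days=ttl_days)
```

So an agent in the "medium" tier that requests both read and write access to internal data would come back with read access only; the write request is silently dropped rather than escalated.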

Companies need to find a way to continuously monitor agent behavior, tracking usage patterns, flagging anomalies and correcting agents that drift beyond their scope.
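Both halves of that monitoring loop reduce to simple checks: compare what an agent actually did against its declared scope, and compare today's usage volume against its history. A toy sketch of each (the call names and the z-score threshold are illustrative assumptions, not anyone's product logic):

```python
from statistics import mean, stdev

def out_of_scope_calls(observed: list[str], allowed_scopes: set[str]) -> list[str]:
    """Scope drift: any call the agent made that its policy never granted."""
    return [call for call in observed if call not in allowed_scopes]

def usage_anomaly(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Crude anomaly flag: is today's call volume far above the historical norm?"""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # flat history: any change is an anomaly
    return (today - mu) / sigma > z_threshold
```

A real deployment would feed these from audit logs and route flagged agents back to the operational governance team for correction, but the shape of the check is the same.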

Gartner's analysts said responsible AI education will become as essential as cybersecurity training and will likely fold into mandatory security programs. ®
