Shadow IT has given way to shadow AI. Enter AI-BOMs
'If you don't have visibility, you can't understand what to protect'
When it comes to securing enterprise supply chains, now heavily infused with AI applications and agents, a software bill of materials (SBOM) no longer provides a complete inventory of all the components in the environment. Enter AI-BOMs.
While a traditional SBOM includes all of the software packages and dependencies in the organization, an AI-BOM aims to cover the gaps introduced by AI assets by providing visibility across all of the models, datasets, SDK libraries, MCP servers, ML frameworks, agents, agentic skills, prompts, and other AI tools - plus how these AI components interact with each other and connect to workflows.
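As a rough sketch of what that extra coverage looks like in practice, here's a minimal AI-BOM inventory entry expressed in Python. The field names and assets are invented for illustration, not drawn from any particular AI-BOM standard:

```python
# A minimal sketch of one AI-BOM entry: each AI asset, where it came from,
# and what it connects to. Field names here are illustrative only.
def make_ai_bom_entry(name, kind, version, source, connects_to):
    """Record one AI asset (model, dataset, MCP server, agent, ...)
    plus its downstream connections, so nothing stays invisible."""
    return {
        "name": name,
        "type": kind,                # e.g. "model", "dataset", "mcp-server", "agent"
        "version": version,
        "source": source,            # registry, vendor, or internal team
        "connects_to": connects_to,  # workflows and components it talks to
    }

ai_bom = [
    make_ai_bom_entry("support-bot", "agent", "1.2.0", "internal",
                      ["ticket-workflow", "crm-api"]),
    make_ai_bom_entry("llama-finetune", "model", "2026-01", "hf:meta-llama",
                      ["support-bot"]),
]

# The inventory can then answer basic visibility questions:
models = [e["name"] for e in ai_bom if e["type"] == "model"]
print(models)  # ['llama-finetune']
```

The point isn't the data structure itself, but that every asset and its connections get recorded somewhere queryable.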
"Imagine if AI is a birthday cake in the middle of this room, but you don't know how it got there," Ian Swanson, VP of AI security at Palo Alto Networks said in an interview with The Register. "You don't know the recipe, you don't know the ingredients, you don't know the baker. Would you eat a slice of that cake?"
A lot of organizations are eating the cake anyway.
In addition to the company-sanctioned models and AI used in the tech stack, there's also the problem of "shadow AI" - we used to call this "shadow IT" - and these unsanctioned tools also need to be brought out of the shadows so they can be accounted for. This includes all the vibe coding platforms and agents that individual employees spin up, along with any external chatbots they interact with on work computers and potentially input sensitive corporate data into.
To secure all of these AI ingredients baked into the cake, companies first need to know what they are, what they connect to, and how they are being used.
"In general, organizations that are trying to wrap their head around AI security," Amy Chang, Cisco's head of AI threat intelligence and security research told The Register. "They want a way to be able to identify what AI assets exist in their environment. A tool like the AI bill of materials is one of those first places that you can start to get a better understanding of what exists."
Up next: model provenance
Cisco previously open sourced its AI-BOM, making it free for anyone to scan codebases, container images, and cloud environments to produce this bill of materials.
On Friday, it also released its Model Provenance Kit, an open source tool for tracking model lineage. In a blog announcing the new repository, Chang and other AI researchers describe it as a DNA test for AI models: it determines provenance using one of two modes, compare or scan.
Compare mode takes any two models and shows their similarity across metadata, tokenizer structure, and weight-level signals, along with a final composite score. Scan mode starts with a single model and matches it against a database to determine the closest lineage candidates - and to help with this mode, Cisco also released a model fingerprint database covering about 150 base models across more than 45 families and over 20 publishers.
Chang told us that the new AI tool performs two gate checks. "First, at the metadata level, it compares the information from the base model with the fine-tuned version of the model to delineate some sort of provenance-linked relationship - like this was derived from Meta Llama 4, or derived from Alibaba Qwen3," she said.
"Then, what we do is look at weight-based signifiers. So now we're providing a sort of verifiable, repeatable and provable way to attest that the models that you use and deploy, that are customer facing, that are ingesting all this data, are truly the models that that you're supposed to be using, or that that are within the confines of your risk tolerance."
During our interview, Chang pointed to Cursor's Composer 2, which is partly built on Kimi 2.5, a Chinese open source model. "They were very quick to admit that, yes, we used the Chinese model to build this," she said. "But that could have regulatory or compliance risk."
Case in point: The European Union's AI Act mandates organizations document training data, characteristics of training methodology, and risk assessments for "high-risk systems."
Google's Wiz, in its AI-BOMs, also accounts for all of the tools on the developer's workstation, such as the laptop or integrated development environment, that went into building the AI application.
"Many people define visibility or BOMs by what's actually in the final artifact, but we also extend the definition of BOMs in general and AI-BOMs in particular to include the AI tools that went into building that application," Ziad Ghalleb, Wiz technical product marketing manager, told us.
"And then another important aspect is the identities that are attached to these AI workloads, because all these agents or models, tools, etc., are tied to a specific identity inside your environment," Ghalleb added. "So you need to be looking at these non-human identities that are related to these systems. It's not just the resources. It's also the identities and the permission sets that are tied to them."
All of this boils down to visibility and security. "If you don't have visibility of these workloads, then you can't really understand what it is to protect," Swanson said.
Protection against poisonings
Enterprises aren't the only ones madly rushing to incorporate AI tools into their workloads and processes, as everyone who reads The Reg likely knows. Criminals are also using these same tools to move faster and make their attacks more efficient.
As Sherrod DeGrippo, Microsoft's GM of global threat intelligence, told The Register in a previous interview: This includes tasks such as performing reconnaissance on compromised computers, and standing up and managing attack infrastructure.
"Agentic, automated reconnaissance against systems is something that is worth taking a look at," DeGrippo said. "Go find out about XYZ, and come back to me with everything you've seen. Go scan the net blocks owned by this particular entity."
According to Swanson, this is also a case where having an AI-BOM can help defenders respond faster. He says he can't name the company, but in one incident that Palo Alto Networks responded to, a criminal group used AI to scout out the victim organization and locate exposed endpoints.
"One of the things that they did is get access to system prompts, the instructions to an AI workload that tells it what it can do, and what it can't do," Swanson said. And once the attacker gained access to the company's internal AI's system prompts, they modified them to force the AI to do things that it shouldn't - like steal data, and send it to an external email account.
An AI-BOM would provide an understanding of the AI system's configurations and dependencies at a specific state in time - and also indicate any changes.
"If you had understanding of state and understanding of state changes, then you would be able to go back to an AI bill of materials and say: 'What system prompt was used within the ingredients to create the AI application?' And then see it's changed from a prior state to a new state. So we should probably check this and see if there's anything bad that's happening here," Swanson said. "And in that case, you'd be able to catch it."
Other supply chain attacks such as model and skills poisoning underscore the risks of not knowing what AI tools are in an IT environment.
"Skills that people use in coordination with a lot of these coding assistants are pretty easy to tamper with, and so it's important to be able to scan them to make sure that somebody is not manipulating the capabilities," Swanson said. If a skill is supposed to provide a weather forecast, it shouldn't also steal credentials or leak secrets, he explained.
"Understand state changes, constantly scan these artifacts for supply chain risks, and then at the point of runtime, when your AI application is live, also look at all communications to make sure that nothing bad is happening," Swanson said.
AI-BOMs (and their software counterparts) can also help organizations quickly identify compromised open source code running on corporate systems. For example: the recent rash of poisoned npm and PyPI packages, and the earlier Shai-Hulud credential-stealing worm attacks. Both of these campaigns targeted code commonly integrated into AI applications.
Even in the absence of a CVE identifier, an AI-BOM lets users query "related libraries or packages," and then identify any malicious versions in their environment, Ghalleb said. "There's no CVE attached to them, but at least you know how to remove these to contain an evolving threat." ®
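That CVE-less lookup is just a direct match between an advisory's known-bad versions and the BOM inventory. The package names and hosts below are hypothetical:

```python
# Sketch: even without a CVE, known-bad package versions from an advisory
# can be matched directly against the (AI-)BOM inventory.
KNOWN_BAD = {("left-pad-ai", "1.3.7"), ("tokenz", "0.0.9")}  # invented names

inventory = [
    {"package": "tokenz", "version": "0.0.9", "host": "build-agent-3"},
    {"package": "requests", "version": "2.32.0", "host": "build-agent-3"},
]

hits = [e for e in inventory if (e["package"], e["version"]) in KNOWN_BAD]
for e in hits:
    print(f"remove {e['package']}=={e['version']} from {e['host']}")
```

No vulnerability identifier required - the inventory itself tells you what to rip out and where.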