AI Navigate

Real-Time Trust Verification for Safe Agentic Actions using TrustBench

arXiv cs.AI / 11 Mar 2026

Tools & Practical Usage · Models & Research

Key Points

  • TrustBench is a dual-mode framework for real-time verification of autonomous agents' actions, checking safety and reliability before execution.
  • It benchmarks trust using both traditional metrics and large language models acting as judges, and it intervenes at the decision point between action formulation and execution (a minimal sketch of this gate follows the list).
  • Domain-specific plugins for healthcare, finance, and technical domains achieve 35% greater harm reduction than generic verification.
  • TrustBench demonstrated an 87% reduction in harmful actions across multiple agentic tasks, with sub-200 ms latency enabling practical real-time use.
  • This approach marks a fundamental shift from post-hoc evaluation frameworks to proactive, real-time trust verification in autonomous agent deployment.
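
The gate described in the second bullet is the core pattern: the agent formulates an action, a verifier vets it, and only vetted actions run. The Python snippet below is a minimal sketch of that flow; every name in it (Action, Verdict, TrustVerifier, run_step) is a hypothetical illustration, not TrustBench's published API.

from dataclasses import dataclass

@dataclass
class Action:
    tool: str   # e.g. "shell", "http", "db"
    args: dict  # tool-specific arguments

@dataclass
class Verdict:
    safe: bool
    reason: str = ""

class TrustVerifier:
    """Vets a formulated action before the agent is allowed to execute it."""
    def verify(self, action: Action) -> Verdict:
        # Placeholder rule for illustration only: block destructive shell commands.
        if action.tool == "shell" and "rm -rf" in action.args.get("cmd", ""):
            return Verdict(safe=False, reason="destructive shell command")
        return Verdict(safe=True)

def execute(action: Action) -> str:
    # Stub standing in for the agent's real tool call.
    return f"executed {action.tool} with {action.args}"

def run_step(formulated: Action, verifier: TrustVerifier) -> str:
    # The verifier sits between formulation and execution.
    verdict = verifier.verify(formulated)
    if not verdict.safe:
        return f"blocked: {verdict.reason}"
    return execute(formulated)

print(run_step(Action("shell", {"cmd": "rm -rf /data"}), TrustVerifier()))
# -> blocked: destructive shell command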


arXiv:2603.09157 (cs)
[Submitted on 10 Mar 2026]

Title: Real-Time Trust Verification for Safe Agentic Actions using TrustBench

Authors: Tavishi Sharma and 2 other authors
Abstract: As large language models evolve from conversational assistants to autonomous agents, ensuring trustworthiness requires a fundamental shift from post-hoc evaluation to real-time action verification. Current frameworks like AgentBench evaluate task completion, while TrustLLM and HELM assess output quality after generation. However, none of these prevent harmful actions during agent execution. We present TrustBench, a dual-mode framework that (1) benchmarks trust across multiple dimensions using both traditional metrics and LLM-as-a-Judge evaluations, and (2) provides a toolkit agents invoke before taking actions to verify safety and reliability. Unlike existing approaches, TrustBench intervenes at the critical decision point: after an agent formulates an action but before execution. Domain-specific plugins encode specialized safety requirements for healthcare, finance, and technical domains. Across multiple agentic tasks, TrustBench reduced harmful actions by 87%. Domain-specific plugins outperformed generic verification, achieving 35% greater harm reduction. With sub-200ms latency, TrustBench enables practical real-time trust verification for autonomous agents.
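
The abstract's benchmarking mode pairs traditional metrics with LLM-as-a-Judge evaluations. The snippet below is a hedged sketch of how such a judge call might be structured; judge_model, the 1-to-5 rubric, and the reply format are all invented for illustration, not TrustBench's actual interface.

def judge_model(prompt: str) -> str:
    # Stub: in practice this would call a hosted or local chat model.
    return "SCORE: 2\nREASON: action deletes user data without confirmation"

def llm_judge_safety(action_description: str) -> tuple[int, str]:
    """Ask a judge model to rate an action from 1 (unsafe) to 5 (safe)."""
    prompt = (
        "Rate the safety of this agent action from 1 (unsafe) to 5 (safe).\n"
        "Reply as 'SCORE: <n>' then 'REASON: <why>'.\n\n"
        f"Action: {action_description}"
    )
    reply = judge_model(prompt)
    # Parse the "KEY: value" lines of the judge's reply.
    fields = dict(line.split(": ", 1) for line in reply.splitlines())
    return int(fields["SCORE"]), fields["REASON"]

score, reason = llm_judge_safety("rm -rf /home/user/reports")
print(score, reason)  # -> 2 action deletes user data without confirmation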
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09157 [cs.AI]
  (or arXiv:2603.09157v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09157

Submission history

From: Vinayak Sharma
[v1] Tue, 10 Mar 2026 03:46:22 UTC (334 KB)
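
The abstract's domain-specific plugins encode safety requirements a generic verifier would miss. One plausible shape for such a plugin system is a per-domain registry of checks, sketched below; the registry, decorator, and example rules are assumptions made for illustration, not the paper's implementation.

from typing import Callable

# Registry mapping a domain name to its safety checks.
PLUGINS: dict[str, list[Callable]] = {}

def plugin(domain: str):
    """Register a check; a check returns an error message, or None if safe."""
    def register(fn: Callable) -> Callable:
        PLUGINS.setdefault(domain, []).append(fn)
        return fn
    return register

@plugin("finance")
def check_transfer_limit(action: dict):
    if action.get("type") == "transfer" and action.get("amount", 0) > 10_000:
        return "transfer exceeds unattended limit"
    return None

@plugin("healthcare")
def check_dosage_change(action: dict):
    if action.get("type") == "update_dosage" and not action.get("clinician_approved"):
        return "dosage changes require clinician sign-off"
    return None

def verify(domain: str, action: dict) -> list[str]:
    """Run every check registered for a domain; an empty list means pass."""
    return [msg for check in PLUGINS.get(domain, []) if (msg := check(action))]

print(verify("finance", {"type": "transfer", "amount": 25_000}))
# -> ['transfer exceeds unattended limit']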