Free LLM security audit

Reddit r/artificial / 4/15/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage

Key Points

  • Arc Sentry is presented as a pre-generation guardrail for open-source LLMs that blocks prompt injection before the model generates a response, avoiding post-output filtering.
  • The approach works by inspecting the model’s residual stream and is claimed to function across Mistral, Qwen, and Llama, addressing prompt injection (ranked #1 in OWASP LLM Top 10).
  • The author argues that many defenses are too late because they only scan outputs or text patterns after the model has already processed the attack.
  • A limited offer is made to provide 5 free security audits within 24 hours to test real deployments using JailbreakBench and Garak attack prompts and deliver a detailed report.
  • After the free testing, deployment is offered as a paid service ($199/month), positioning Arc Sentry as a practical security tool for LLM deployments.

I built Arc Sentry, a pre-generation guardrail for open-source LLMs that blocks prompt injection before the model generates a response. It works on Mistral, Qwen, and Llama by reading the residual stream rather than filtering outputs.

Prompt injection is #1 in the OWASP LLM Top 10. Most defenses scan outputs or text patterns; by the time they fire, the model has already processed the attack. Arc Sentry blocks before generate() is called.
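The post doesn't share Arc Sentry's implementation, but the general "pre-generation probe on the residual stream" idea can be sketched as follows. Everything here is an assumption for illustration: the hidden size, the probe layer, and the random probe weights are placeholders (a real probe would be trained on labeled injection data, and `resid_last_token` would come from a forward pass with hidden states enabled, before any call to `generate()`).

```python
import numpy as np

# Hypothetical linear probe over a model's residual stream (hidden states).
# HIDDEN and the weights below are illustrative placeholders, not Arc Sentry's.
HIDDEN = 4096  # e.g. a 7B-class model's hidden size (assumed)
rng = np.random.default_rng(0)
probe_w = rng.standard_normal(HIDDEN)  # a real probe is trained, not random
probe_b = 0.0

def injection_score(resid_last_token: np.ndarray) -> float:
    """Logistic score from the last-token residual at some probe layer.
    In practice resid_last_token would be extracted from the prompt's
    forward pass (e.g. output_hidden_states=True in transformers)."""
    z = float(resid_last_token @ probe_w + probe_b)
    return 1.0 / (1.0 + np.exp(-z))

def guarded_generate(resid_last_token, generate_fn, threshold=0.5):
    """Block BEFORE generation: if the probe flags the prompt,
    generate_fn (i.e. model.generate) is never called."""
    if injection_score(resid_last_token) >= threshold:
        return None  # refuse instead of generating
    return generate_fn()

# Demo with a synthetic residual vector standing in for a real hidden state.
resid = rng.standard_normal(HIDDEN)
result = guarded_generate(resid, lambda: "model output")
```

The design point is the ordering: the classifier runs on internal activations computed while the model reads the prompt, so a blocked prompt costs one forward pass but zero generated tokens.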

I want to test it on real deployments, so I’m offering 5 free security audits this week.

What I need from you:

• Your system prompt or a description of what your bot does
• 5-10 examples of normal user messages

What you get back within 24 hours:

• Your bot tested against JailbreakBench and Garak attack prompts
• Full report showing what got blocked and what didn’t
• Honest assessment of where it works and where it doesn’t

No call. Email only. 9hannahnine@gmail.com

If it’s useful after seeing the results, it’s $199/month to deploy.

submitted by /u/Turbulent-Tap6723