Vera, a local-first code search for AI agents (Rust, ONNX, 63 languages, CLI + SKILL/MCP)

Reddit r/LocalLLaMA / 3/28/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage

Key Points

  • Vera is a local-first code indexing and search tool built specifically to support AI agents, aiming to reduce friction and avoid the eval-quality regressions seen in some existing MCP code-search tools.
  • It combines parallel BM25 keyword search and vector similarity, merges results with Reciprocal Rank Fusion, and then applies a cross-encoder reranker that jointly scores query and candidate text to improve retrieval quality.
  • The project reports a large quality lift, citing 0.60 MRR@10 with reranking versus 0.28 when using vector retrieval alone.
  • Vera emphasizes fully local operation by using SQLite + sqvec + Tantivy (embedded) rather than requiring cloud storage or running external systems like Qdrant.
  • It indexes code from 63 languages using tree-sitter structural parsing (with .gitignore and configurable .veraignore) and ships as a single static Rust binary with CLI plus SKILL/MCP-oriented modes.

You might know me from my SanityHarness coding agent eval and leaderboard. I've spent the last few months researching, testing, and building a new tool called Vera. It's a code indexing and search tool designed specifically for AI agents, and it's built to be as local-first and frictionless as possible.

https://github.com/lemon07r/Vera/

A lot of the existing code indexing and search tools are bloated and heavy. When I tested about 9 different MCP tools recently, I found that most of them actually make agent eval scores worse; Serena, for example, had a clearly negative impact on evals. The closest alternative that performed well was Claude Context, but it requires a cloud service for storage (yuck) and lacks reranking support, which makes a massive difference in retrieval quality. Roo Code unfortunately suffers from similar issues: it needs cloud storage (or a complicated setup running Qdrant locally) and also lacks reranking.

I used to maintain Pampax, a fork of someone's code search tool. Over time I made a lot of improvements to it, but the upstream foundation was pretty fragile: deep-rooted bugs, questionable design choices, and no matter how much I patched it up, I kept running into new issues.

So, after realizing I could build something a lot better, I decided to start over from the ground up.

The Core

Vera runs BM25 keyword search and vector similarity in parallel, merges them with Reciprocal Rank Fusion, then a cross-encoder reranks the top candidates. That reranking stage is the key differentiator. Most tools retrieve candidates and stop there. Vera actually reads query + candidate together and scores relevance jointly. The difference: 0.60 MRR@10 with reranking vs 0.28 with vector retrieval alone.
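
To make the fusion step concrete, here's a minimal, self-contained Rust sketch of Reciprocal Rank Fusion over two ranked lists. This is illustrative only, not Vera's actual code; the chunk ids and the constant k=60 are just common example values.

```rust
use std::collections::HashMap;

/// Merge two ranked result lists with Reciprocal Rank Fusion.
/// Each document scores sum(1 / (k + rank)) across the rankers that returned it;
/// k (commonly 60) damps the influence of any single ranker.
fn rrf_merge(bm25: &[&str], vector: &[&str], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for ranked in [bm25, vector] {
        for (rank, doc) in ranked.iter().enumerate() {
            // rank is 0-based here, so add 1 to make it the usual 1-based rank.
            *scores.entry((*doc).to_string()).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    // Toy example: chunk ids returned by the two retrievers, best first.
    let bm25 = ["auth.rs::verify_token", "auth.rs::login", "db.rs::get_user"];
    let vector = ["auth.rs::login", "session.rs::refresh", "auth.rs::verify_token"];
    for (doc, score) in rrf_merge(&bm25, &vector, 60.0) {
        println!("{score:.4}  {doc}");
    }
    // The fused top-N would then go to the cross-encoder reranker,
    // which scores (query, candidate) pairs jointly.
}
```

RRF is a nice fit here because it only needs ranks, so BM25 scores and cosine similarities never have to be normalized onto a common scale before merging.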

Fully Local Storage

I evaluated multiple storage backends (LanceDB, etc.) and settled on SQLite + sqvec + Tantivy in Rust. This was consistently the fastest and highest-quality retrieval combo across all my tests. The whole stack is embedded: no separate Qdrant instance, no cloud service, nothing else to run. Storage overhead is tiny too: the index is usually around 1.33x the size of the code being indexed, so 10MB of code = ~13.3MB database.

63 Languages

Tree-sitter structural parsing extracts functions, classes, methods, and structs as discrete chunks, not arbitrary line ranges. Files with unsupported extensions still get indexed via plain text chunking. .gitignore is respected and can be supplemented or overridden with a .veraignore.
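
For a rough idea of what structural chunking looks like, here's a small Rust sketch that walks a tree-sitter parse tree and pulls out whole definitions as chunks. It's not Vera's implementation: it assumes the tree-sitter and tree-sitter-rust crates (grammar-loading signatures vary a bit between crate versions), and the node kinds shown are specific to the Rust grammar.

```rust
use tree_sitter::{Node, Parser};

// Collect whole definitions (functions, structs, impl blocks) as chunks
// instead of arbitrary line ranges. Node kinds are grammar-specific.
fn collect_chunks<'a>(node: Node, source: &'a str, chunks: &mut Vec<&'a str>) {
    if matches!(node.kind(), "function_item" | "struct_item" | "impl_item") {
        chunks.push(&source[node.byte_range()]);
        return; // treat the whole definition as one chunk
    }
    for i in 0..node.child_count() {
        if let Some(child) = node.child(i) {
            collect_chunks(child, source, chunks);
        }
    }
}

fn main() {
    let source = "struct User { id: u64 }\nfn login(u: &User) -> bool { u.id != 0 }\n";
    let mut parser = Parser::new();
    parser
        .set_language(&tree_sitter_rust::LANGUAGE.into())
        .expect("load Rust grammar");
    let tree = parser.parse(source, None).expect("parse");
    let mut chunks = Vec::new();
    collect_chunks(tree.root_node(), source, &mut chunks);
    for c in &chunks {
        println!("--- chunk ---\n{c}");
    }
}
```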

Single Binary, Zero Dependencies

No Python, no NodeJS, no language servers, no database server for Milvus/Qdrant, no per-language toolchains. One static binary with all 63 grammars compiled in. Nothing else is needed for API mode, and the ONNX modes download the ONNX runtime for you automatically.

Local inference

This is the part I think this sub will care about most. It started out as just a nice-to-have bonus feature, but it has become a core part of the tool, and honestly it's now my favorite way to use it because of how damn fast it is. Vera ships with curated ONNX models that you can download with one command (vera setup):

  • jina-embeddings-v5-text-nano-retrieval (239M params) for embeddings
  • jina-reranker-v2-base-multilingual (278M params) for cross-encoder reranking

I spent a lot of time researching and testing small models to find the best ones for local inference. These two gave the best accuracy-to-size ratio by a wide margin in my testing.

GPU backends can be selected or auto-detected: CUDA (NVIDIA), ROCm (AMD), DirectML (Windows), CoreML (Apple), OpenVINO (Intel). Indexing the entire Vera codebase with ONNX CUDA on an RTX 4080 takes only about 8 seconds. For comparison, Nebius, the fastest embedding provider I've tested, takes 56 seconds to index the same codebase with Qwen3-Embedding-8B.

CPU works too but is slower (~6 min on a Ryzen 5 7600X3D), so I recommend a GPU or iGPU if possible. After the first index, vera update . only re-embeds changed files; incremental updates should take just a few seconds on CPU, or be close to instant otherwise.
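
The incremental-update idea is simple enough to sketch with just the Rust standard library: hash each file's contents, compare against the hashes stored at index time, and only re-chunk/re-embed what changed. This is a toy illustration of the concept, not Vera's actual change detection.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::fs;
use std::hash::{Hash, Hasher};
use std::path::PathBuf;

/// Return the files whose content hash differs from the stored index,
/// i.e. the only files that need to be re-chunked and re-embedded.
fn changed_files(paths: &[PathBuf], stored: &HashMap<PathBuf, u64>) -> Vec<PathBuf> {
    paths
        .iter()
        .filter(|p| {
            let content = fs::read(p).unwrap_or_default();
            let mut h = DefaultHasher::new();
            content.hash(&mut h);
            stored.get(*p).copied() != Some(h.finish())
        })
        .cloned()
        .collect()
}

fn main() {
    // Pretend these hashes came from the previous index run.
    let stored: HashMap<PathBuf, u64> = HashMap::new();
    let paths = vec![PathBuf::from("src/main.rs")];
    for p in changed_files(&paths, &stored) {
        println!("re-embed: {}", p.display());
    }
}
```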

Model and Provider Agnostic

Vera is completely model-agnostic, so you can hook it up to whatever local inference engine or remote provider API you want. Any OpenAI-compatible endpoint works, including local ones from llama.cpp, etc.
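
For anyone unsure what "OpenAI-compatible" buys you: it just means a standard POST to /v1/embeddings. Here's a minimal Rust sketch of such a request, assuming the reqwest crate (blocking + json features) and serde_json; the base URL and model name are placeholders for whatever server you point it at, e.g. a local llama.cpp instance.

```rust
use serde_json::{json, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let base = "http://localhost:8080/v1"; // llama.cpp, vLLM, a cloud provider, ...
    let client = reqwest::blocking::Client::new();
    let resp: Value = client
        .post(format!("{base}/embeddings"))
        .header("Authorization", "Bearer not-needed-for-local")
        .json(&json!({
            "model": "my-embedding-model",                      // placeholder name
            "input": ["fn login(user: &User) -> bool { ... }"]  // texts to embed
        }))
        .send()?
        .json()?;
    // Standard response shape: data[i].embedding is the vector for input[i].
    let dim = resp["data"][0]["embedding"].as_array().map(|v| v.len());
    println!("embedding dimension: {dim:?}");
    Ok(())
}
```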

Benchmarks

I wanted to keep things grounded instead of making vague claims. All benchmark data, reproduction guides, and ablation studies are in the repo.

Comparison against other approaches on the same workload (v0.4.0, 17 tasks across ripgrep, flask, fastify):

| Metric | ripgrep | cocoindex-code | vector-only | Vera hybrid |
|---|---|---|---|---|
| Recall@5 | 0.2817 | 0.3730 | 0.4921 | 0.6961 |
| Recall@10 | 0.3651 | 0.5040 | 0.6627 | 0.7549 |
| MRR@10 | 0.2625 | 0.3517 | 0.2814 | 0.6009 |
| nDCG@10 | 0.2929 | 0.5206 | 0.7077 | 0.8008 |

Vera has improved a lot since that comparison. Here's v0.4.0 vs current on the same 21-task suite (ripgrep, flask, fastify, turborepo):

| Metric | v0.4.0 | v0.7.0+ |
|---|---|---|
| Recall@1 | 0.2421 | 0.7183 |
| Recall@5 | 0.5040 | 0.7778 (~54% improvement) |
| Recall@10 | 0.5159 | 0.8254 |
| MRR@10 | 0.5016 | 0.9095 |
| nDCG@10 | 0.4570 | 0.8361 (~83% improvement) |

Similar tools make crazy claims like 70-90% token usage reduction. I haven't benchmarked this myself so I won't throw around random numbers like that (honestly I think it would be very hard to benchmark deterministically), but the reduction is real. Tools like this help coding agents use their context window more effectively instead of burning it on bloated search results. Vera also defaults to token-efficient Markdown code blocks instead of verbose JSON, which cuts output size ~35-40%.

Install and usage

```
bunx @vera-ai/cli install   # or: npx -y @vera-ai/cli install / uvx vera-ai install
vera setup                  # downloads local models, auto-detects GPU
vera index .
vera search "authentication logic"
```

One command to install, one command to set up, done. Works as a CLI or an MCP server. Vera also ships with agent skill files, installable into any project, that tell your agent how to write effective queries and when to reach for tools like `rg` instead. The documentation on GitHub should cover anything else not covered here.

Other recent additions based on user requests:

  • Docker support for MCP (CPU, CUDA, ROCm, OpenVINO images)
  • vera doctor for diagnosing setup issues
  • vera repair to re-fetch missing local assets
  • vera upgrade to inspect and apply binary updates
  • Auto update checks

A big thanks to the users in my Discord server; they've helped a lot with catching bugs and contributing suggestions and good ideas. Please feel free to join for support, requests, or just to chat about LLMs and tools. https://discord.gg/rXNQXCTWDt

submitted by /u/lemon07r