PeopleSearchBench: A Multi-Dimensional Benchmark for Evaluating AI-Powered People Search Platforms

arXiv cs.AI / 3/31/2026


Key Points

  • The paper introduces PeopleSearchBench, an open-source benchmark to evaluate AI-powered people search platforms using 119 real-world queries across four use cases (corporate recruiting, B2B prospecting, deterministic expert search, and influencer/KOL discovery).
  • It proposes Criteria-Grounded Verification, which extracts explicit, verifiable criteria from each query and uses live web search to produce binary relevance judgments based on factual checks rather than subjective LLM-as-judge scoring.
  • The benchmark evaluates systems on three dimensions—Relevance Precision (padded nDCG@10), Effective Coverage (task completion and qualified yield), and Information Utility (profile completeness/usefulness)—and averages them into an overall score.
  • In experiments, Lessie is the top-performing agent with an overall score of 65.2 (18.5% ahead of the runner-up) and the only system achieving 100% task completion across all queries.
  • The authors publish full artifacts (code, query definitions, prompts, normalization procedures, and results) and include statistical reporting such as confidence intervals and human validation of the verification pipeline (Cohen’s kappa = 0.84).
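The "padded" nDCG@10 in the Relevance Precision dimension can be sketched as follows. This is a minimal illustration, assuming (as is conventional) that result lists shorter than 10 are padded with non-relevant (zero) entries so that systems returning few results are not rewarded for brevity; the paper's exact normalization may differ.

```python
import math

def dcg(rels):
    # Discounted cumulative gain: rel_i / log2(i + 1), positions 1-indexed.
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def padded_ndcg_at_10(binary_rels, k=10):
    """nDCG@k over binary relevance judgments, padding short result
    lists with zeros (assumed interpretation of 'padded' nDCG@10)."""
    rels = (list(binary_rels) + [0] * k)[:k]  # pad/truncate to exactly k
    ideal_dcg = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal_dcg if ideal_dcg > 0 else 0.0
```

Under this reading, an empty result list scores 0 and a list of ten verified-relevant people scores 1, with rank position discounting everything in between.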

Abstract

AI-powered people search platforms are increasingly used in recruiting, sales prospecting, and professional networking, yet no widely accepted benchmark exists for evaluating their performance. We introduce PeopleSearchBench, an open-source benchmark that compares four people search platforms on 119 real-world queries across four use cases: corporate recruiting, B2B sales prospecting, expert search with deterministic answers, and influencer/KOL discovery. A key contribution is Criteria-Grounded Verification, a factual relevance pipeline that extracts explicit, verifiable criteria from each query and uses live web search to determine whether returned people satisfy them. This produces binary relevance judgments grounded in factual verification rather than subjective holistic LLM-as-judge scores. We evaluate systems on three dimensions: Relevance Precision (padded nDCG@10), Effective Coverage (task completion and qualified result yield), and Information Utility (profile completeness and usefulness), averaged equally into an overall score. Lessie, a specialized AI people search agent, performs best overall, scoring 65.2, 18.5% higher than the second-ranked system, and is the only system to achieve 100% task completion across all 119 queries. We also report confidence intervals, human validation of the verification pipeline (Cohen's kappa = 0.84), ablations, and full documentation of queries, prompts, and normalization procedures. Code, query definitions, and aggregated results are available on GitHub.
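The reported Cohen's kappa of 0.84 measures agreement between the automated Criteria-Grounded Verification pipeline and human annotators, corrected for chance. A minimal sketch of the statistic for binary labels (the paper's own validation protocol and label set are not reproduced here):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters over the same items, e.g. the
    pipeline's binary relevance judgments vs. a human annotator's."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where the raters match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each rater's marginal label counts.
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[l] * cb[l] for l in ca.keys() | cb.keys()) / (n * n)
    return (observed - expected) / (1 - expected) if expected != 1 else 1.0
```

A kappa of 0.84 falls in the range usually described as "almost perfect" agreement, supporting the claim that the factual verification pipeline tracks human judgment closely.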