DeepTest Tool Competition 2026: Benchmarking an LLM-Based Automotive Assistant

arXiv cs.AI / 4/15/2026


Key Points

  • The paper reports results from the first Large Language Model (LLM) Testing competition at the DeepTest workshop during ICSE 2026.
  • Four competing tools were benchmarked on an LLM-based automotive assistant tasked with retrieving car manual information and correctly mentioning relevant warnings.
  • The competition focused on finding user inputs for which the system fails to appropriately surface warnings, using metrics centered on failure-finding effectiveness and test diversity (see the sketch after this list).
  • The report details the experimental methodology, describes the participating competitor tools, and summarizes the comparative outcomes of their performance.
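The report does not spell out the exact scoring formulas, so the following is only a minimal sketch of how failure-finding effectiveness and test diversity could plausibly be quantified for a set of generated test inputs. The function names (`reveals_failure`, `failure_rate`, `mean_pairwise_diversity`) and the Jaccard-based diversity measure are illustrative assumptions, not the competition's actual implementation.

```python
from itertools import combinations
from typing import Callable, List


def failure_rate(tests: List[str],
                 reveals_failure: Callable[[str], bool]) -> float:
    """Fraction of test inputs for which the assistant fails to surface the required warning."""
    if not tests:
        return 0.0
    failures = [t for t in tests if reveals_failure(t)]
    return len(failures) / len(tests)


def jaccard_distance(a: str, b: str) -> float:
    """1 minus Jaccard similarity over word tokens; a crude proxy for input dissimilarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 0.0
    return 1.0 - len(ta & tb) / len(ta | tb)


def mean_pairwise_diversity(failing_tests: List[str]) -> float:
    """Average pairwise distance among failure-revealing inputs (0 = identical, 1 = disjoint)."""
    if len(failing_tests) < 2:
        return 0.0
    pairs = list(combinations(failing_tests, 2))
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)


if __name__ == "__main__":
    # Toy stand-in for the competition oracle: flag inputs whose (simulated)
    # assistant response never mentions a warning. A real harness would call
    # the LLM-based assistant and check its answer against the manual.
    def reveals_failure(user_input: str) -> bool:
        simulated_response = ""  # placeholder for the assistant's actual answer
        return "warning" not in simulated_response.lower()

    tests = [
        "How do I change a flat tyre on the motorway?",
        "Can I tow a trailer with adaptive cruise control switched on?",
        "What oil grade does the engine need?",
    ]
    failing = [t for t in tests if reveals_failure(t)]
    print(f"failure rate: {failure_rate(tests, reveals_failure):.2f}")
    print(f"diversity among failures: {mean_pairwise_diversity(failing):.2f}")
```

Under this reading, a tool scores well when many of its generated inputs trigger missed warnings and those failing inputs are not near-duplicates of one another; the actual competition metrics may differ.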

Abstract

This report summarizes the results of the first edition of the Large Language Model (LLM) Testing competition, held as part of the DeepTest workshop at ICSE 2026. Four tools competed in benchmarking an LLM-based car manual information retrieval application, with the objective of identifying user inputs for which the system fails to appropriately mention warnings contained in the manual. The testing solutions were evaluated based on their effectiveness in exposing failures and the diversity of the discovered failure-revealing tests. We report on the experimental methodology, the competitors, and the results.