PaveBench: A Versatile Benchmark for Pavement Distress Perception and Interactive Vision-Language Analysis

arXiv cs.CV / 4/6/2026


Key Points

  • The paper introduces PaveBench, a large-scale benchmark aimed at advancing pavement distress perception beyond standard CV tasks by including quantitative analysis and explanation needs.
  • PaveBench unifies four tasks—classification, object detection, semantic segmentation, and vision-language question answering—with standardized task definitions and evaluation protocols.
  • It provides real-world highway inspection images with extensive visual annotations, plus a curated hard-distractor subset to assess robustness against confusing cases.
  • The work also proposes PaveVQA, a real-image vision-language QA dataset that supports single-turn, multi-turn, and expert-corrected interactions covering recognition, localization, quantitative estimation, and maintenance reasoning.
  • The authors evaluate state-of-the-art approaches and present a simple agent-augmented VQA framework that uses domain-specific models as tools alongside vision-language models, with the datasets released on Hugging Face.
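The agent-augmented VQA framework in the last bullet, a vision-language model that can call domain-specific perception models as tools, can be sketched as a simple dispatcher. The tool names, stub outputs, and keyword-based router below are illustrative assumptions, not the paper's actual interface.

```python
# Minimal sketch of an agent-augmented VQA loop: the VLM answers directly
# when it can, and routes perception-heavy questions to domain tools.
# All tool names and the keyword router are illustrative assumptions.
from typing import Callable, Dict

def crack_segmenter(image_id: str) -> str:
    # Stand-in for a domain-specific segmentation model (quantitative estimation).
    return f"segmentation for {image_id}: 3.2% cracked area"

def distress_detector(image_id: str) -> str:
    # Stand-in for a domain-specific object detector (counting/localization).
    return f"detections for {image_id}: 2 potholes, 1 longitudinal crack"

TOOLS: Dict[str, Callable[[str], str]] = {
    "area": crack_segmenter,
    "how many": distress_detector,
}

def answer(question: str, image_id: str) -> str:
    """Route to a matching tool; otherwise fall back to the VLM."""
    for keyword, tool in TOOLS.items():
        if keyword in question.lower():
            return f"[tool evidence] {tool(image_id)}"
    return f"[vlm] direct answer for: {question}"

print(answer("How many potholes are visible?", "img_001"))
print(answer("What maintenance is recommended?", "img_001"))
```

A real implementation would let the VLM itself decide when to invoke a tool and would fold the tool output back into its final answer; the point here is only the division of labor between general vision-language reasoning and specialized perception models.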

Abstract

Pavement condition assessment is essential for road safety and maintenance. Although existing research has made significant progress, most studies focus on conventional computer vision tasks such as classification, detection, and segmentation. In real-world applications, however, pavement inspection requires more than visual recognition: it also demands quantitative analysis, explanation, and interactive decision support. Current datasets fall short on these fronts: they focus on unimodal perception, lack support for multi-turn interaction and fact-grounded reasoning, and do not connect perception with vision-language analysis. To address these limitations, we introduce PaveBench, a large-scale benchmark for pavement distress perception and interactive vision-language analysis built on real-world highway inspection images. PaveBench supports four core tasks (classification, object detection, semantic segmentation, and vision-language question answering) under unified task definitions and evaluation protocols. On the visual side, PaveBench provides large-scale annotations over a large collection of real-world pavement images and includes a curated hard-distractor subset for robustness evaluation. On the multimodal side, we introduce PaveVQA, a real-image question answering (QA) dataset that supports single-turn, multi-turn, and expert-corrected interactions, covering recognition, localization, quantitative estimation, and maintenance reasoning. We evaluate several state-of-the-art methods and provide a detailed analysis, and we present a simple, effective agent-augmented visual question answering framework that integrates domain-specific models as tools alongside vision-language models. The dataset is available at: https://huggingface.co/datasets/MML-Group/PaveBench.
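The abstract mentions unified evaluation protocols for the four tasks but does not spell out the metrics. For the semantic segmentation track, mean intersection-over-union (mIoU) is the standard choice; whether PaveBench uses exactly this protocol is an assumption here. A minimal per-class IoU computation looks like:

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Mean IoU over the classes that actually occur in pred or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 masks with two classes (0 = background, 1 = crack).
gt   = np.array([[0, 0, 1], [0, 1, 1]])
pred = np.array([[0, 1, 1], [0, 1, 1]])
print(round(mean_iou(pred, gt, num_classes=2), 3))  # → 0.708
```

Here class 0 scores 2/3 and class 1 scores 3/4, giving a mean of about 0.708; a benchmark harness would aggregate these scores over the full test split.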