Quantized Vision-Language Models for Damage Assessment: A Comparative Study of LLaVA-1.5-7B Quantization Levels

arXiv cs.CV / 3/31/2026

Key Points

  • The paper studies quantized vision-language models for automated bridge damage assessment, aiming to balance description quality, inference speed, and compute requirements.
  • It builds an end-to-end pipeline using LLaVA-1.5-7B for visual damage analysis, structured JSON extraction, and rule-based priority scoring.
  • Using 254 rebar exposure images, it compares quantization levels Q4_K_M, Q5_K_M, and Q8_0 on a quality framework that evaluates both damage-type recognition and severity classification.
  • Results show Q5_K_M provides the best trade-off, delivering higher quality than Q4_K_M with only a small speed reduction, and matching Q8_0 quality while running about 25% faster.
  • The study finds Q5_K_M has the weakest correlation between quality scores and text metrics (-0.148), suggesting its performance is consistent across varying description lengths.
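The final stage of the pipeline described above, rule-based priority scoring over the model's structured JSON output, could be sketched as follows. The field names, severity scale, and weights here are illustrative assumptions, not the paper's actual rules:

```python
# Hypothetical sketch of rule-based priority scoring over structured
# JSON emitted by the VLM. Field names ("damage_type", "severity")
# and the weight tables are illustrative assumptions only.
import json

SEVERITY_WEIGHT = {"minor": 1, "moderate": 2, "severe": 3}          # assumed scale
DAMAGE_WEIGHT = {"rebar_exposure": 3, "cracking": 2, "corrosion": 2}  # assumed weights

def priority_score(vlm_json: str) -> int:
    """Map a structured damage description to an inspection priority."""
    record = json.loads(vlm_json)
    severity = SEVERITY_WEIGHT.get(record.get("severity", ""), 0)
    damage = DAMAGE_WEIGHT.get(record.get("damage_type", ""), 0)
    return severity * damage

example = '{"damage_type": "rebar_exposure", "severity": "severe"}'
print(priority_score(example))  # 3 * 3 = 9
```

A fixed lookup like this keeps the scoring auditable: the VLM only describes the damage, and the priority follows deterministically from the rules.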

Abstract

Bridge infrastructure inspection is a critical but labor-intensive task requiring expert assessment of structural damage such as rebar exposure, cracking, and corrosion. This paper presents a comprehensive study of quantized Vision-Language Models (VLMs) for automated bridge damage assessment, focusing on the trade-offs between description quality, inference speed, and resource requirements. We develop an end-to-end pipeline combining LLaVA-1.5-7B for visual damage analysis, structured JSON extraction, and rule-based priority scoring. To enable deployment on consumer-grade GPUs, we conduct a systematic comparison of three quantization levels (Q4_K_M, Q5_K_M, and Q8_0) across 254 rebar exposure images. We introduce a 5-point quality evaluation framework assessing damage type recognition and severity classification. Our results demonstrate that Q5_K_M achieves the optimal balance: a quality score of 3.18 ± 1.35 out of 5.0, an inference time of 5.67 s/image, and an efficiency of 0.56 quality points per second. That is 8.5% higher quality than Q4_K_M with only a 4.5% speed reduction, while matching Q8_0's quality with 25% faster inference. Statistical analysis reveals that Q5_K_M exhibits the weakest correlation between quality scores and text metrics (-0.148), indicating consistent performance regardless of description length.
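The efficiency figure in the abstract follows directly from the two reported numbers, since quality/sec is simply the mean quality score divided by the per-image inference time. A quick arithmetic check:

```python
# Sanity check on the reported Q5_K_M figures: efficiency is quality
# per second of inference, i.e. 3.18 / 5.67 s ≈ 0.56 quality/sec.
quality = 3.18     # mean quality score (out of 5.0)
latency_s = 5.67   # inference time per image, in seconds
efficiency = quality / latency_s
print(round(efficiency, 2))  # 0.56
```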