LLM-as-Judge for Semantic Judging of Powerline Segmentation in UAV Inspection

arXiv cs.AI / 4/8/2026


Key Points

  • The paper examines using an offboard LLM as a “semantic judge” to assess how reliable UAV-based powerline segmentation outputs are when real-world visuals differ from training conditions.
  • It frames the approach as a watchdog/monitoring setup in which an offboard LLM evaluates segmentation overlays for reliability and safety concerns, rather than as a new onboard inspection system.
  • Two evaluation protocols are proposed: one measures repeatability by checking stability of the LLM’s quality scores and confidence under identical prompts, and the other measures perceptual sensitivity under controlled visual corruptions (fog, rain, snow, shadow, sunflare).
  • Results indicate the LLM gives highly consistent categorical judgments for the same inputs and appropriately reduces confidence as visual conditions degrade, while still responding to cues like missing or misidentified power lines.
  • The authors conclude that, with careful constraints, an LLM can be a dependable semantic judge for monitoring segmentation quality in safety-critical aerial inspection workflows.
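The repeatability protocol above can be sketched as a simple loop: query the judge N times with identical inputs and summarize the stability of its categorical score and confidence. This is a minimal illustration, not the paper's implementation; the judge interface (`score`/`confidence` fields) and the `stub_judge` function are hypothetical stand-ins for the actual offboard LLM call.

```python
import statistics
from collections import Counter

def repeatability(judge, overlay, prompt, n_trials=10):
    """Query the judge n_trials times with identical inputs and report
    stability of its categorical score and confidence estimate."""
    results = [judge(overlay, prompt) for _ in range(n_trials)]
    scores = [r["score"] for r in results]            # categorical quality label
    confidences = [r["confidence"] for r in results]  # scalar confidence
    modal_score, mode_count = Counter(scores).most_common(1)[0]
    return {
        "modal_score": modal_score,
        "agreement_rate": mode_count / n_trials,      # 1.0 = perfectly repeatable
        "confidence_mean": statistics.mean(confidences),
        "confidence_std": statistics.pstdev(confidences),
    }

# Hypothetical stub standing in for the LLM: always returns the same
# judgment, i.e. a perfectly repeatable judge.
def stub_judge(overlay, prompt):
    return {"score": "acceptable", "confidence": 0.9}

summary = repeatability(stub_judge, overlay="overlay.png",
                        prompt="Rate the power line segmentation overlay.")
```

In a real deployment the stub would be replaced by an API call to the offboard LLM with a fixed prompt and decoding settings; the agreement rate and confidence spread then quantify how stable the judge is under identical conditions.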

Abstract

The deployment of lightweight segmentation models on drones for autonomous power line inspection presents a critical challenge: maintaining reliable performance under real-world conditions that differ from training data. Although compact architectures such as U-Net enable real-time onboard inference, their segmentation outputs can degrade unpredictably in adverse environments, raising safety concerns. In this work, we study the feasibility of using a large language model (LLM) as a semantic judge to assess the reliability of power line segmentation results produced by drone-mounted models. Rather than introducing a new inspection system, we formalize a watchdog scenario in which an offboard LLM evaluates segmentation overlays, and we examine whether such a judge can be trusted to behave consistently and in a perceptually coherent manner. To this end, we design two evaluation protocols that analyze the judge's repeatability and sensitivity. First, we assess repeatability by repeatedly querying the LLM with identical inputs and fixed prompts, measuring the stability of its quality scores and confidence estimates. Second, we evaluate perceptual sensitivity by introducing controlled visual corruptions (fog, rain, snow, shadow, and sunflare) and analyzing how the judge's outputs respond to progressive degradation in segmentation quality. Our results show that the LLM produces highly consistent categorical judgments under identical conditions while exhibiting appropriate declines in confidence as visual reliability deteriorates. Moreover, the judge remains responsive to perceptual cues such as missing or misidentified power lines, even under challenging conditions. These findings suggest that, when carefully constrained, an LLM can serve as a reliable semantic judge for monitoring segmentation quality in safety-critical aerial inspection tasks.
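The perceptual sensitivity protocol can be sketched the same way: apply a visual corruption at increasing severity, record the judge's confidence at each level, and check that confidence declines as conditions degrade. Again, `stub_corrupt` and `stub_judge` below are hypothetical placeholders (the stub's confidence falls linearly with severity, mimicking the desired behavior), not the paper's corruption pipeline or LLM.

```python
def sensitivity_curve(judge, overlay, corrupt, severities):
    """Apply a corruption at each severity level and record the
    judge's confidence, yielding a degradation curve."""
    return [judge(corrupt(overlay, s))["confidence"] for s in severities]

def is_non_increasing(xs):
    """True if the curve never rises, i.e. confidence declines
    (or holds) as corruption severity grows."""
    return all(a >= b for a, b in zip(xs, xs[1:]))

# Hypothetical stubs: the corruption just tags the image with its
# severity; the judge's confidence drops linearly with that severity.
def stub_corrupt(overlay, severity):
    return {"image": overlay, "severity": severity}

def stub_judge(corrupted):
    return {"confidence": max(0.0, 1.0 - 0.15 * corrupted["severity"])}

curve = sensitivity_curve(stub_judge, "overlay.png", stub_corrupt,
                          severities=[0, 1, 2, 3, 4, 5])
```

With a real judge, one such curve per corruption type (fog, rain, snow, shadow, sunflare) makes the monotonicity check a concrete test of whether the judge's confidence tracks visual degradation.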