KLDrive: Fine-Grained 3D Scene Reasoning for Autonomous Driving based on Knowledge Graph

arXiv cs.AI / 2026-03-24


Key Points

  • KLDrive is proposed as a knowledge-graph-augmented LLM reasoning framework for fine-grained question answering in autonomous driving that aims to reduce hallucinations and make reasoning more reliable.
  • It introduces an energy-based scene fact construction module that consolidates multi-source evidence into a structured scene knowledge graph to improve the factual grounding of downstream reasoning.
  • An LLM agent then performs fact-grounded reasoning over a constrained action space using explicit structural constraints, improving transparency and controllability of the reasoning process.
  • The approach uses structured prompting and few-shot in-context exemplars to adapt across diverse QA tasks without heavy task-specific fine-tuning.
  • Experiments on two autonomous-driving QA benchmarks show state-of-the-art results: 65.04% overall accuracy on NuScenes-QA, the best SPICE score of 42.45 on GVQA, and a 46.01-percentage-point improvement over the strongest baseline on counting, the most challenging factual reasoning task.
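To make the energy-based fact construction step concrete, here is a minimal sketch. The summary does not specify the paper's actual energy function, detector sources, or thresholds, so the scoring rule, the triple format, and all names below (`consolidate_facts`, the confidence values) are illustrative assumptions: multi-source evidence for each candidate (subject, relation, object) triple is pooled, and only triples whose toy energy clears a threshold enter the scene knowledge graph.

```python
from collections import defaultdict

# Hypothetical sketch of energy-based scene-fact consolidation.
# The real KLDrive energy function is not given in this summary; this toy
# version assigns lower (better) energy to facts with higher mean confidence
# and multi-source support, suppressing weakly supported (hallucinated) facts.

def consolidate_facts(candidates, threshold=-0.5):
    """Keep (subject, relation, object) triples whose pooled evidence
    energy is at or below `threshold`. Lower energy = stronger support."""
    pooled = defaultdict(list)
    for triple, confidence in candidates:  # evidence from cameras, LiDAR, etc.
        pooled[triple].append(confidence)
    graph = []
    for triple, confs in pooled.items():
        # Toy energy: negative mean confidence, discounted when only a
        # single source supports the fact.
        energy = -(sum(confs) / len(confs)) * min(len(confs), 2) / 2
        if energy <= threshold:
            graph.append(triple)
    return graph

candidates = [
    (("car_3", "left_of", "ego"), 0.9),   # camera detection
    (("car_3", "left_of", "ego"), 0.8),   # LiDAR detection
    (("bus_1", "behind", "ego"), 0.4),    # weak single-source evidence
]
print(consolidate_facts(candidates))  # only the corroborated fact survives
```

The multi-source discount is the key design choice here: a fact seen by only one sensor must be very confident to enter the graph, which mirrors the summary's stated goal of improving factual grounding before any reasoning happens.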

Abstract

Autonomous driving requires reliable reasoning over fine-grained 3D scene facts. Fine-grained question answering over multi-modal driving observations provides a natural way to evaluate this capability, yet existing perception pipelines and driving-oriented large language model (LLM) methods still suffer from unreliable scene facts, hallucinations, opaque reasoning, and heavy reliance on task-specific training. We present KLDrive, the first knowledge-graph-augmented LLM reasoning framework for fine-grained question answering in autonomous driving. KLDrive addresses this problem by designing two tightly coupled components: an energy-based scene fact construction module that consolidates multi-source evidence into a reliable scene knowledge graph, and an LLM agent that performs fact-grounded reasoning over a constrained action space under explicit structural constraints. By combining structured prompting with few-shot in-context exemplars, the framework adapts to diverse reasoning tasks without heavy task-specific fine-tuning. Experiments on two large-scale autonomous-driving QA benchmarks show that KLDrive outperforms prior state-of-the-art methods, achieving the best overall accuracy of 65.04% on NuScenes-QA and the best SPICE score of 42.45 on GVQA. On counting, the most challenging factual reasoning task, it improves over the strongest baseline by 46.01 percentage points, demonstrating substantially reduced hallucinations and the benefit of coupling reliable scene fact construction with explicit reasoning.
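The second component, fact-grounded reasoning over a constrained action space, can also be sketched. The abstract does not enumerate KLDrive's actual actions or structural constraints, so everything below is an assumption for illustration: a scripted policy stands in for the LLM, the action vocabulary (`query`, `count`, `answer`) is invented, and the point shown is only the control-flow idea that every step is validated against an explicit allowed set before it touches the scene knowledge graph.

```python
# Hypothetical sketch of reasoning over a constrained action space.
# In KLDrive the action sequence would come from an LLM agent; here it is
# hard-coded so the validation and graph-lookup logic is runnable.

ALLOWED_ACTIONS = {"query", "count", "answer"}

def run_agent(graph, steps):
    """Execute (action, argument) steps against a scene knowledge graph of
    (subject, relation, object) triples, rejecting out-of-space actions."""
    result = None
    for action, arg in steps:
        if action not in ALLOWED_ACTIONS:
            raise ValueError(f"action {action!r} not in constrained space")
        if action == "query":
            # Retrieve every triple mentioning the queried entity.
            result = [t for t in graph if arg in t]
        elif action == "count":
            # Count triples with the given relation, grounding the answer
            # in stored facts rather than free-form LLM generation.
            result = len([t for t in graph if t[1] == arg])
        # "answer" surfaces whatever evidence has been accumulated.
    return result

graph = [("car_1", "left_of", "ego"), ("car_2", "left_of", "ego"),
         ("ped_1", "front_of", "ego")]
# "How many objects are to the left of the ego vehicle?"
print(run_agent(graph, [("count", "left_of"), ("answer", None)]))  # → 2
```

Because counting reduces to a deterministic lookup over stored triples, this structure suggests why the largest reported gain lands on the counting task: the graph, not the LLM, produces the number.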