Cross-Stage Coherence in Hierarchical Driving VQA: Explicit Baselines and Learned Gated Context Projectors

arXiv cs.AI / 4/27/2026


Key Points

  • The paper studies how to keep “planning” decisions consistent with a model’s own earlier perceptions in Graph Visual Question Answering (GVQA) for autonomous driving, which organizes reasoning into hierarchical stages, using cross-stage context passing on DriveLM-nuScenes.
  • An explicit, training-free approach compares three prompt-based conditioning strategies on a domain-adapted 4B VLM, cutting NLI contradictions by up to 42.6% and serving as a strong baseline.
  • An implicit approach adds learned gated context projectors that transfer a hidden-state representation from one stage to the next, trained with stage-specific QLoRA adapters while updating only ~0.5% of parameters.
  • The implicit method yields statistically significant improvements, including a 34% reduction in planning-stage NLI contradiction, a 50% increase in cross-stage entailment, and a 30.3% CIDEr gain in planning language quality.
  • The authors note limitations from using non-driving-domain pretraining for the implicit setup, which harms lexical/structural consistency, and conclude that combining these strategies with better domain adaptation is a promising next step.
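The gated context projector described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class name, shapes, and zero-initialized scalar gate are all assumptions. The core idea is that a hidden-state vector from one stage is linearly projected into the next stage's embedding space, normalized, scaled by a learned gate, and added to the next stage's input embeddings.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize a vector to zero mean and unit variance."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

class GatedContextProjector:
    """Illustrative sketch of a gated context projector (names and
    initialization are assumptions, not details from the paper)."""

    def __init__(self, hidden_dim, embed_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Linear projection from the previous stage's hidden size
        # into the next stage's embedding size.
        self.W = rng.normal(scale=0.02, size=(embed_dim, hidden_dim))
        # Learned scalar gate; starting at 0 means the injection is
        # initially a no-op, so training can open the gate gradually.
        self.gate = 0.0

    def __call__(self, prev_hidden, next_embeds):
        # Project, normalize, gate, then add to every token embedding
        # of the next stage's input (broadcast over the token axis).
        ctx = layer_norm(self.W @ prev_hidden)
        return next_embeds + np.tanh(self.gate) * ctx

proj = GatedContextProjector(hidden_dim=16, embed_dim=8)
prev_hidden = np.ones(16)            # hidden state from the earlier stage
next_embeds = np.zeros((4, 8))       # 4 token embeddings of the next stage
out = proj(prev_hidden, next_embeds)
# With the gate at 0, out equals next_embeds exactly.
```

In the paper these projectors are trained jointly with stage-specific QLoRA adapters; here the weights are random placeholders to keep the sketch self-contained.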

Abstract

Graph Visual Question Answering (GVQA) for autonomous driving organizes reasoning into ordered stages, namely Perception, Prediction, and Planning, where planning decisions should remain consistent with the model's own perception. We present a comparative study of cross-stage context passing on DriveLM-nuScenes using two complementary mechanisms. The explicit variant evaluates three prompt-based conditioning strategies on a domain-adapted 4B VLM (Mini-InternVL2-4B-DA-DriveLM) without additional training, reducing NLI contradiction by up to 42.6% and establishing a strong zero-training baseline. The implicit variant introduces gated context projectors, which extract a hidden-state vector from one stage and inject a normalized, gated projection into the next stage's input embeddings. These projectors are jointly trained with stage-specific QLoRA adapters on a general-purpose 8B VLM (InternVL3-8B-Instruct) while updating only approximately 0.5% of parameters. The implicit variant achieves a statistically significant 34% reduction in planning-stage NLI contradiction (bootstrap 95% CIs, p < 0.05) and increases cross-stage entailment by 50%, evaluated with a multilingual NLI classifier to account for mixed-language outputs. Planning language quality also improves (CIDEr +30.3%), but lexical overlap and structural consistency degrade due to the absence of driving-domain pretraining. Since the two variants use different base models, we present them as complementary case studies: explicit context passing provides a strong training-free baseline for surface consistency, while implicit gated projection delivers significant planning-stage semantic gains, suggesting domain adaptation as a plausible next ingredient for full-spectrum improvement.
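The reported consistency metrics can be understood as label frequencies over (perception answer, planning answer) pairs scored by an NLI classifier. The paper uses a multilingual NLI model to handle mixed-language outputs; the sketch below only shows the aggregation step, with hypothetical labels in place of real model predictions:

```python
from collections import Counter

def cross_stage_nli_rates(labels):
    """Given one NLI label per (perception, planning) answer pair —
    'entailment', 'neutral', or 'contradiction' — return the fraction
    of each label, i.e. the rates whose changes the paper reports."""
    counts = Counter(labels)
    total = len(labels)
    return {k: counts.get(k, 0) / total
            for k in ("entailment", "neutral", "contradiction")}

# Hypothetical classifier outputs for four cross-stage pairs.
labels = ["entailment", "contradiction", "neutral", "entailment"]
rates = cross_stage_nli_rates(labels)
# rates == {"entailment": 0.5, "neutral": 0.25, "contradiction": 0.25}
```

A "34% reduction in contradiction" then means the contradiction rate of the context-passing model divided by the baseline's rate is about 0.66, computed per bootstrap resample to obtain the 95% confidence intervals mentioned in the abstract.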