3D-VCD: Hallucination Mitigation in 3D-LLM Embodied Agents through Visual Contrastive Decoding

arXiv cs.AI / 4/13/2026


Key Points

  • 3D-VCD is proposed as a visual contrastive decoding method that suppresses hallucinations at inference time in 3D-LLM-based embodied agents operating in 3D environments.
  • Noting that existing mitigations built for 2D vision-language settings fall short, the authors argue that 3D failures stem from object presence, spatial layout, and geometric grounding, and therefore apply semantic and geometric perturbations to an object-centric 3D scene graph representation.
  • By contrasting predictions under the original and perturbed 3D contexts, the method suppresses tokens that are insensitive to grounded 3D evidence and are therefore likely driven by language priors, improving grounded reasoning.
  • On the 3D-POPE and HEAL benchmarks, grounded reasoning improves consistently with no retraining (inference-time only), indicating that inference-time contrast over structured 3D representations is a practical route to more reliable embodied agents.
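The contrast described in the key points can be sketched as a standard contrastive-decoding logit adjustment: tokens that score highly under the original 3D context but drop under the distorted one are amplified, while tokens that stay high regardless of the scene evidence are damped. This is a minimal sketch of the general VCD-style formula, not the paper's exact implementation; the function name and `alpha` weighting are illustrative assumptions.

```python
import numpy as np

def contrastive_logits(logits_orig, logits_dist, alpha=1.0):
    """VCD-style contrast (a common formulation, assumed here):
    boost tokens supported by the real 3D context and suppress tokens
    that remain probable even under the distorted scene graph, i.e.
    tokens likely driven by language priors rather than evidence."""
    logits_orig = np.asarray(logits_orig, dtype=float)
    logits_dist = np.asarray(logits_dist, dtype=float)
    return (1.0 + alpha) * logits_orig - alpha * logits_dist

# Toy 4-token vocabulary: token 2 keeps a high score even under the
# distorted context, so the contrast demotes it below token 0, whose
# score depends strongly on the real scene.
orig = np.array([2.0, 0.5, 3.0, 1.0])
dist = np.array([0.2, 0.4, 2.9, 0.3])
adj = contrastive_logits(orig, dist, alpha=1.0)
```

Under the original logits alone, token 2 would be selected; after the contrast, token 0 (which is sensitive to the real 3D context) wins instead.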

Abstract

Large multimodal models are increasingly used as the reasoning core of embodied agents operating in 3D environments, yet they remain prone to hallucinations that can produce unsafe and ungrounded decisions. Existing inference-time hallucination mitigation methods largely target 2D vision-language settings and do not transfer to embodied 3D reasoning, where failures arise from object presence, spatial layout, and geometric grounding rather than pixel-level inconsistencies. We introduce 3D-VCD, the first inference-time visual contrastive decoding framework for hallucination mitigation in 3D embodied agents. 3D-VCD constructs a distorted 3D scene graph by applying semantic and geometric perturbations to object-centric representations, such as category substitutions and coordinate or extent corruption. By contrasting predictions under the original and distorted 3D contexts, our method suppresses tokens that are insensitive to grounded scene evidence and are therefore likely driven by language priors. We evaluate 3D-VCD on the 3D-POPE and HEAL benchmarks and show that it consistently improves grounded reasoning without any retraining, establishing inference-time contrastive decoding over structured 3D representations as an effective and practical route to more reliable embodied intelligence.
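The abstract's distorted scene graph construction, with category substitutions and coordinate corruption, might look roughly like the following. The dict schema, field names, and Gaussian noise model are illustrative assumptions under a simplified object-centric representation, not the paper's exact scheme.

```python
import random

def perturb_scene_graph(objects, category_pool, sigma=0.5, seed=0):
    """Build a distorted copy of an object-centric 3D scene graph by
    (a) substituting each object's category with a different label and
    (b) corrupting its 3D centre coordinates with Gaussian noise.
    `objects` is a list of dicts like {"category": str, "center": (x, y, z)};
    this schema and the noise model are assumptions for illustration."""
    rng = random.Random(seed)
    distorted = []
    for obj in objects:
        # Semantic perturbation: swap in a wrong category label.
        wrong = rng.choice([c for c in category_pool if c != obj["category"]])
        # Geometric perturbation: jitter the object's 3D centre.
        center = tuple(v + rng.gauss(0.0, sigma) for v in obj["center"])
        distorted.append({"category": wrong, "center": center})
    return distorted
```

The distorted graph is then serialized into the model's 3D context in place of the original, and the two resulting token distributions are contrasted at decoding time.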