Can VLMs Reason Robustly? A Neuro-Symbolic Investigation

arXiv cs.LG · March 26, 2026


Key Points

  • The paper studies whether vision-language models (VLMs) can reason robustly when covariate shifts change the visual input distribution while the underlying logical rules stay the same.
  • Experiments on visual deductive reasoning show that end-to-end gradient fine-tuning can yield high in-distribution accuracy but often fails to generalize robustly under these distribution shifts.
  • The authors argue that fine-tuning may not reliably induce the intended reasoning function, motivating a neuro-symbolic approach that separates perception (concept recognition) from reasoning (logic execution).
  • They also find that prior neuro-symbolic methods using black-box reasoning components can still show inconsistent robustness across different tasks.
  • To improve reliability, the paper proposes VLC, which compiles task rules into an explicit symbolic circuit and executes it exactly over the object concepts recognized by the VLM, yielding more consistent performance under covariate shifts across multiple rule sets.

Abstract

Vision-Language Models (VLMs) have been applied to a wide range of reasoning tasks, yet it remains unclear whether they can reason robustly under distribution shifts. In this paper, we study covariate shifts in which the perceptual input distribution changes while the underlying prediction rules do not. To investigate this question, we consider visual deductive reasoning tasks, where a model is required to answer a query given an image and logical rules defined over the object concepts in the image. Empirically, we find that VLMs fine-tuned through gradient-based end-to-end training can achieve high in-distribution accuracy but fail to generalize under such shifts, suggesting that fine-tuning does not reliably induce the underlying reasoning function. This motivates a neuro-symbolic perspective that decouples perception from reasoning. However, we further observe that recent neuro-symbolic approaches that rely on black-box components for reasoning can still exhibit inconsistent robustness across tasks. To address this issue, we propose VLC, a neuro-symbolic method that combines VLM-based concept recognition with circuit-based symbolic reasoning. In particular, task rules are compiled into a symbolic program, specifically a circuit, which executes the rules exactly over the object concepts recognized by the VLM. Experiments on three visual deductive reasoning tasks with distinct rule sets show that VLC consistently achieves strong performance under covariate shifts, highlighting its ability to support robust reasoning.
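The core idea in the abstract — decoupling VLM-based concept recognition from exact rule execution by compiling the rules into a circuit — can be sketched in a few lines. The paper's actual compilation pipeline is not described here, so the gate helpers, the rule, and the concept dictionary below are all illustrative stand-ins, not the authors' implementation.

```python
# Minimal sketch of the neuro-symbolic split: a perception stage yields
# object concepts, and task rules are compiled once into an explicit
# boolean circuit that is then executed exactly over those concepts.
# All names and the rule set are hypothetical, not from the paper.

from dataclasses import dataclass
from typing import Callable, Dict

Concepts = Dict[str, bool]  # concept name -> recognized in the image?

@dataclass
class Gate:
    """One node of a compiled rule circuit."""
    fn: Callable[[Concepts], bool]

    def __call__(self, c: Concepts) -> bool:
        return self.fn(c)

def lit(name: str) -> Gate:
    return Gate(lambda c: c[name])

def AND(a: Gate, b: Gate) -> Gate:
    return Gate(lambda c: a(c) and b(c))

def OR(a: Gate, b: Gate) -> Gate:
    return Gate(lambda c: a(c) or b(c))

def NOT(a: Gate) -> Gate:
    return Gate(lambda c: not a(c))

# Hypothetical task rule: "answer yes iff the image contains a red
# object that is not a cube" -- compiled once into a circuit.
rule = AND(lit("red"), NOT(lit("cube")))

# Stand-in for the VLM's concept recognition output on one image.
recognized = {"red": True, "cube": False}
print(rule(recognized))  # -> True
```

Because the circuit executes the rules exactly, the reasoning stage is unaffected by how the image looked; a covariate shift can only degrade the perception stage's concept predictions, which is the separation the paper is after.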