AI Navigate

Visual Set Program Synthesizer

arXiv cs.CL / 3/18/2026


Key Points

  • The paper identifies that many visual question answering tasks require explicit set-based reasoning (filtering, comparison, and aggregation) beyond standard object recognition.
  • It proposes Visual Program Synthesis, generating a symbolic program executed by a separate engine grounded in the visual scene.
  • It introduces Set-VQA, a benchmark specifically designed to evaluate set-based visual reasoning.
  • Experiments show the program-driven approach significantly outperforms state-of-the-art baselines, yielding more transparent, systematic reasoning and higher answer accuracy.

Abstract

A user pointing their phone at a supermarket shelf and asking "Which soda has the least sugar?" poses a difficult challenge for current visual AI assistants. Such queries require not only object recognition but explicit set-based reasoning such as filtering, comparison, and aggregation. Standard end-to-end MLLMs often fail at these tasks because they lack an explicit mechanism for compositional logic. We propose treating visual reasoning as Visual Program Synthesis, where the model first generates a symbolic program that is then executed by a separate engine grounded in the visual scene. We also introduce Set-VQA, a new benchmark designed specifically for evaluating set-based visual reasoning. Experiments show that our approach significantly outperforms state-of-the-art baselines on complex reasoning tasks, producing more systematic and transparent behavior while substantially improving answer accuracy. These results demonstrate that program-driven reasoning provides a principled alternative to black-box visual-language inference.
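To make the filter-then-aggregate idea concrete, here is a minimal sketch of what a synthesized symbolic program for the supermarket query might look like. This is not the paper's actual implementation: the primitive names (`filter_set`, `arg_min`), the object-attribute schema, and the hard-coded scene are all illustrative assumptions standing in for the output of a visual grounding engine.

```python
# Hypothetical scene representation, as might be produced by an
# object detector with attribute recognition (values are made up).
scene = [
    {"name": "Cola", "category": "soda", "sugar_g": 39},
    {"name": "Diet Fizz", "category": "soda", "sugar_g": 0},
    {"name": "Orange Juice", "category": "juice", "sugar_g": 22},
]

def filter_set(objects, attr, value):
    """Set-based filtering: keep objects whose attribute equals value."""
    return [o for o in objects if o.get(attr) == value]

def arg_min(objects, attr):
    """Aggregation: return the object minimizing the given attribute."""
    return min(objects, key=lambda o: o[attr])

# The "synthesized program" for "Which soda has the least sugar?",
# expressed as a composition of the primitives above.
sodas = filter_set(scene, "category", "soda")
answer = arg_min(sodas, "sugar_g")
print(answer["name"])  # → Diet Fizz
```

The appeal of this decomposition is that each intermediate set (`sodas`, then the minimum element) is inspectable, which is what the authors mean by more transparent, systematic reasoning compared with a single end-to-end forward pass.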