Q-Tacit: Image Quality Assessment via Latent Visual Reasoning

arXiv cs.CV / 3/25/2026


Key Points

  • The paper introduces Q-Tacit, a new paradigm for VLM-based image quality assessment that moves reasoning from natural language into a latent quality space.
  • It argues that language can be a suboptimal representation for quality perception because visual quality cues are hard to abstract into discrete text tokens.
  • Q-Tacit uses a two-stage method: injecting structural visual quality priors into the latent space and calibrating latent reasoning trajectories to improve assessment quality.
  • Experiments show Q-Tacit achieves strong overall image quality reasoning performance while using significantly fewer tokens than prior chain-of-thought-style reasoning methods.
  • The authors state they will release source code to enable further research on latent visual reasoning approaches for IQA.
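The core idea in the points above — reasoning by iterating a hidden vector instead of emitting text tokens — can be caricatured in a few lines of plain Python. This is a hypothetical sketch, not the Q-Tacit implementation: `latent_step`, the fixed toy weights, the number of steps, and the linear quality head are all invented for illustration. The only point it makes is structural: a quality "thought" is a latent state updated in place for a few steps, then read out as a scalar score, with no intermediate chain-of-thought text produced.

```python
import math

# Hypothetical sketch of latent quality reasoning (NOT the paper's method).
# The latent state h is refined for a few steps, then mapped to a score.

def latent_step(h, w):
    # One reasoning step: a fixed linear map followed by a tanh nonlinearity.
    return [math.tanh(sum(wi * hi for wi, hi in zip(row, h))) for row in w]

def latent_quality_score(features, steps=4):
    # Toy fixed weights standing in for learned parameters (pure invention).
    w = [[0.5, -0.2, 0.1],
         [0.0, 0.3, -0.4],
         [0.2, 0.2, 0.2]]
    head = [1.0, -1.0, 0.5]            # toy linear quality head
    h = list(features)                  # initialize latent from "visual" features
    for _ in range(steps):              # iterate the latent reasoning trajectory
        h = latent_step(h, w)
    # Read out a scalar quality score; no text tokens were generated en route.
    return sum(hi * wi for hi, wi in zip(h, head))

score = latent_quality_score([0.8, 0.1, -0.3])
print(round(score, 3))
```

Contrast this with a CoT-style assessor, which would generate many natural-language tokens describing distortions before producing a score; the latent loop above spends a fixed, small number of update steps instead, which is the intuition behind the token-efficiency claim.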

Abstract

Vision-Language Model (VLM)-based image quality assessment (IQA) has been significantly advanced by incorporating Chain-of-Thought (CoT) reasoning. Recent work has refined image quality reasoning by applying reinforcement learning (RL) and leveraging active visual tools. However, such strategies are typically language-centric, with visual information treated as a static precondition. Quality-related visual cues often cannot be fully abstracted into text due to the gap between discrete textual tokens and the quality perception space, which in turn restricts reasoning effectiveness for visually intensive IQA tasks. In this paper, we revisit this assumption by asking, "Is natural language the ideal space for quality reasoning?" and propose Q-Tacit, a new paradigm that elicits VLMs to reason beyond natural language in a latent quality space. Our approach follows a synergistic two-stage process: (i) injecting structural visual quality priors into the latent space, and (ii) calibrating latent reasoning trajectories to improve quality assessment ability. Extensive experiments demonstrate that Q-Tacit can effectively perform quality reasoning with significantly fewer tokens than previous reasoning-based methods, while achieving strong overall performance. This paper validates the proposition that language is not the only compact representation suitable for visual quality, opening possibilities for further exploration of effective latent reasoning paradigms for IQA. Source code will be released to support future research.