AI Navigate

Spatial Reasoning is Not a Free Lunch: A Controlled Study on LLaVA

arXiv cs.CV / 3/16/2026


Key Points

  • The paper argues that vision-language models still struggle with basic 2D spatial reasoning, and attributes part of this to design choices like CLIP-style encoders and flattening images into 1D token sequences with 1D positional encoding.
  • It presents a controlled diagnostic study within the LLaVA framework to isolate how encoder design and positional structure affect spatial grounding.
  • The authors compare CLIP-based encoders against alternatives trained with denser or generative objectives, as well as variants augmented with 2D positional encoding, across a suite of spatial benchmarks.
  • Results show consistent spatial reasoning gaps across all evaluated models, indicating that encoder objectives and 2D positional structure influence spatial behavior but do not fully close the gap.
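The 1D-versus-2D positional encoding distinction above can be made concrete with a toy sketch. This is not the paper's implementation; the function names and the sinusoidal scheme are illustrative assumptions. The key structural point it demonstrates: after raster-scan flattening, two vertically adjacent patches sit a full grid-width apart in the 1D sequence, whereas a factorized 2D encoding keeps their row and column coordinates explicit.

```python
import math

def sinusoidal(pos: int, dim: int) -> list[float]:
    # Standard sinusoidal encoding for one scalar position.
    return [
        math.sin(pos / 10000 ** (i / dim)) if i % 2 == 0
        else math.cos(pos / 10000 ** ((i - 1) / dim))
        for i in range(dim)
    ]

def encode_1d(row: int, col: int, grid: int, dim: int) -> list[float]:
    # Flattened raster-scan index, as in typical 1D positional encoding:
    # vertical neighbors end up `grid` positions apart in the sequence.
    return sinusoidal(row * grid + col, dim)

def encode_2d(row: int, col: int, grid: int, dim: int) -> list[float]:
    # Factorized 2D encoding (illustrative): half the channels encode
    # the row coordinate, the other half the column coordinate.
    return sinusoidal(row, dim // 2) + sinusoidal(col, dim // 2)
```

With a 24x24 patch grid, `encode_1d` assigns patches (0, 0) and (1, 0) the sequence positions 0 and 24, so their vertical adjacency is only implicit; under `encode_2d`, the two patches share an identical column half and differ by exactly one in the row coordinate.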

Abstract

Vision-language models (VLMs) have advanced rapidly, yet they still struggle with basic spatial reasoning. Despite strong performance on general benchmarks, modern VLMs remain brittle at understanding 2D spatial relationships such as relative position, layout, and counting. We argue that this failure is not merely a data problem, but is closely tied to dominant design choices in current VLM pipelines: reliance on CLIP-style image encoders and the flattening of images into 1D token sequences with 1D positional encoding. We present a controlled diagnostic study within the LLaVA framework to isolate how these choices affect spatial grounding. We evaluate frontier models and LLaVA variants on a suite of spatial benchmarks, comparing CLIP-based encoders against alternatives trained with denser or generative objectives, as well as variants augmented with 2D positional encoding. Our results show consistent spatial performance gaps across models, and indicate that encoder objectives and positional structure shape spatial behavior but do not fully resolve the underlying failure.