Learning to Draw ASCII Improves Spatial Reasoning in Language Models

arXiv cs.AI / April 17, 2026


Key Points

  • The paper investigates whether teaching LLMs to construct explicit spatial layouts, much as humans sketch, leads to genuine spatial understanding rather than superficial pattern matching.
  • It introduces Text2Space, a dataset that links natural-language spatial descriptions to ground-truth ASCII grid layouts and spatial question-answer pairs to disentangle representation-construction errors from reasoning errors.
  • The authors find a strong “Read-Write Asymmetry”: models can usually interpret ASCII well, but have difficulty generating ASCII from text, and those generation mistakes cause downstream incorrect answers.
  • Training on layout construction (Text→ASCII) notably improves spatial reasoning from text alone, even when the model does not need to output ASCII during inference.
  • Gains increase further when construction training is combined with comprehension training, and they transfer to three external spatial reasoning benchmarks, suggesting generalizable spatial understanding.

Abstract

When faced with complex spatial problems, humans naturally sketch layouts to organize their thinking, and the act of drawing further sharpens their understanding. In this work, we ask whether a similar principle holds for Large Language Models (LLMs): can learning to construct explicit visual layouts from spatial descriptions instill genuine spatial understanding? We introduce Text2Space, a dataset that pairs natural language descriptions with ground-truth ASCII grid layouts and spatial QA pairs, enabling us to separate failures in constructing spatial representations from failures in reasoning over them. We adopt ASCII because it is human-readable, operates entirely within the token space of language models, and encodes spatial relations in a structurally verifiable form. Our evaluation reveals a pronounced "Read-Write Asymmetry": LLMs interpret ASCII representations effectively but struggle to produce them from text, and these construction errors propagate to incorrect answers downstream. To address this limitation, we train models on layout construction (Text→ASCII) and find that it significantly improves spatial reasoning from text alone, even without producing any ASCII at inference time. Combining construction with comprehension training further amplifies these gains. Crucially, these improvements transfer to three external spatial reasoning benchmarks, demonstrating that, much as sketching sharpens human spatial thinking, learning to construct explicit layouts instills spatial understanding that generalizes beyond the training format.
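To make the idea concrete, here is a hypothetical sketch of what a Text2Space-style item might look like and why ASCII layouts are "structurally verifiable." The field names, grid symbols, and the `find` helper are all illustrative assumptions; the paper's actual dataset schema is not specified in this summary.

```python
# Hypothetical Text2Space-style example (schema and symbols are assumptions,
# not the paper's actual format).

description = (
    "A table is in the center of the room. "
    "A lamp is directly north of the table."
)

# ASCII grid layout: 'T' = table, 'L' = lamp, '.' = empty cell.
ascii_grid = [
    ".....",
    "..L..",
    "..T..",
    ".....",
]

question = "What object is directly north of the table?"
answer = "lamp"

def find(grid, symbol):
    """Return the (row, col) of the first cell containing `symbol`."""
    for r, row in enumerate(grid):
        for c, cell in enumerate(row):
            if cell == symbol:
                return (r, c)
    return None

# Structural verifiability: a relation like "directly north of" can be
# checked mechanically from grid coordinates (a smaller row index means
# further north), with no model in the loop.
lamp_r, lamp_c = find(ascii_grid, "L")
table_r, table_c = find(ascii_grid, "T")
assert lamp_c == table_c and lamp_r == table_r - 1  # lamp directly north
```

A check like this is what lets the authors disentangle the two failure modes: a model's generated grid can be verified against the ground-truth layout independently of whether its final answer is correct.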