Best LLM for logic/spatial reasoning on small context inputs?

Reddit r/LocalLLaMA / 4/16/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • The post asks for a small, locally runnable LLM that can handle logic and spatial reasoning for a 2D grid-based procedural text-adventure game.
  • The requester describes a setup with 32GB RAM and 8GB VRAM and notes that DeepSeek-R1-Distill-Qwen-7B-Q6_K_L.gguf performed poorly, hallucinating and ignoring the grid constraints.
  • The target task involves feeding the model a 10x10 board state plus a constrained action list (up to 50 valid actions), emphasizing strict adherence to spatial structure.
  • The user is specifically seeking model suggestions that fit within the available memory while improving “spatial IQ” and reducing grid-violation behavior.

My system has 32GB RAM and 8GB VRAM. I tried out DeepSeek-R1-Distill-Qwen-7B-Q6_K_L.gguf and it was vastly inadequate for what I wanted, so I'm looking for other suggestions.

I'm working on a procedural text-adventure engine where the world is a strict 2D coordinate grid. The model receives a board state (10x10) and a list of valid actions (up to 50). I've found that the 7B model I tried failed at 'spatial IQ': it kept hallucinating and trying to ignore the grid layout. I'm looking for a model I can split across VRAM and system RAM while staying under 32GB total, so some memory is left for the rest of the system.
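Since the engine already knows the full list of legal actions, one common mitigation (regardless of which model you pick) is to never trust the model's free-form reply directly: validate it against the action list and fall back deterministically on a miss. Grammar-constrained sampling (e.g. llama.cpp's GBNF grammars) can enforce this at decode time; the sketch below shows the simpler post-hoc version, with hypothetical function and action names, assuming actions are plain strings.

```python
def pick_action(model_reply: str, valid_actions: list[str]) -> str:
    """Map a model's free-form reply onto a legal action.

    Hypothetical helper: exact match first, then a loose 'reply
    contains exactly one legal action' match, then a safe default
    so a hallucinated action can never reach the game engine.
    """
    reply = model_reply.strip().lower()
    # 1. Exact (case-insensitive) match.
    for action in valid_actions:
        if action.lower() == reply:
            return action
    # 2. Loose match: the reply mentions exactly one legal action.
    contained = [a for a in valid_actions if a.lower() in reply]
    if len(contained) == 1:
        return contained[0]
    # 3. Hallucination or ambiguity: deterministic fallback.
    return valid_actions[0]

# Example action list for a 10x10 grid game (assumed format).
actions = ["move north", "move east", "open chest"]
print(pick_action("Move North", actions))              # → move north
print(pick_action("I will open chest now!", actions))  # → open chest
print(pick_action("teleport to the moon", actions))    # → move north
```

This doesn't make a 7B model spatially smarter, but it caps the damage: grid violations become a retry/fallback path instead of corrupt game state.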

submitted by /u/clambarlambar