AI Navigate

Litesearch: Karpathy's autoresearch but for consumer GPUs (4–8GB) + easy GUI

Reddit r/LocalLLaMA / 3/22/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • Litesearch is a fork of Karpathy's autoresearch designed to run tiny LLM experiments on consumer GPUs with 4–8 GB of VRAM.
  • It automatically selects model size, depth, batch, and sequence length to fit VRAM while leaving a buffer to prevent OOM errors.
  • It provides a simple dark GUI dashboard showing live VRAM usage, logs, and a config preview, eliminating the need to stare at a terminal.
  • It strips back fancy kernels (uses torch sdpa), offers easier setup, and works on older Pascal GPUs.
  • The project is currently NVIDIA-only, with approximate VRAM-to-parameter mappings (4 GB ≈ 86M, 8 GB ≈ 285M), an MIT license, and pip/uv installation.

Karpathy's autoresearch is awesome: an agent edits train.py and runs tiny LLM experiments overnight. But it wants serious VRAM.

I forked it to run on normal cards like my 1080/3060:

  • Auto-picks model size/depth/batch/seq len so it fits your VRAM (leaves buffer, no more OOM surprises)
  • Simple dark GUI dashboard: live VRAM bar, logs, config preview, start/stop — no terminal staring
  • Stripped fancy kernels (uses torch sdpa), easier setup, works on older Pascal too
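The kernel swap in the last bullet is straightforward in PyTorch: instead of a custom fused attention kernel, you call the built-in `torch.nn.functional.scaled_dot_product_attention`, which picks the best available backend (flash, memory-efficient, or plain math) for the device and so still runs on older GPUs like Pascal. A minimal sketch (shapes are illustrative, not taken from litesearch):

```python
# Illustrative use of torch's built-in SDPA instead of a custom fused kernel.
import torch
import torch.nn.functional as F

batch, heads, seq, head_dim = 2, 4, 128, 64
q = torch.randn(batch, heads, seq, head_dim)
k = torch.randn(batch, heads, seq, head_dim)
v = torch.randn(batch, heads, seq, head_dim)

# Causal self-attention; PyTorch dispatches to whichever SDPA backend
# the current device supports (no Ampere-only kernels required).
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```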

Quick table example (full in README):
4GB → ~86M params
8GB → ~285M params
(Currently NVIDIA-only, but it works across all of their GPUs)
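The VRAM auto-sizing could look roughly like the sketch below: estimate training memory for each candidate config (weights + optimizer states + activations), then pick the largest one that fits under the VRAM budget minus a safety buffer. All function names, memory formulas, and candidate configs here are my own illustrative assumptions, not litesearch's actual logic:

```python
# Hypothetical sketch of VRAM-aware config selection (not litesearch's code).

def est_train_mem_gb(params_m, batch, seq_len, d_model, n_layers,
                     bytes_per_param=2, optimizer_mult=4):
    """Very rough training-memory estimate in GB (fp16 weights assumed)."""
    weights_gb = params_m * 1e6 * bytes_per_param / 1e9
    optim_gb = weights_gb * optimizer_mult  # grads + Adam moments, roughly
    # Activations: ~10 fp16 tensors of shape (batch, seq, d_model) per layer.
    act_gb = batch * seq_len * d_model * n_layers * 10 * 2 / 1e9
    return weights_gb + optim_gb + act_gb

def pick_config(vram_gb, candidates, buffer_gb=0.5):
    """Largest candidate that fits under vram_gb minus a safety buffer."""
    fitting = [c for c in candidates
               if est_train_mem_gb(**c) <= vram_gb - buffer_gb]
    return max(fitting, key=lambda c: c["params_m"]) if fitting else None

# Hypothetical candidate configs matching the README's rough table.
candidates = [
    dict(params_m=86,  batch=16, seq_len=512,  d_model=512,  n_layers=8),
    dict(params_m=285, batch=8,  seq_len=1024, d_model=1024, n_layers=16),
]

cfg_4gb = pick_config(4.0, candidates)
cfg_8gb = pick_config(8.0, candidates)
```

The buffer is what prevents the OOM surprises the post mentions: the estimate is deliberately rough, so leaving headroom matters more than estimating precisely.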

Repo: https://github.com/jlippp/litesearch
MIT, quick pip/uv install.

(Props to Karpathy for the original idea.)

submitted by /u/Fast-Mousse405