Karpathy's autoresearch is awesome — agent edits train.py and runs tiny LLM experiments overnight. But it wants serious VRAM.
I forked it to run on normal cards like my 1080/3060:
- Auto-picks model size/depth/batch/seq len so it fits your VRAM (leaves buffer, no more OOM surprises)
- Simple dark GUI dashboard: live VRAM bar, logs, config preview, start/stop — no terminal staring
- Stripped fancy kernels (uses torch sdpa), easier setup, works on older Pascal too
Quick sizing example (full table in README):
4GB → ~86M params
8GB → ~285M params
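The auto-sizing boils down to a tier lookup: pick the largest config whose VRAM requirement your card clears. A minimal sketch of that idea — the function name and tier logic are my assumptions (and I'm assuming the tiers already bake in the safety buffer); only the two (VRAM, params) data points come from the table above:

```python
# Hypothetical sketch of VRAM-based auto-sizing. Only the two data
# points below come from the sizing table; everything else is assumed.

SIZE_TIERS = [
    (4, 86_000_000),   # 4 GB  -> ~86M params
    (8, 285_000_000),  # 8 GB  -> ~285M params
]

def pick_param_budget(vram_gb: float) -> int:
    """Return the largest parameter budget whose VRAM tier fits."""
    budget = 0
    for tier_gb, params in SIZE_TIERS:
        if vram_gb >= tier_gb:
            budget = params
    return budget

# On an actual NVIDIA card you'd feed in the real number, e.g.:
#   import torch
#   vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3

print(pick_param_budget(8))  # 285000000
print(pick_param_budget(6))  # no 6 GB tier, falls back to 4 GB: 86000000
```

A card between tiers just rounds down to the nearest config that fits, which is why there are no OOM surprises.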
(Currently NVIDIA-only, but should run on any of their GPUs)
Repo: https://github.com/jlippp/litesearch
MIT, quick pip/uv install.
(Props to Karpathy for the original idea.)
