Running Llama2 Models in Vanilla Minecraft With Pure Commands

Reddit r/LocalLLaMA / 4/4/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage · Models & Research

Key Points

  • A developer created a tool that converts Llama2 language models into a Minecraft datapack so text generation (inference) can run inside the game using only vanilla commands.
  • The current implementation is semi-finished, supporting only argmax sampling and lacking a tokenizer, which limits outputs and can cause the model to get stuck in repetitive loops.
  • The project notes that adding top-p sampling and completing the tokenizer would likely improve generation quality and usability, including supporting more natural continuation.
  • Performance is reported as extremely slow: a ~15M parameter model may take around 20 minutes to generate a single token, making it more of a proof-of-concept than a practical chat experience today.
  • Users are invited to try it by downloading model and tokenizer binaries (e.g., from the llama2.c ecosystem) and following the repository instructions; the author plans to keep improving toward a usable Minecraft chat model.

I made a program that converts any llama2 large language model into a Minecraft datapack, so you can run inference right inside the game. It's still semi-finished: currently I've only implemented argmax sampling, so the output sometimes gets stuck in loops. Adding top-p sampling will probably improve this a lot. The tokenizer is also missing for now, so it can only generate text from scratch.
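To illustrate why argmax loops and what top-p buys you: argmax always picks the single most likely next token, so once the model enters a high-probability cycle it never escapes, while top-p (nucleus) sampling draws randomly from the smallest set of tokens whose cumulative probability exceeds a threshold. A minimal sketch of both, in Python with NumPy (the function names here are illustrative, not from the datapack):

```python
import numpy as np

def sample_argmax(logits):
    # Greedy decoding: always the single most likely token.
    # Deterministic, so repetitive cycles can never be escaped.
    return int(np.argmax(logits))

def sample_top_p(logits, p=0.9, rng=None):
    # Nucleus sampling: restrict to the smallest set of tokens whose
    # cumulative probability exceeds p, then sample from that set.
    rng = rng or np.random.default_rng()
    z = logits - logits.max()              # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum()
    order = np.argsort(probs)[::-1]        # token ids, most likely first
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1   # keep tokens up to the first
    keep = order[:cutoff]                  # one that pushes cumsum past p
    keep_probs = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=keep_probs))
```

Implementing this in Minecraft commands would additionally require a scoreboard-based pseudo-random source, but the logic is the same: sort, truncate at the nucleus, renormalize, sample.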

Inference speed is... quite slow. With a 15M parameter model, it takes roughly 20 minutes to produce a single token. If you want to try it out yourself, you can download "stories15M.bin" and "tokenizer.bin" from llama2.c and follow the instructions in my repository down below.
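For anyone inspecting those files before loading them: llama2.c checkpoints such as stories15M.bin begin, to my understanding, with a header of seven little-endian int32 config fields, followed by the raw float32 weight tensors. A small sketch that parses such a header (the example values below match the published stories15M configuration, but verify against llama2.c's run.c for your checkpoint):

```python
import struct

# llama2.c checkpoint layout (assumed): seven little-endian int32 config
# fields, then the float32 weights.
CONFIG_FIELDS = ("dim", "hidden_dim", "n_layers", "n_heads",
                 "n_kv_heads", "vocab_size", "seq_len")

def read_config(header_bytes):
    # Unpack the 28-byte header into a named config dict.
    values = struct.unpack("<7i", header_bytes[:28])
    return dict(zip(CONFIG_FIELDS, values))

# Synthetic header with stories15M-like values, so the sketch is runnable
# without downloading the checkpoint:
header = struct.pack("<7i", 288, 768, 6, 6, 6, 32000, 256)
print(read_config(header))
```

With a real file you would pass `open("stories15M.bin", "rb").read(28)` instead of the synthetic header.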

I will keep working on this project; hopefully one day I will be able to bring a usable chat model to Minecraft.

Github Repository

*Inspired by Andrej Karpathy's llama2.c

submitted by /u/This-Purchase-3325