Catapult - a llama.cpp launcher / manager

Reddit r/LocalLLaMA / 4/10/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage

Key Points

  • Catapult is introduced as a new launcher/manager for running and managing custom `llama.cpp` builds, models, and runtime presets in one place.
  • The project covers every command-line option `llama-server` currently accepts, targeting users who need advanced prompt-cache/checkpoint and other RAM-related tuning.
  • It offers both a GUI and a TUI (terminal UI) designed to work efficiently alongside a running background server while still providing log visibility.
  • Catapult is distributed with source code and pre-built binaries for Linux (deb/rpm/AppImage), macOS, and Windows, using Tauri as its main engine to avoid typical Electron resource overhead.
  • The author positions it as an “unofficial” community tool and asks whether the ecosystem already has enough launcher solutions or if there’s demand for this approach.

I would like to introduce to all the LocalLlama people my newest creation: Catapult.

Catapult started out as an experiment - what if I actually vibe-coded a launcher that I would use myself? After all, my use cases have completely shut me out of using LM Studio - I need to run arbitrary custom llama.cpp builds, sometimes with very customized options - but it would still be good to have one place to organize / search / download models, keep runtime presets, run the server, and launch the occasional quick-test chat window.

So, I set out to do it. Since ggml is now part of HuggingFace and they have their own long-term development roadmap, this is not an "official" launcher by any means. This is just my attempt to bring something that I feel is missing - a complete, but also reasonably user-friendly experience for managing the runtimes, models, and launch parameters. The one feature I hope everyone will appreciate is that the launcher includes literally *every single option* accepted by `llama-server` right now - so no more wondering "when / whether option X will be merged into the UI", which is kind of relevant, judging from the recent posts of people who find themselves unable to modify the pretty RAM-hungry defaults of `llama-server` with respect to prompt cache / checkpoints.
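To make the "every option" point concrete, here is a sketch of the kind of `llama-server` invocation a launcher preset would wrap. The model path and values are placeholders, and flag names should be checked against your build's `llama-server --help`, since they can vary between llama.cpp versions:

```shell
# Hypothetical preset a launcher like Catapult might generate.
# --cache-type-k/--cache-type-v quantize the KV cache to cut RAM/VRAM use,
# one of the knobs the post mentions people struggle to reach from GUIs.
llama-server \
  --model ./models/my-model-Q4_K_M.gguf \
  --ctx-size 8192 \
  --n-gpu-layers 99 \
  --cache-type-k q8_0 \
  --cache-type-v q8_0 \
  --host 127.0.0.1 \
  --port 8080
```

A launcher that exposes the full option surface can simply regenerate a command line like this from a saved preset instead of hard-coding a curated subset of flags.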

I've tried to polish it and make sure that all features are usable and tested, but of course this is a first release. What I'm more interested in is whether the ecosystem is already saturated with all the launcher solutions out there, or whether there's actually anyone for whom this would be worth using.

Oh, as a bonus: it includes a TUI. As per some internal Discord discussions: not a "yet-another-Electron-renderer" TUI, but a real TUI optimized for the terminal experience, without fifteen stacked windows and the like. Feature-wise it's a bit less complete than the GUI, but it still has the main feature set (and, as part of the adaptation to the terminal, it lets you jump in and out with the server running in the background, while a log view keeps the server output visible).

Comes in source code form or as pre-packaged Linux (deb/rpm/AppImage), macOS, and Windows binaries. The main engine is Tauri, so hopefully no Electron pains with the launcher itself using as much RAM as `llama-server`. License is Apache 2.0.
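For those building from source, a standard Tauri project is built with the Tauri CLI; the commands below are the generic Tauri flow, not instructions taken from Catapult's repo, so check its README for the project's actual steps:

```shell
# Generic Tauri build flow (assumption: the project follows the standard
# Tauri layout; consult the repository README for exact prerequisites).
cargo install tauri-cli   # one-time: install the Tauri CLI
cargo tauri dev           # run the app in development mode with hot reload
cargo tauri build         # produce release bundles (deb/rpm/AppImage, etc.)
```

This is also where the RAM advantage comes from: Tauri renders through the OS webview instead of bundling a Chromium instance per app, which is the "Electron pains" the post alludes to.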

submitted by /u/ilintar