AI Doomsday Toolbox v0.932 update

Reddit r/LocalLLaMA / 3/30/2026


Key Points

  • AI Doomsday Toolbox v0.932 adds benchmarking features for local LLMs, letting users compare thread counts and select better per-device configurations.
  • The update introduces a dataset creator that can import TXT/PDFs, chunk and clean content, generate and rate Q&A pairs, and export datasets in Alpaca JSON format with customizable prompts.
  • It significantly expands Termux/proot and agent capabilities, including improved proot distro support, tool/webview/file management workflows, and an agent workspace where LLMs can run commands and use custom tools/agents.
  • Multimedia support is enhanced with Whisper-based subtitle generation and “subtitle burning” into videos, along with summary workflow improvements for Ollama and llama.cpp-compatible backends.
  • The project also adds built-in Ollama/llama tooling (model/Modelfile management and longer-call stability for llama-server style backends) and a “Pet system,” plus an easier distribution path via a Google Play beta.

I’ve been working on an Android project for running local AI. I've posted about it here before, and the latest version adds a pretty big batch of changes and additions.

Main additions in this update:

  • Benchmarking for local LLMs: Users can benchmark their device and compare different thread counts to find the best setup for a model instead of guessing.
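
The post doesn't show the app's internals, but the thread-count sweep can be sketched in Python. Everything here is illustrative: `fake_token_step` is a hypothetical stand-in for one unit of inference work, and real backends (llama.cpp etc.) release the GIL so threads actually scale, which pure-Python work does not.

```python
import time
from multiprocessing.dummy import Pool  # thread pool with a Pool-style API

def fake_token_step(_):
    # Hypothetical stand-in for one unit of inference work (e.g. one token step).
    return sum(i * i for i in range(20_000))

def benchmark(threads, tasks=64):
    """Run `tasks` work units on `threads` threads; return a tokens/sec-style throughput."""
    with Pool(threads) as pool:
        start = time.perf_counter()
        pool.map(fake_token_step, range(tasks))
        elapsed = time.perf_counter() - start
    return tasks / elapsed

# Sweep candidate thread counts and pick the fastest, as the app's benchmark does.
results = {t: benchmark(t) for t in (1, 2, 4, 8)}
best = max(results, key=results.get)
```

The shape is the point: run identical workloads at each thread count, compare throughput, keep the winner as the per-device default.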

  • Dataset creator: You can import TXT or PDF files, split them into chunks, clean them up, generate question/answer pairs, rate them, and export the final dataset in Alpaca JSON format. The prompts used in the pipeline can also be customized.
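
The export side of that pipeline can be sketched as below. The chunking here is deliberately naive and the Q&A pair is hardcoded (in the app an LLM generates and rates the pairs); only the Alpaca record shape (`instruction`/`input`/`output`) is the real target format.

```python
import json
import textwrap

def chunk_text(text, size=400):
    # Naive fixed-width chunking; the app's chunking/cleaning step is richer.
    return textwrap.wrap(text, size)

def to_alpaca(pairs):
    """Convert (question, answer) tuples into Alpaca-format records."""
    return [{"instruction": q, "input": "", "output": a} for q, a in pairs]

chunks = chunk_text("Some imported document text. " * 40)
# Hardcoded pair standing in for the LLM-generated, user-rated Q&A step.
pairs = [("What format does the toolbox export?", "Datasets in Alpaca JSON format.")]
dataset_json = json.dumps(to_alpaca(pairs), indent=2)
```

Each record that survives the rating step is appended to the list before the final `json.dumps` export.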

  • Termux / proot workflows: The app now has better support for using a proot distro through Termux, including SSH setup help, install flows for predefined tools, in-app webview access for compatible tools, and file management from inside the app.

  • AI agent workspace: There is now an agent-oriented environment built around Termux and local backends, with support for custom tools, custom agents, and more project-oriented workflows. It lets your LLM use tools and run commands.

  • Subtitle burning: You can generate subtitles with Whisper and burn them into video with font, color, and position controls.
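
How the app invokes its encoder internally isn't stated, but burning an SRT with style overrides is typically done with FFmpeg's `subtitles` filter. The sketch below only builds the command; the filenames and defaults are assumptions.

```python
def burn_subtitles_cmd(video, srt, out,
                       font="DejaVu Sans",
                       colour="&H00FFFFFF",  # ASS colour format: &HAABBGGRR
                       margin_v=20):
    """Build an ffmpeg command that hard-burns `srt` into `video` with style overrides."""
    # force_style takes ASS style fields: font, primary colour, vertical margin.
    style = f"FontName={font},PrimaryColour={colour},MarginV={margin_v}"
    return [
        "ffmpeg", "-i", video,
        "-vf", f"subtitles={srt}:force_style='{style}'",
        out,
    ]

cmd = burn_subtitles_cmd("clip.mp4", "clip.srt", "clip_subbed.mp4")
```

Pass `cmd` to `subprocess.run(cmd)` to perform the actual burn; the font/colour/position controls map directly onto the `force_style` fields.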

  • Summary workflow changes: Summaries now work better with Ollama and llama.cpp-compatible backends.

  • Built-in Ollama and llama tools: There is now a built-in Ollama manager for models and Modelfiles, plus a native chat interface for llama-server-style backends that can run long calls to the server without dropping the connection (which happens with the web UI).
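
A minimal sketch of a long-call client against llama.cpp's `llama-server` `/completion` endpoint, assuming the default local address; the key detail for slow generations is a generous read timeout so the connection isn't dropped mid-response.

```python
import json
import urllib.request

SERVER = "http://127.0.0.1:8080"  # default llama-server address (assumption)

def completion_payload(prompt, n_predict=512, stream=False):
    # Fields accepted by llama-server's /completion endpoint.
    return {"prompt": prompt, "n_predict": n_predict, "stream": stream}

def run_completion(prompt, timeout=600):
    """POST to /completion with a long timeout so slow generations finish."""
    req = urllib.request.Request(
        SERVER + "/completion",
        data=json.dumps(completion_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())

payload = completion_payload("Hello", n_predict=64)
```

Setting `stream=True` and reading the response incrementally is the other common way to keep long generations alive.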

  • Pet system: The Tama side of the app has gameplay around memory, adventures, farm management, and interaction.

It still includes the things I had been focusing on before too, like distributed inference across Android devices, workflow-based processing for media and documents, offline knowledge tools, local image generation, and the general idea of reusing old phones for local AI instead of leaving them unused.

If you want the easiest install path, there is also a Google Play beta now. The Play version uses an App Bundle, so the install is smaller than a universal package, and joining the beta helps a lot with testing across different devices:

Google Play beta: here

GitHub: here

Feedback is appreciated.

submitted by /u/ManuXD32