AI Navigate

Local manga translator with LLMs built in

Reddit r/LocalLLaMA / 3/14/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • It combines a YOLO model for text detection, a custom OCR model, a LaMa model for inpainting, a suite of LLMs for translation, and a custom text rendering engine to blend translated text into manga images.
  • It is open source and written in Rust as a standalone application with CUDA bundled, requiring zero setup.
  • The tool runs locally, enabling offline manga translation without dependence on cloud services.
  • The project has been developed for about a year and has achieved good results, with the GitHub repository at https://github.com/mayocream/koharu.

I have been working on this project for almost one year, and it has achieved good results in translating manga pages.

In general, it combines a YOLO model for text detection, a custom OCR model, a LaMa model for inpainting, a bunch of LLMs for translation, and a custom text rendering engine for blending text into the image.
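The stages above form a simple sequential pipeline per page: detect text regions, OCR each one, inpaint the original text away, translate, then render the translation back in. Below is a minimal Rust sketch of that flow with all model calls stubbed out; every type and function name here is illustrative and is not Koharu's actual API.

```rust
/// A detected text region on the page (bounding box plus contents).
#[derive(Debug, Clone)]
struct TextBlock {
    bbox: (u32, u32, u32, u32), // x, y, width, height
    source_text: String,
    translated_text: String,
}

/// Stage 1: text detection (a YOLO model in the real tool). Stubbed here.
fn detect_text_regions(_page: &[u8]) -> Vec<TextBlock> {
    vec![TextBlock {
        bbox: (10, 20, 100, 40),
        source_text: String::new(),
        translated_text: String::new(),
    }]
}

/// Stage 2: OCR each region (a custom OCR model in the real tool).
fn run_ocr(block: &mut TextBlock) {
    block.source_text = "こんにちは".to_string(); // placeholder recognition result
}

/// Stage 3: inpaint the original text away (LaMa in the real tool).
fn inpaint(_page: &mut [u8], _bbox: (u32, u32, u32, u32)) {
    // a real inpainter erases pixels under the bubble so the
    // translated text can be drawn on a clean background
}

/// Stage 4: translate the recognized text (an LLM in the real tool).
fn translate(block: &mut TextBlock) {
    block.translated_text = match block.source_text.as_str() {
        "こんにちは" => "Hello".to_string(),
        other => other.to_string(),
    };
}

/// Stage 5: render the translation back into the region.
fn render(_page: &mut [u8], _block: &TextBlock) {
    // a real renderer would pick a font size, wrap lines,
    // and rasterize glyphs inside the bounding box
}

fn main() {
    let mut page: Vec<u8> = vec![0; 64]; // stand-in for decoded image pixels
    let mut blocks = detect_text_regions(&page);
    for b in blocks.iter_mut() {
        run_ocr(b);
        inpaint(&mut page, b.bbox);
        translate(b);
        render(&mut page, b);
    }
    for b in &blocks {
        println!("{:?} -> {}", b.bbox, b.translated_text);
    }
}
```

Keeping the stages behind plain function boundaries like this is what makes it easy to swap in different models (e.g. different LLM backends for the translation step) without touching the rest of the pipeline.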

It's open source and written in Rust; it's a standalone application with CUDA bundled, with zero setup required.

https://github.com/mayocream/koharu

submitted by /u/mayocream39