Has anyone managed to run an offline agent (OpenClaw or similar) with a local LLM on Android?

Reddit r/LocalLLaMA / 3/27/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • A Reddit user is testing local LLMs on Android (via Termux and apps like MNN Chat) and wants an offline “agent layer” similar to OpenClaw, fully local with no cloud/API access.
  • They can already run GGUF local models and log inputs/outputs to SQLite, but they lack true agent capabilities like tool use, chaining, and persistent/local memory.
  • The main challenge they highlight is that most agent frameworks are desktop- or Python-environment dependent, which is difficult on mobile Android.
  • They’re asking the community whether anyone has successfully run offline agents on-device, what lightweight agent frameworks work in Termux, and any hacky workarounds—especially for tool calling and automation loops.

I’m currently experimenting with running local LLMs directly on Android (mostly via Termux + apps like MNN Chat).

What I’m trying to figure out:

Is there any way to run something like an offline agent (e.g. OpenClaw or similar) fully locally on a smartphone?

Main constraints:

- no cloud

- no API calls

- fully offline

- ideally controllable via CLI or scripts (Termux)

So far:

- I can run local models (GGUF etc.)

- I can log inputs/outputs via SQLite

- but there’s no real “agent layer” (tool use, chaining, memory)
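(For anyone at the same stage: the SQLite logging part needs nothing beyond Python's stdlib, which Termux's `python` package provides. A minimal sketch; the database path, table name, and columns are made up for illustration:)

```python
import sqlite3
import time

# Open (or create) a local log database -- the path is illustrative.
con = sqlite3.connect("llm_log.db")
con.execute(
    "CREATE TABLE IF NOT EXISTS turns (ts REAL, prompt TEXT, response TEXT)"
)

def log_turn(prompt: str, response: str) -> None:
    """Append one prompt/response pair to the local log."""
    con.execute(
        "INSERT INTO turns VALUES (?, ?, ?)",
        (time.time(), prompt, response),
    )
    con.commit()

log_turn("hello", "hi there")

# Read back the most recent turn.
last = con.execute(
    "SELECT prompt, response FROM turns ORDER BY ts DESC LIMIT 1"
).fetchone()
print(last)  # -> ('hello', 'hi there')
```

Since the database is a single file on the phone's storage, the same log can double as crude persistent memory later.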

Problem:

Most agent frameworks seem desktop-focused or depend on Python environments that are painful on Android.

Questions:

- Has anyone actually done this on-device?

- Any lightweight agent frameworks that work in Termux?

- Workarounds? (even hacky ones)

I’m especially interested in:

- tool calling

- basic automation loops

- local memory handling
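In the absence of a framework, a bare-bones version of all three fits in plain stdlib Python. This is only a sketch: the model call is stubbed out (in practice it would shell out to a local llama.cpp binary or similar), and the `TOOL:` line protocol is an invented convention for illustration, not anything OpenClaw or MNN Chat actually implements:

```python
import json
import re

# Whitelisted "tools" the model may request -- illustrative only.
TOOLS = {
    "add": lambda a, b: a + b,
    "echo": lambda s: s,
}

# Naive local memory: an in-process list of past turns
# (could be swapped for the SQLite log).
memory = []

def run_model(prompt: str) -> str:
    """Stub for the local model call. Replace with real inference
    (e.g. subprocess to a llama.cpp binary); here it always fakes
    a tool request so the loop can be demonstrated offline."""
    return 'TOOL: {"name": "add", "args": [2, 3]}'

def step(user_msg: str) -> str:
    """One automation-loop step: model -> optional tool -> result."""
    memory.append(("user", user_msg))
    out = run_model(user_msg)
    m = re.match(r"TOOL:\s*(\{.*\})", out)
    if m:
        call = json.loads(m.group(1))
        result = TOOLS[call["name"]](*call["args"])
        out = f"tool {call['name']} -> {result}"
    memory.append(("agent", out))
    return out

print(step("what is 2 + 3?"))  # -> tool add -> 5
```

Parsing a `TOOL:`-prefixed JSON line out of raw completions is obviously fragile compared to proper structured tool calling, but it has no dependencies, so it at least runs in Termux as-is.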

Feels like mobile is still missing a proper local-first agent stack.

Would appreciate any pointers.

submitted by /u/NeoLogic_Dev