HiveCommand: local-first terminal dashboard for AI coding agents with local Whisper voice control and multi-agent orchestration
Reddit r/LocalLLaMA / 3/18/2026
📰 News · Developer Stack & Infrastructure · Tools & Practical Usage

> Built an open-source terminal dashboard for managing multiple AI coding sessions from one place. Everything runs locally, with no cloud dependency for the core features. Voice dictation runs on local Whisper (or cloud STT if you prefer), so you can talk to your coding agents without sending audio to a third party. Sessions persist through restarts, and you can pop out any terminal to your system terminal and adopt it back anytime. GitHub: https://github.com/ai-genius-automations/hivecommand. License: Apache 2.0 + Commons Clause. Would love feedback, especially on the local Whisper integration.
Key Points
- HiveCommand is an open-source, local-first terminal dashboard for managing multiple AI coding sessions from a single interface, with no cloud dependency for core features.
- It supports multi-agent hive-mind orchestration to run parallel coding agents and provides a live-streaming grid of terminal outputs.
- Voice dictation uses local Whisper by default (cloud STT is optional), so speech control works without sending audio to third parties; sessions persist through restarts.
- It includes a built-in web browser, Git source control, a desktop app with a system tray, per-project session tracking, and a one-line install.
- The project is released under Apache 2.0 plus Commons Clause, and feedback on the local Whisper integration is welcome.
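The restart persistence described in the key points can be sketched in miniature: a dashboard records each session's metadata (label, command, working directory) to a JSON state file, so a freshly started process can rebuild its session list and re-adopt terminals. This is a hypothetical illustration of the general technique, not HiveCommand's actual implementation; the file name and field names are assumptions.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical sketch of restart-safe session tracking: each session's
# metadata is written to a JSON file, so a dashboard process can rebuild
# its session list after being restarted.

def save_sessions(path: Path, sessions: list) -> None:
    """Persist session metadata (label, command, cwd) as JSON."""
    path.write_text(json.dumps(sessions, indent=2))

def load_sessions(path: Path) -> list:
    """Reload session metadata; empty list if nothing was saved yet."""
    if not path.exists():
        return []
    return json.loads(path.read_text())

# Simulate one dashboard run, then a restart that restores the sessions.
state_file = Path(tempfile.mkdtemp()) / "sessions.json"
save_sessions(state_file, [
    {"label": "agent-1", "command": "claude", "cwd": "/work/projA"},
    {"label": "agent-2", "command": "aider",  "cwd": "/work/projB"},
])

restored = load_sessions(state_file)  # a new process would call this at startup
print([s["label"] for s in restored])  # → ['agent-1', 'agent-2']
```

A real tool would also need to reattach to the live terminal processes (e.g. by PID or a multiplexer session name), which is the harder part; the JSON state file only makes the session list survive the restart.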
Related Articles

Easing veterans' burden of training junior engineers: using AI to generate ladder diagrams for PLC control
日経XTECH

Hey dev.to community – sharing my journey with Prompt Builder, Insta Posts, and practical SEO
Dev.to

Why Regex is Not Enough: Building a Deterministic "Sudo" Layer for AI Agents
Dev.to

Perplexity Hub
Dev.to

How to Build Passive Income with AI in 2026: A Developer's Practical Guide
Dev.to