Something I didn't expect when I started building with AI agents: the interface problem. My agent handles 15+ automations, runs night shifts, and processes tasks across CLI, Discord, and email. It's capable. But I had no way to see what it was doing without asking. And asking "what's your status?" every time is not a real workflow. It's a workaround.

Humans process information visually. We scan, we group, we notice patterns at a glance. That's not how agents communicate. They give you text. Logs. Summaries. And when your agent is doing 20 things in parallel across 5 channels, text stops scaling.

So I built a custom visual dashboard. Kanban board, real-time updates, native apps for macOS and iOS. Three platforms. 54 commits. It worked for about 6 weeks.

Then I hit what I'd call the productivity paradox of AI agents: the more capable your agent becomes, the more things happen, and the more you need from your interface. I was adding features to keep up with the agent. Every feature added maintenance. Every simplification broke something. I was spending more time on the dashboard than on the actual work the agent was helping with.

The fix wasn't building better custom software. It was finding a solid open-source foundation (in my case, Fizzy by 37signals) and building only the integration layer on top. A 94-line adapter between my agent and the board. That's the custom part. The board itself shouldn't be my problem.

Two things I learned: I wrote up the full journey for anyone thinking about this problem: https://thoughts.jock.pl/p/wizboard-fizzy-ai-agent-interface-pivot-2026

Curious: for those of you running agents beyond chatbots, how do you keep track of what they're doing?
AI agents work in text. Humans think in visuals. I spent 2 months learning this the hard way.
Reddit r/artificial / 4/13/2026
💬 Opinion · Developer Stack & Infrastructure · Ideas & Deep Analysis · Tools & Practical Usage
Key Points
- The author describes a key bottleneck in building AI agents: they operate across many channels but communicate only via text logs and summaries, which doesn't scale for humans, who monitor work visually by scanning and spotting patterns.
- They built a custom visual dashboard (Kanban, real-time updates, macOS/iOS) to replace repetitive “what’s your status?” checks, but hit an interface/productivity paradox where more agent capability caused more UI maintenance overhead.
- The main lesson is that the long-term challenge isn’t “smarter agents,” but designing an interface that supports human monitoring and decision-making as tasks proliferate.
- Rather than continuously rebuilding custom tooling, the author recommends starting from an open-source foundation (Fizzy by 37signals in their case) and implementing only a thin integration/adapter layer between the agent and the dashboard.
- They emphasize a planning perspective: while version 1 is quick to build, version 20 becomes a job, so teams should evaluate whether to build or reuse interface infrastructure.
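The post doesn't show the 94-line adapter, but the "thin integration layer" idea can be sketched: translate the agent's task-state events into card updates on a board the author doesn't maintain. Everything below is an assumption for illustration — the `TaskEvent` shape, the column names, and the `/cards` endpoint are hypothetical, not Fizzy's actual API.

```python
import json
from dataclasses import dataclass
from urllib import request

# Hypothetical mapping from agent task states to Kanban columns.
# Fizzy's real board model and column names may differ.
STATE_TO_COLUMN = {
    "queued": "Backlog",
    "running": "In Progress",
    "blocked": "Blocked",
    "done": "Done",
}

@dataclass
class TaskEvent:
    """One status event emitted by the agent for one task."""
    task_id: str
    title: str
    state: str
    detail: str = ""

def to_card(event: TaskEvent) -> dict:
    """Translate an agent event into a board-card payload.

    Unknown states fall back to "Backlog" so the card is never lost.
    """
    return {
        "external_id": event.task_id,
        "title": event.title,
        "column": STATE_TO_COLUMN.get(event.state, "Backlog"),
        "note": event.detail,
    }

def push_card(board_url: str, event: TaskEvent) -> None:
    """POST the card to a hypothetical board HTTP endpoint."""
    payload = json.dumps(to_card(event)).encode()
    req = request.Request(
        f"{board_url}/cards",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    request.urlopen(req)  # fire-and-forget; real code wants retries/auth
```

The design point is that the adapter owns only the translation (event → card); rendering, real-time updates, and native apps stay the board's problem, which is how the custom surface stays under a hundred lines.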