Anthropic recently shipped interactive artifacts in Claude: charts, diagrams, and visualizations rendered right in the chat. Cool feature, but locked to one provider. I wanted the same thing for whatever model I'm running, so I built it. It's called Inline Visualizer, it's BSD-3 licensed, and it works with any model that supports tool calling: Qwen, Mistral, Gemma, DeepSeek, Gemini, Claude, GPT, doesn't matter.

What it actually does: it gives your model a design system and a rendering tool. The model writes HTML/SVG fragments, the tool wraps them in a themed shell with dark-mode support, and they render inline in chat. No iframes-within-iframes mess, no external services, no API keys.

The interesting part is the JS bridge it injects: elements inside the visualization can send messages back to the chat. Click a node in an architecture diagram and your model gets asked about that component. Fill out a quiz and the model grades your answers. Pick preferences in a form and the model gives you a tailored recommendation. It turns diagrams into conversation interfaces.

Some things it can render: architecture diagrams, charts, quizzes, preference forms, and expandable explainers, all with dark/light theming.
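To give a feel for the "themed shell" idea, here's a minimal sketch of how a tool might wrap a model-written fragment so it picks up dark mode automatically. This is not the plugin's actual code; the function name, class name, and styling are all assumptions for illustration.

```javascript
// Hypothetical sketch: wrap a model-generated HTML/SVG fragment in a
// shell that themes it and follows the user's dark/light preference.
// Names (wrapFragment, viz-shell) are made up, not the plugin's API.
function wrapFragment(fragmentHtml) {
  return `<div class="viz-shell">
<style>
  .viz-shell { font-family: system-ui, sans-serif;
               color: #1a1a1a; background: #ffffff;
               border-radius: 8px; padding: 1rem; }
  /* Flip colors when the host page prefers dark mode */
  @media (prefers-color-scheme: dark) {
    .viz-shell { color: #e8e8e8; background: #1e1e1e; }
  }
</style>
${fragmentHtml}
</div>`;
}

// Usage: the model only has to produce the inner fragment.
const shell = wrapFragment("<p>Requests per second: 1,240</p>");
```

The nice property of this pattern is that the model never writes theming boilerplate; it just emits the fragment and the wrapper handles presentation consistently.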
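And for the bridge itself, a rough sketch of the mechanism the post describes: an element inside the rendered artifact posts a structured message, and the embedding chat page turns it into a follow-up prompt for the model. The payload shape and function names here are assumptions, not the plugin's actual protocol.

```javascript
// Hypothetical sketch of a viz-to-chat bridge using postMessage.
// buildBridgeMessage / vizBridgeSend and the payload fields are
// illustrative assumptions, not the plugin's real API.

function buildBridgeMessage(action, detail) {
  // Normalize a UI event into a payload the chat host can filter on
  // and translate into a message for the model.
  return {
    type: "inline-visualizer",  // namespace tag for the host listener
    action,                     // e.g. "node-clicked", "quiz-submitted"
    detail,                     // arbitrary data from the element
  };
}

function vizBridgeSend(action, detail) {
  // Inside the rendered artifact, hand the event up to the chat page.
  window.parent.postMessage(buildBridgeMessage(action, detail), "*");
}

// Example wiring: clicking a diagram node asks the model about it.
// document.getElementById("db-node").addEventListener("click", () =>
//   vizBridgeSend("node-clicked", { component: "PostgreSQL" }));
```

On the chat side, a `message` event listener would filter on `type === "inline-visualizer"` and inject something like "The user clicked the PostgreSQL node, explain that component" into the conversation.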
What you need: a self-hosted Open WebUI instance and any model that supports tool calling.
I've been testing with Claude Haiku and Qwen3.5 27b, but honestly the real fun is running it with local models. If your model can write decent HTML, it can use this. Obviously, this plugin is way cooler if you get high TPS from your local model; if you only get single-digit TPS, you might be waiting a good minute for your rendered artifact to appear!

Download + Installation Guide

The plugin (tool + skill) is here: https://github.com/Classic298/open-webui-plugins

BSD-3 licensed. Fork it, modify it, do whatever you want with it.

Note: the demo video uses Claude Haiku because it's fast and cheap for recording demos. The whole point of this tool is that it works with any model: if your model can write HTML and use tool calling, it'll work. Haiku just made my recording session quicker. I've tested it with Qwen3.5 27b too, and it worked well, but it was a bit too slow on my machine.
Your local model can now render interactive charts, clickable diagrams, and forms that talk back to the AI — no cloud required
Reddit r/LocalLLaMA / 3/21/2026
Key Points
- Inline Visualizer lets any local AI model with tool calling render interactive HTML/SVG visualizations directly in chat without cloud services or iframes.
- It uses a JS bridge to allow UI elements inside visuals to send messages back to the chat, enabling actions like clicking a diagram node to query the AI or having a quiz graded by the model.
- The tool supports Claude, Qwen, Mistral, Gemma, DeepSeek, Gemini, GPT and other models with tool calling, and runs with self-hosted Open WebUI.
- It showcases renderable items such as architecture diagrams, charts, quizzes, preference forms, and expandable explainers with dark/light theming and no external dependencies.
- It is BSD-3 licensed, designed for local use, and claims installation in under a minute with minimal setup.