Browser-use agents tend to rely on the models' native multimodality rather than the concrete page source, and even when they do read the source, they take too much context to function at all. I kept running into this problem when building LLM agents; then I came up with an idea. What if I can just... send the rendered DOM to the agent, but with markdown-like compression? Turns out, it works! In my experiments it reduces token consumption by thirty-two times on GitHub (vs. the raw DOM) while taking only ~30ms to parse. It also comes with 18 tools for LLMs to work interactively with pages, and they work with whatever model you're using, as long as it has tool-calling capabilities. It works with both CLI and MCP. It's still an early project (v0.3), so I'd like to hear more feedback. npm: https://www.npmjs.com/package/@tidesurf/core — Experiment metrics: Tested HW: Tested env: Numbers (raw DOM vs. TideSurf) — edit: numbers
Web use agent harness w/ 30x token reduction, 12x TTFT reduction w/ Qwen 3.5 9B on potato device (And no, I did not use vision capabilities)
Reddit r/LocalLLaMA / 3/28/2026
💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Ideas & Deep Analysis · Tools & Practical Usage
Key Points
- Browser-use agents can consume excessive tokens and context when given raw rendered pages, so the author proposes compressing the rendered DOM into a markdown-like format before sending it to the agent.
- In experiments with Qwen 3.5 9B on limited “potato” hardware, the approach reportedly cuts token consumption by ~32x versus raw DOM and reduces TTFT from ~106s to ~8.4s, while adding only about ~30ms of parsing time.
- The project (“@tidesurf/core”, v0.3) includes 18 interactive page tools that work across models as long as the model supports tool calling, and it supports both CLI and MCP integrations.
- The post emphasizes that results are based on early testing/experiments and invites additional community feedback to validate and improve the harness.
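The core idea in the post — flattening the rendered DOM into a markdown-like text before handing it to the agent — can be sketched roughly as below. This is an illustrative, stdlib-only sketch under my own assumptions, not TideSurf's actual parser: the chosen tag set, the `[button]`/`[input]` markers, and the output format are all made up for demonstration.

```python
# Illustrative sketch only (NOT TideSurf's real algorithm): compress rendered
# HTML into markdown-like text by keeping semantic/interactive elements and
# dropping scripts, styles, and attributes. Standard library only.
from html.parser import HTMLParser


class MarkdownCompressor(HTMLParser):
    SKIP = {"script", "style", "svg", "noscript", "template"}

    def __init__(self) -> None:
        super().__init__()
        self.out: list[str] = []
        self.skip_depth = 0  # >0 while inside a skipped subtree
        self.href = ""

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1
            return
        a = dict(attrs)
        if tag == "a":
            self.href = a.get("href") or ""
            self.out.append("[")
        elif tag in ("h1", "h2", "h3"):
            self.out.append("\n" + "#" * int(tag[1]) + " ")
        elif tag == "li":
            self.out.append("\n- ")
        elif tag == "button":
            self.out.append("\n[button] ")
        elif tag == "input":
            self.out.append(f"\n[input name={a.get('name', '')}] ")

    def handle_endtag(self, tag):
        if tag in self.SKIP:
            self.skip_depth = max(0, self.skip_depth - 1)
        elif tag == "a":
            if self.out:  # close the link: "[text](href)"
                self.out[-1] = self.out[-1].rstrip()
            self.out.append(f"]({self.href}) ")
        elif tag in ("p", "div", "h1", "h2", "h3", "li"):
            self.out.append("\n")

    def handle_data(self, data):
        if self.skip_depth == 0:
            text = " ".join(data.split())  # collapse whitespace
            if text:
                self.out.append(text + " ")


def compress(html: str) -> str:
    """Return a compact markdown-like rendering of an HTML fragment."""
    p = MarkdownCompressor()
    p.feed(html)
    lines = "".join(p.out).split("\n")
    return "\n".join(line.strip() for line in lines if line.strip())


demo = '<body><h1>Repo</h1><p>Hello <a href="/x">link</a></p><button>Go</button></body>'
print(compress(demo))
# prints:
# # Repo
# Hello [link](/x)
# [button] Go
```

The token savings come from discarding attributes, inline styles, and script bodies, which dominate a raw DOM dump; the markdown-like residue keeps only the text and the interaction points an agent actually needs to act on a page.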
Related Articles

Black Hat Asia
AI Business
Built a mortgage OCR system that hit 100% final accuracy in production (US/UK underwriting)
Reddit r/LocalLLaMA

I Created a Pagination Challenge… And AI Missed the Real Problem
Dev.to

Xata Has a Free Serverless Database — PostgreSQL With Built-in Search, Analytics, and AI
Dev.to

The Real Stack Behind AI Agents in Production — MCP, Kubernetes, and What Nobody Tells You
Dev.to