Bian Que: An Agentic Framework with Flexible Skill Arrangement for Online System Operations
arXiv cs.AI / 4/30/2026
Key Points
- The paper introduces Bian Que, an agentic framework aimed at reducing the heavy human workload of operating large-scale online systems (search, recommendation, advertising) by improving how agents orchestrate data and operational knowledge.
- It argues the core deployment bottleneck for LLM agents in O&M is orchestration: mapping each event to the right metrics, logs, and change data, and to the right handbook and practitioner knowledge. Indiscriminately feeding all available signals dilutes the relevant context and leads to hallucinations.
- Bian Que provides a unified operational paradigm that abstracts O&M into three canonical patterns: release interception, proactive inspection, and alert root-cause analysis.
- It uses “Flexible Skill Arrangement”: Skills declare which specific data and knowledge to retrieve for each business-module context, and can be generated or updated by LLMs or refined via natural-language instructions from on-call engineers.
- In experiments on KuaiShou’s e-commerce search engine, the system reduced alert volume by 75%, improved root-cause analysis accuracy to 80%, cut mean time to resolution by over 50%, and achieved a 99.0% offline evaluation pass rate, with code released on GitHub.
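The "Flexible Skill Arrangement" idea described above can be illustrated with a minimal sketch. All names here (`Skill`, `SkillRegistry`, the field names, and the example modules) are hypothetical illustrations, not the paper's actual API: the point is only that each Skill declares a scoped subset of signals and knowledge per business module and operational pattern, so the agent retrieves only what is relevant instead of every available signal.

```python
from dataclasses import dataclass, field

# Hypothetical sketch (names are not from the paper): each Skill declares
# which data and knowledge to retrieve for one business module and one of
# the three canonical patterns, avoiding indiscriminate signal feeding.

@dataclass
class Skill:
    name: str
    pattern: str   # "release_interception" | "proactive_inspection" | "alert_rca"
    module: str    # business module this Skill is scoped to
    data_sources: list = field(default_factory=list)    # metrics/logs/change data
    knowledge_refs: list = field(default_factory=list)  # handbook/practitioner notes

class SkillRegistry:
    def __init__(self):
        self._skills = []

    def register(self, skill: Skill):
        self._skills.append(skill)

    def arrange(self, pattern: str, module: str):
        """Select only the Skills relevant to this event's pattern and module."""
        return [s for s in self._skills
                if s.pattern == pattern and s.module == module]

registry = SkillRegistry()
registry.register(Skill(
    name="search-latency-rca",
    pattern="alert_rca",
    module="ecommerce_search",
    data_sources=["qps_metrics", "error_logs", "recent_changes"],
    knowledge_refs=["latency-runbook"],
))
registry.register(Skill(
    name="ad-release-check",
    pattern="release_interception",
    module="advertising",
    data_sources=["canary_metrics"],
))

selected = registry.arrange("alert_rca", "ecommerce_search")
print([s.name for s in selected])  # → ['search-latency-rca']
```

In this sketch, refinement via natural-language instructions would correspond to an LLM editing a Skill's `data_sources` and `knowledge_refs` declarations rather than the agent's core loop.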