AI Navigate

Orla: A Library for Serving LLM-Based Multi-Agent Systems

arXiv cs.AI / 3/17/2026

📰 News · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • Orla is a library for constructing and running LLM-based agentic systems, enabling workflows that combine multiple inference steps, tool calls, and heterogeneous backends.
  • It acts as a serving layer above existing LLM inference engines by separating request execution from workflow-level policy: developers define workflows while Orla manages mapping and coordination.
  • Orla provides three main controls for agents: a stage mapper to assign each stage to an appropriate model and backend, a workflow orchestrator to schedule stages and manage resources and context, and a memory manager to handle inference state such as the KV cache across workflow boundaries.
  • The paper demonstrates Orla with a customer support workflow and reports that stage mapping reduces latency and cost compared to a single-model baseline, while memory/cache management lowers time-to-first-token.
  • Overall, Orla aims to simplify building complex multi-agent LLM workflows and optimize performance through cross-model orchestration and state management.
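To make the separation of workflow definition from stage mapping concrete, here is a minimal sketch in plain Python. This is not Orla's actual API; the `Stage` class, the mapping policy, and the model/backend names are all assumptions invented for illustration.

```python
# Hypothetical sketch (not Orla's real API): a workflow is a list of
# stages, and a separate "stage mapper" policy decides which model and
# backend each stage runs on.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    prompt: str
    complexity: str  # "low" or "high"; consumed by the mapper policy

# Workflow-level definition: what to do, not where to run it.
workflow = [
    Stage("classify_ticket", "Categorize this support ticket: ...", "low"),
    Stage("draft_reply", "Draft a detailed reply to the customer: ...", "high"),
]

def map_stage(stage: Stage) -> str:
    # Policy: route simple stages to a cheap model, complex ones to a
    # larger model on different hardware.
    return "small-model@gpu0" if stage.complexity == "low" else "large-model@gpu1"

plan = {s.name: map_stage(s) for s in workflow}
print(plan)
# {'classify_ticket': 'small-model@gpu0', 'draft_reply': 'large-model@gpu1'}
```

The point of the split is that the workflow code never names a model; swapping the policy (e.g. routing everything to one model as a baseline) requires no change to the stages themselves.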

Abstract

We introduce Orla, a library for constructing and running LLM-based agentic systems. Modern agentic applications consist of workflows that combine multiple LLM inference steps, tool calls, and heterogeneous infrastructure. Today, developers typically build these systems by manually composing orchestration code with LLM serving engines and tool execution logic. Orla provides a general abstraction that separates request execution from workflow-level policy. It acts as a serving layer above existing LLM inference engines: developers define workflows composed of stages, while Orla manages how those stages are mapped, executed, and coordinated across models and backends. It provides agent-level control through three mechanisms: a stage mapper, which assigns each stage to an appropriate model and backend; a workflow orchestrator, which schedules stages and manages their resources and context; and a memory manager, which manages inference state such as the KV cache across workflow boundaries. We demonstrate Orla with a customer support workflow that exercises many of its capabilities. We evaluate Orla on two datasets, showing that stage mapping improves latency and cost compared to a single-model vLLM baseline, while workflow-level cache management reduces time-to-first-token.
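The abstract's claim that workflow-level cache management reduces time-to-first-token follows from a simple cost model: TTFT is dominated by prefill, and if successive stages share a prompt prefix whose KV cache survives across workflow boundaries, prefill only pays for the new tokens. The sketch below is a back-of-envelope illustration with an assumed per-token prefill cost; the numbers are not from the paper.

```python
# Back-of-envelope model (assumed numbers, not from the paper): TTFT is
# approximated as prefill cost over the tokens NOT already in the KV cache.

PREFILL_MS_PER_TOKEN = 0.5  # assumed per-token prefill latency

def time_to_first_token(prompt_tokens: int, cached_prefix_tokens: int = 0) -> float:
    # Only tokens outside the retained KV cache need prefill.
    return (prompt_tokens - cached_prefix_tokens) * PREFILL_MS_PER_TOKEN

cold = time_to_first_token(2000)        # cache discarded between stages
warm = time_to_first_token(2000, 1800)  # 1800-token shared prefix retained
print(f"cold TTFT: {cold} ms, warm TTFT: {warm} ms")
# cold TTFT: 1000.0 ms, warm TTFT: 100.0 ms
```

Under this toy model, retaining a 90% shared prefix cuts TTFT by 10x, which is the mechanism a memory manager spanning workflow boundaries exploits.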