Budget-Constrained Online Retrieval-Augmented Generation: The Chunk-as-a-Service Model

arXiv cs.LG / 5/1/2026


Key Points

  • The paper argues that while Retrieval-Augmented Generation (RAG) improves LLM reliability, existing RAG-as-a-Service (RaaS) pricing and access models can be opaque and inefficient because they charge based on prompts rather than the relevance/quality of retrieved chunks.
  • It proposes “Chunk-as-a-Service” (CaaS) as a more transparent and cost-effective alternative, offering two variants: Open-Budget CaaS (OB-CaaS) and Limited-Budget CaaS (LB-CaaS).
  • The authors introduce the Utility-Cost Online Selection Algorithm (UCOSA), which enables LB-CaaS by selectively enriching a subset of prompts online while respecting budget constraints and the utility–cost tradeoff.
  • Experiments evaluate UCOSA against offline and relevance-greedy baselines using a metric that multiplies the number of enriched prompts by their average relevance; UCOSA outperforms random selection by roughly 52% and reaches about 75% of the performance of offline selection.
  • The results also indicate better budget efficiency than RaaS: LB-CaaS and OB-CaaS achieve performance-to-budget ratios of 140% and 86%, respectively, demonstrating improved cost-effectiveness and accessibility for retrieval-enhanced generation.
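The core idea behind the points above, deciding prompt by prompt whether the utility of enrichment justifies its cost under a shrinking budget, can be sketched with a simple greedy threshold rule. This is purely illustrative: UCOSA's actual decision rule is defined in the paper, and the `threshold` knob and relevance/cost inputs here are hypothetical stand-ins.

```python
# Illustrative sketch only: a hypothetical online utility-cost selection rule,
# NOT the paper's UCOSA. Each prompt arrives with an estimated chunk relevance
# (utility) and a retrieval cost; we must decide immediately whether to enrich.

def select_online(prompts, budget, threshold=1.5):
    """Decide, one prompt at a time, whether to enrich it with retrieved chunks.

    prompts:   iterable of (relevance, cost) pairs observed online.
    budget:    total spend allowed on chunk retrieval.
    threshold: minimum relevance-per-unit-cost to accept (hypothetical knob).
    Returns the indices of enriched prompts and the total amount spent.
    """
    enriched = []
    remaining = budget
    for i, (relevance, cost) in enumerate(prompts):
        # Enrich only if the chunk fits the remaining budget and its
        # utility-cost ratio clears the acceptance threshold.
        if cost <= remaining and relevance / cost >= threshold:
            enriched.append(i)
            remaining -= cost
    return enriched, budget - remaining

picked, spent = select_online([(0.9, 0.5), (0.2, 0.4), (0.8, 0.3)], budget=1.0)
# Prompts 0 and 2 clear the ratio test and fit the budget; prompt 1 does not.
```

A fixed threshold is the simplest possible policy; an online algorithm like the one the paper describes would presumably adapt its acceptance criterion as the budget depletes.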

Abstract

Large Language Models (LLMs) have revolutionized the field of natural language processing. However, they exhibit some limitations, including a lack of reliability and transparency: they may hallucinate and fail to provide sources that support the generated output. Retrieval-Augmented Generation (RAG) was introduced to address such limitations in LLMs. One popular implementation, RAG-as-a-Service (RaaS), has shortcomings that hinder its adoption and accessibility. For instance, RaaS pricing is based on the number of submitted prompts, without considering whether the prompts are enriched by relevant chunks, i.e., text segments retrieved from a vector database, or the quality of the utilized chunks (i.e., their degree of relevance). This results in an opaque and less cost-effective payment model. We propose Chunk-as-a-Service (CaaS) as a transparent and cost-effective alternative. CaaS includes two variants: Open-Budget CaaS (OB-CaaS) and Limited-Budget CaaS (LB-CaaS), which is enabled by our "Utility-Cost Online Selection Algorithm (UCOSA)". UCOSA further extends the cost-effectiveness and the accessibility of the OB-CaaS variant by enriching, in an online manner, a subset of the submitted prompts based on budget constraints and the utility-cost tradeoff. Our experiments demonstrate the efficacy of the proposed UCOSA compared to both offline and relevance-greedy selection baselines. In terms of the performance metric, the number of enriched prompts (NEP) multiplied by the Average Relevance (AR), UCOSA outperforms random selection by approximately 52% and achieves around 75% of the performance of offline selection methods. Additionally, in terms of budget utilization, LB-CaaS and OB-CaaS achieve higher performance-to-budget ratios of 140% and 86%, respectively, compared to RaaS, indicating their superior efficiency.
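The abstract's evaluation metric, NEP multiplied by AR, can be computed directly from per-prompt relevance scores. The sketch below is an assumed reading of that definition (the paper may score relevance differently); `None` marks a prompt that was left unenriched.

```python
# Hedged sketch of the abstract's performance metric: the Number of Enriched
# Prompts (NEP) multiplied by the Average Relevance (AR) of those prompts.
# The input representation (None = prompt not enriched) is our assumption.

def nep_times_ar(relevances):
    """Compute NEP x AR from a list of per-prompt relevance scores.

    relevances: one entry per submitted prompt; a float relevance score if
    the prompt was enriched, or None if it was skipped.
    """
    enriched = [r for r in relevances if r is not None]
    if not enriched:
        return 0.0
    nep = len(enriched)            # Number of Enriched Prompts
    ar = sum(enriched) / nep       # Average Relevance over enriched prompts
    return nep * ar

score = nep_times_ar([0.8, None, 0.6, 0.7])  # NEP = 3, AR = 0.7, score = 2.1
```

Note that NEP × AR algebraically equals the sum of relevance scores over enriched prompts, so the metric rewards both enriching more prompts and enriching them with higher-relevance chunks.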