AI Navigate

LooComp: Leverage Leave-One-Out Strategy to Encoder-only Transformer for Efficient Query-aware Context Compression

arXiv cs.CL · March 11, 2026

Tools & Practical Usage · Models & Research

Key Points

  • LooComp introduces a margin-based framework that uses a leave-one-out strategy to identify the sentences critical for query-aware context compression, improving question-answering accuracy and efficiency (a minimal sketch of the leave-one-out idea follows this list).
  • The method leverages a lightweight encoder-only Transformer trained with a composite ranking loss to differentiate between critical and non-critical sentences, ensuring precise clue retention.
  • This approach achieves strong exact-match and F1 scores with high-throughput inference and reduced memory usage compared to major baselines, making it suitable for scalable retrieval-augmented generation tasks.
  • LooComp offers effective compression ratios without compromising answer quality, positioning it as a practical and efficient alternative for context delivery in large language model applications.
  • The framework's focus on efficiency and scalability directly supports fast, cost-effective retrieval-augmented systems, addressing latency and cost challenges in current LLM pipelines.
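
The paper's exact clue-richness scorer is not reproduced in this summary; the following minimal Python sketch only illustrates the leave-one-out idea, assuming a hypothetical clue_score(query, context) function that returns higher values when the context contains richer answer clues. The function names, the sentence-join strategy, and the threshold are all illustrative.

```python
def leave_one_out_margins(query, sentences, clue_score):
    """Margin for each sentence: the drop in clue richness when it is omitted."""
    full = clue_score(query, " ".join(sentences))
    margins = []
    for i in range(len(sentences)):
        # Remove sentence i and re-score the remaining context.
        ablated = " ".join(sentences[:i] + sentences[i + 1:])
        # A large positive margin means the sentence carried critical clues.
        margins.append(full - clue_score(query, ablated))
    return margins

def compress(query, sentences, clue_score, threshold=0.0):
    """Keep only sentences whose omission would reduce clue richness."""
    margins = leave_one_out_margins(query, sentences, clue_score)
    return " ".join(s for s, m in zip(sentences, margins) if m > threshold)
```

Presumably these margins serve as the training signal for the encoder, so that at inference time the model scores sentences directly and the costly per-sentence ablations are never run, which is consistent with the high-throughput inference the paper reports.
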


arXiv:2603.09222 (cs)
[Submitted on 10 Mar 2026]

Title: LooComp: Leverage Leave-One-Out Strategy to Encoder-only Transformer for Efficient Query-aware Context Compression

Authors: Thao Do and 4 other authors
Abstract: Efficient context compression is crucial for improving the accuracy and scalability of question answering. For efficient Retrieval-Augmented Generation, context should be delivered quickly, compactly, and precisely, ensuring clue sufficiency while keeping the LLM reader's cost within budget. We propose a margin-based framework for query-driven context pruning that identifies the sentences critical for answering a query by measuring the change in clue richness when each is omitted. The model is trained with a composite ranking loss that enforces large margins for critical sentences while keeping non-critical ones near neutral. Built on a lightweight encoder-only Transformer, our approach generally achieves strong exact-match and F1 scores with high-throughput inference and lower memory requirements than major baselines. Beyond efficiency, our method yields effective compression ratios without degrading answering performance, demonstrating its potential as a lightweight, practical alternative for retrieval-augmented tasks.
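
As an illustration of the kind of composite ranking loss the abstract describes (large margins for critical sentences, near-neutral scores for non-critical ones), here is a PyTorch sketch. The paper's exact formulation is not given in this summary, so the pairwise hinge form, the margin value, and the neutral_weight term weighting are all hypothetical.

```python
import torch

def composite_ranking_loss(scores, critical, margin=1.0, neutral_weight=0.5):
    """Illustrative composite loss over per-sentence scores.

    scores:   (n,) tensor of encoder outputs, one per sentence.
    critical: (n,) boolean mask marking critical sentences.
    Assumes the batch contains both critical and non-critical sentences.
    """
    crit = scores[critical]
    noncrit = scores[~critical]
    # Ranking term: every critical sentence should outscore every
    # non-critical sentence by at least `margin` (pairwise hinge).
    pairwise = crit.unsqueeze(1) - noncrit.unsqueeze(0)  # (n_crit, n_noncrit)
    ranking = torch.relu(margin - pairwise).mean()
    # Neutrality term: push non-critical scores toward zero.
    neutrality = noncrit.pow(2).mean()
    return ranking + neutral_weight * neutrality

# Example: four sentences, the first and third carry answer clues.
scores = torch.tensor([2.1, -0.1, 1.8, 0.3])
critical = torch.tensor([True, False, True, False])
loss = composite_ranking_loss(scores, critical)
```

The hinge over all critical/non-critical pairs enforces the margin, while the squared penalty keeps non-critical scores near neutral, matching the two behaviors the abstract attributes to the loss.
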
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2603.09222 [cs.CL]
  (or arXiv:2603.09222v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2603.09222

Submission history

From: Thao Do
[v1] Tue, 10 Mar 2026 05:44:20 UTC (1,373 KB)