RECOMP (**Re**trieve, **Com**press, **P**repend) is a method that compresses retrieved documents into a textual summary before prepending it as input to a language model at inference time.
![[Pasted image 20240903134444.png]]
The compressed summary guides the LM to generate the correct answer, while significantly reducing the computation costs required to encode the documents.
Related: [[Summarization Agorithms]]
Idea: removing the summarization module could improve latency in [[Retrieval Augmented Generation (RAG)]]
---
Retrieving documents and prepending them in-context at inference time improves the performance of language models (LMs) on a wide range of tasks. However, these documents, often spanning hundreds of words, make inference substantially more expensive.
We propose compressing the retrieved documents into textual summaries prior to in-context integration. This not only reduces the computational costs but also relieves the burden of LMs to identify relevant information in long retrieved documents.
We present two compressors:
1. an extractive compressor which selects useful sentences from retrieved documents
2. an abstractive compressor which generates summaries by synthesizing information from multiple documents.
Both compressors are trained to improve LMs’ performance on end tasks when the generated summaries are prepended to the LMs’ input, while keeping the summary concise.
If the retrieved documents are irrelevant to the input or offer no additional information to the LM, our compressor can return an empty string, implementing selective augmentation.
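The pipeline above can be sketched in a few lines. This is a toy illustration, not the paper's trained compressor: the extractive step is replaced with a simple keyword-overlap heuristic, and the function names (`extractive_compress`, `build_prompt`) are hypothetical. It shows the three stages (retrieve is assumed done, compress, prepend) and the selective-augmentation behavior of returning an empty summary for irrelevant documents.

```python
# Toy sketch of the RECOMP flow: compress retrieved documents, then
# prepend the summary to the query. The real extractive compressor is a
# trained model; here a keyword-overlap score stands in for it.

def extractive_compress(query: str, documents: list[str], max_sentences: int = 2) -> str:
    """Select the sentences that share the most words with the query.

    Returns an empty string when no sentence overlaps the query at all,
    mimicking selective augmentation.
    """
    query_words = set(query.lower().split())
    scored = []
    for doc in documents:
        for sentence in doc.split(". "):
            overlap = len(query_words & set(sentence.lower().split()))
            if overlap > 0:
                scored.append((overlap, sentence))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return ". ".join(sentence for _, sentence in scored[:max_sentences])

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend the compressed summary; skip augmentation if it is empty."""
    summary = extractive_compress(query, documents)
    if not summary:  # irrelevant documents -> no in-context augmentation
        return query
    return f"{summary}\n\n{query}"
```

In the real system the resulting prompt is fed to a frozen LM, and the compressor is trained end-to-end so that the concise summary preserves the LM's end-task accuracy.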
[RECOMP: IMPROVING RETRIEVAL-AUGMENTED LMS WITH COMPRESSION AND SELECTIVE AUGMENTATION](https://arxiv.org/abs/2310.04408)