- **DLM Reranking:** uses classification ([[Cross-encoder]], possibly combined with a [[Bi-encoder]]). This method leverages deep language models (DLMs) as a [[Re-ranker]]: the model is fine-tuned to classify a document's relevance to a query as "true" or "false". During fine-tuning, it is trained on concatenated query–document inputs labeled by relevance. At inference, documents are ranked by the probability assigned to the "true" token.
- **[[TILDE]] Reranking:** focuses on query likelihood. TILDE computes the likelihood of each query term independently by predicting token probabilities over the model's vocabulary. Documents are scored by summing the pre-computed log probabilities of the query's tokens, enabling rapid reranking at inference. [[TILDEv2]] improves on this by indexing only tokens that appear in the document, training with NCE loss, and applying document expansion, which improves effectiveness and reduces index size.
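The DLM reranking step, ranking by the probability of the "true" token, can be sketched as follows. This is a minimal illustration, not a real model: `toy_score` is a hypothetical stand-in that returns a pair of class logits, where a fine-tuned cross-encoder would produce them from the concatenated query–document input.

```python
import math

def rank_by_true_prob(query, docs, score_fn):
    """Rank documents by P("true") from a binary relevance classifier.

    score_fn(query, doc) returns raw logits (logit_false, logit_true),
    as a fine-tuned DLM reranker would for the two relevance labels.
    """
    def p_true(logits):
        logit_false, logit_true = logits
        # Softmax over the two class logits -> probability of "true".
        return math.exp(logit_true) / (math.exp(logit_true) + math.exp(logit_false))

    scored = [(doc, p_true(score_fn(query, doc))) for doc in docs]
    # Highest P("true") first.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy stand-in for a fine-tuned model: logits from token overlap (illustrative only).
def toy_score(query, doc):
    overlap = len(set(query.split()) & set(doc.split()))
    return (1.0, float(overlap))

ranked = rank_by_true_prob(
    "neural reranking",
    ["neural reranking models", "a fruit salad recipe"],
    toy_score,
)
```

The real scoring function would run the cross-encoder once per query–document pair, which is why this family of rerankers is accurate but comparatively slow at inference.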
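TILDE-style scoring, summing pre-computed log probabilities of query tokens, can be sketched as below. This is an assumption-laden stand-in: `build_token_logprobs` uses smoothed term frequencies in place of the DLM-predicted token distribution, and both function names are illustrative, but the inference-time step (a cheap sum of cached log probabilities) mirrors the technique.

```python
import math
from collections import Counter

def build_token_logprobs(doc, vocab, smoothing=0.5):
    """Offline step: precompute log P(token | doc) for every vocabulary token.

    A real TILDE model predicts these probabilities with a DLM head; here we
    substitute smoothed term frequencies purely for illustration.
    """
    counts = Counter(doc.split())
    total = sum(counts.values()) + smoothing * len(vocab)
    return {t: math.log((counts[t] + smoothing) / total) for t in vocab}

def tilde_score(query, doc_logprobs):
    """Inference step: sum the cached log probabilities of the query's tokens."""
    return sum(doc_logprobs[t] for t in query.split() if t in doc_logprobs)

docs = ["deep learning reranking", "fruit salad recipe"]
vocab = {t for d in docs for t in d.split()}
# Precompute once per document (the expensive part happens offline).
indexes = [build_token_logprobs(d, vocab) for d in docs]
scores = [tilde_score("deep reranking", idx) for idx in indexes]
```

Because all per-token probabilities are computed at indexing time, the query-time cost is just a lookup and a sum, which is the source of TILDE's speed; TILDEv2's changes (document-only tokens, NCE loss, expansion) shrink the cached index and improve ranking quality.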