Retrieval-Augmented Methods (RAG)
Overviews
Papers
- Rubin et al. 2021 - "Learning To Retrieve Prompts for In-Context Learning". Is this the first retrieval-based in-context learning paper?
- Poesia et al. 2022 - "Synchromesh: Reliable Code Generation from Pre-trained Language Models". One of the first retrieval-based in-context learning papers.
- Izacard et al. 2022 - "Few-shot Learning with Retrieval Augmented Language Models" (Atlas). Learns the retrieval model jointly with the language model.
- Lin et al. 2023 - "RA-DIT: Retrieval-Augmented Dual Instruction Tuning". Fine-tunes a language model so it works better in a retrieval-augmented setup. Best performance occurs after a small number of fine-tuning steps (<500 steps!).
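The common thread in the papers above is retrieving relevant examples or passages and prepending them to the prompt. Here is a minimal, hypothetical sketch of that loop using a toy bag-of-words cosine retriever; real systems (e.g. Rubin et al., Atlas) use learned dense retrievers instead, and all names here are illustrative, not from any of the papers.

```python
# Toy retrieval-augmented prompting sketch (illustrative only).
# A bag-of-words cosine retriever stands in for a learned dense retriever.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two token-count vectors.
    dot = sum(count * b.get(tok, 0) for tok, count in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, pool: list[str], k: int = 2) -> list[str]:
    # Rank candidate in-context examples by similarity to the query.
    q = Counter(query.lower().split())
    ranked = sorted(pool,
                    key=lambda doc: cosine(q, Counter(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, pool: list[str], k: int = 2) -> str:
    # Prepend the top-k retrieved examples to the query; the result
    # would then be fed to a frozen language model.
    return "\n".join(retrieve(query, pool, k) + [query])

pool = [
    "Translate 'chat' to English: cat",
    "Translate 'chien' to English: dog",
    "Summarize: the meeting was postponed.",
]
print(build_prompt("Translate 'cheval' to English:", pool, k=2))
```

The point of the sketch: the retriever decides which demonstrations the model sees, so improving retrieval (by learning it, as in Rubin et al. and Atlas) directly improves in-context performance.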
During Pre-Training
Related Pages
nlp/retrieval-augmented_methods.txt · Last modified: 2025/05/13 19:39 by jmflanig