Retrieval-Augmented Methods
Papers
- Rubin et al 2021 - Learning To Retrieve Prompts for In-Context Learning. Possibly the first retrieval-based in-context learning paper?
- Poesia et al 2022 - Synchromesh: Reliable code generation from pre-trained language models. One of the first retrieval-based in-context learning papers.
- Izacard et al 2022 - Few-shot Learning with Retrieval Augmented Language Models. Learns the retrieval model.
- Lin et al 2023 - RA-DIT: Retrieval-Augmented Dual Instruction Tuning. Fine-tunes a language model so it performs better in retrieval-augmented use. Best performance occurs after a small number of fine-tuning steps (<500 steps!).
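The retrieval-based in-context learning idea in the papers above (e.g. Rubin et al., Poesia et al.) can be sketched minimally: given a test query, retrieve the most similar training examples and prepend them to the prompt as few-shot demonstrations. The sketch below is illustrative only — the function names and the toy example pool are made up, and it uses simple bag-of-words cosine similarity where real systems (like Rubin et al.'s) use a learned dense retriever.

```python
from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve_examples(query, pool, k=2):
    # Return the k pool examples most similar to the query
    # (stand-in for a learned retriever).
    q = Counter(query.lower().split())
    scored = sorted(
        pool,
        key=lambda ex: cosine(q, Counter(ex["input"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, examples):
    # Format retrieved examples as few-shot demonstrations,
    # then append the test query.
    shots = "\n".join(
        f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples
    )
    return f"{shots}\nInput: {query}\nOutput:"

# Toy example pool (hypothetical data).
pool = [
    {"input": "translate cat to French", "output": "chat"},
    {"input": "translate dog to French", "output": "chien"},
    {"input": "sum 2 and 3", "output": "5"},
]

query = "translate bird to French"
prompt = build_prompt(query, retrieve_examples(query, pool, k=2))
```

The retriever here picks the two translation examples over the arithmetic one, so the prompt the language model sees contains in-domain demonstrations.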
During Pre-Training or Fine-Tuning
Related Pages
nlp/retrieval-augmented_methods.1705700950.txt.gz · Last modified: 2024/01/19 21:49 by jmflanig