====== Retrieval-Augmented Methods (RAG) ======

===== Overviews =====
  * [[https://arxiv.org/pdf/2312.10997|Gao et al 2023 - Retrieval-Augmented Generation for Large Language Models: A Survey]]
  
===== Papers =====
  * [[https://arxiv.org/pdf/2305.06983.pdf|Jiang et al 2023 - Active Retrieval Augmented Generation]]
  * [[https://arxiv.org/pdf/2208.03299.pdf|Izacard et al 2022 - Few-shot Learning with Retrieval Augmented Language Models]] Learns the retrieval model
  * **[[https://arxiv.org/pdf/2310.01352.pdf|Lin et al 2023 - RA-DIT: Retrieval-Augmented Dual Instruction Tuning]]** Fine-tunes a language model so it's better for retrieval-augmented use. Best performance occurs after a small number of fine-tuning steps (<500 steps!).
  * [[https://arxiv.org/pdf/2401.14021.pdf|Zhang et al 2024 - Accelerating Retrieval-Augmented Language Model Serving with Speculation]]
  * [[https://arxiv.org/pdf/2502.16101|Zheng et al 2025 - Worse than Zero-shot? A Fact-Checking Dataset for Evaluating the Robustness of RAG Against Misleading Retrievals]]
  
==== During Pre-Training ====
  * [[https://arxiv.org/pdf/2002.08909.pdf|Guu et al 2020 - REALM: Retrieval-Augmented Language Model Pre-Training]]
  * RETRO: [[https://arxiv.org/pdf/2112.04426.pdf|Borgeaud et al 2021 - Improving Language Models by Retrieving from Trillions of Tokens]]
  * [[https://arxiv.org/pdf/2304.06762.pdf|Wang et al 2023 - Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study]]
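
The retrieve-then-generate pattern shared by the papers above can be sketched in a few lines. This is a toy illustration only: retrieval here is plain word-overlap scoring over an in-memory corpus (real systems use dense or learned retrievers, e.g. the ones studied in Izacard et al 2022), the corpus strings are made up, and the prompt template is a generic example rather than any paper's exact format.

<code python>
def retrieve(query, corpus, k=2):
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, passages):
    """Prepend the retrieved passages to the query (in-context RAG)."""
    context = "\n".join(f"[{i+1}] {p}" for i, p in enumerate(passages))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical mini-corpus for demonstration.
corpus = [
    "RETRO retrieves from trillions of tokens during pre-training.",
    "RA-DIT fine-tunes the LM and retriever with dual instruction tuning.",
    "Speculative serving accelerates retrieval-augmented language models.",
]

passages = retrieve("RA-DIT fine-tunes what?", corpus)
prompt = build_prompt("What does RA-DIT fine-tune?", passages)
</code>

The resulting ''prompt'' would be fed to a language model; the pre-training-time methods below (REALM, RETRO) instead integrate the retrieval step into training rather than into the prompt.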
nlp/retrieval-augmented_methods.txt · Last modified: 2025/05/13 19:39 by jmflanig
