nlp:instruction-tuning

===== Papers =====
  * **[[https://arxiv.org/pdf/2212.09689.pdf|Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor]]** - Problem with this paper: it may be extracting instructions that were used to train davinci-002, so it is indirectly reusing the human labor that created the davinci-002 instructions.
  * [[https://arxiv.org/pdf/2301.13688.pdf|Longpre et al 2023 - The Flan Collection: Designing Data and Methods for Effective Instruction Tuning]] Two-column version [[https://openreview.net/pdf?id=ZX4uS605XV|here]]
  * [[https://arxiv.org/pdf/2305.11206.pdf|Zhou et al 2023 - LIMA: Less Is More for Alignment]] Demonstrates that strong performance can be achieved by fine-tuning on only 1,000 carefully curated training examples.
  * **[[https://aclanthology.org/2023.acl-long.754.pdf|Wang et al 2023 - Self-Instruct: Aligning Language Models with Self-Generated Instructions]]**
  * RSO: [[https://arxiv.org/pdf/2309.06657|Liu et al 2023 - Statistical Rejection Sampling Improves Preference Optimization]] Uses rejection sampling with CE loss: sample outputs, accept or reject them based on the reward, then fine-tune on the accepted ones using CE loss. Very principled and easy to implement. Reports a benefit over DPO from using a reward model.
  * [[https://arxiv.org/pdf/2312.11456|Xiong et al 2023 - Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint]] Notes that "it is also common to query human feedback during the training process. For instance, Bai et al. (2022); Touvron et al. (2023) typically iterate the RLHF process on a weekly cadence, where the fresh RLHF models are deployed to interact with crowdworkers and to collect new human preference data."
  * [[https://arxiv.org/pdf/2402.01306|Ethayarajh et al 2024 - KTO: Model Alignment as Prospect Theoretic Optimization]]
  * **[[https://aclanthology.org/2024.acl-long.662.pdf|Ahmadian et al 2024 - Back to Basics: Revisiting REINFORCE-Style Optimization for Learning from Human Feedback in LLMs]]** [[https://arxiv.org/pdf/2402.14740|arXiv version]] Shows that "PPO is not the right tool for doing RL in RLHF" and that "PPO is unnecessarily complicated for a pre-trained LLM environment."
  * [[https://arxiv.org/pdf/2404.10719|Xu et al 2024 - Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study]]
  * [[https://arxiv.org/pdf/2404.09656|Gorbatovski et al 2024 - Learn Your Reference Model for Real Good Alignment]] Shows that the reference model can be updated even in DPO (making DPO similar to PPO)
  * [[https://arxiv.org/pdf/2309.16583|Zheng et al 2023 - GPT-Fathom: Benchmarking Large Language Models to Decipher the Evolutionary Path towards GPT-4 and Beyond]]
  * Group Relative Policy Optimization (GRPO): [[https://arxiv.org/pdf/2402.03300|Shao et al 2024 - DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models]]
    * [[https://arxiv.org/pdf/2505.21178|Song & Zheng 2025 - Walk Before You Run! Concise LLM Reasoning via Reinforcement Learning]] Gives a nice overview of problems with GRPO, and some extensions
  * [[https://arxiv.org/pdf/2403.00409|Chowdhury 2024 - Provably Robust DPO: Aligning Language Models with Noisy Feedback]]
  * [[https://arxiv.org/pdf/2403.07691|Hong et al 2024 - ORPO: Monolithic Preference Optimization without Reference Model]] Similar to SimPO, below
  * [[https://arxiv.org/pdf/2407.18248|Wang et al 2024 - Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning]] Does iterative DPO training; [[https://arxiv.org/pdf/2407.21783|Llama 3.1]] does this as well (see post-training section 4, Figure 7)
  * [[https://arxiv.org/pdf/2401.10020|Yuan et al 2024 - Self-Rewarding Language Models]] From a seed instruction-tuned model, can create more instruction tuning data
  * [[https://arxiv.org/pdf/2505.20809|Wu et al 2025 - Improved Representation Steering for Language Models]] Called steering, but actually instruction tuning
  * **Multi-Dimensional Rewards**
    * [[https://arxiv.org/pdf/2311.09528|Wang et al 2023 - HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM]] Very high-quality dataset (10k examples); outperforms much larger (700K-example) datasets of lower quality.
    * [[https://arxiv.org/pdf/2402.18571|Wang et al 2024 - Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards]]
  * **Analyzing, Filtering, or Improving Preference Data**
    * [[https://arxiv.org/pdf/2505.23114|Lee et al 2025 - Dataset Cartography for Large Language Model Alignment: Mapping and Diagnosing Preference Data]] Applies dataset cartography ([[https://arxiv.org/pdf/2009.10795|Swayamdipta 2020]]) to preference data
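The RSO recipe noted above (Liu et al 2023) — sample candidate outputs, accept or reject them based on reward, then fine-tune on the accepted set with CE loss — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the ''beta'' temperature, candidate strings, and reward values are made-up assumptions.

```python
import math
import random

def rso_select(candidates, rewards, beta=0.5, num_accept=4):
    """Illustrative rejection-sampling step (simplified sketch of RSO):
    accept a sampled output with probability proportional to exp(reward / beta).
    The accepted outputs would then be used for cross-entropy fine-tuning."""
    max_r = max(rewards)  # normalize so acceptance probabilities are <= 1
    accepted = []
    while len(accepted) < num_accept:
        i = random.randrange(len(candidates))
        # higher-reward samples are accepted more often; the top-reward
        # sample is always accepted, so the loop terminates
        if random.random() < math.exp((rewards[i] - max_r) / beta):
            accepted.append(candidates[i])
    return accepted
```

Lowering ''beta'' makes selection greedier (concentrating on the highest-reward outputs), while raising it approaches uniform sampling.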
  
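GRPO (Shao et al 2024, above) replaces PPO's learned value/critic model with a group-relative baseline: sample a group of completions per prompt and normalize each completion's reward by the group's mean and standard deviation. A minimal sketch of that advantage computation (using the population standard deviation and a made-up ''eps'' for numerical safety):

```python
def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantage: z-score each sampled completion's reward
    against its own group, so no separate value model is needed."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

The resulting advantages are centered at zero within each group, so above-average completions in the group are reinforced and below-average ones are penalized, regardless of the reward scale.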
===== Datasets =====
  * [[Alignment]]
  * [[Human-In-The-Loop]]
  * [[ml:reinforcement_learning#Reinforcement Learning with Verifiable Rewards]]
  * [[human-in-the-loop#RLHF]]
  
nlp/instruction-tuning.1746402534.txt.gz · Last modified: 2025/05/04 23:48 by jmflanig