ml:learning_rate

  * **Plateau Learning Rate**
    * Decrease the learning rate when the objective reaches a plateau. See [[https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html|PyTorch - ReduceLROnPlateau]]
  * **Warm restarts** [[https://arxiv.org/pdf/1608.03983.pdf|Loshchilov & Hutter 2016 - SGDR: Stochastic Gradient Descent with Warm Restarts]] Used by nanoGPT
  * **Batch size** [[https://arxiv.org/pdf/1711.00489.pdf|Smith et al 2017 - Don't Decay the Learning Rate, Increase the Batch Size]]
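The idea behind plateau-based decay can be sketched in a few lines of plain Python. This mirrors the behavior of ReduceLROnPlateau but is not PyTorch's API; the class name, constants, and defaults below are illustrative.

```python
class PlateauDecay:
    """Halve the learning rate when the loss stops improving (illustrative sketch)."""

    def __init__(self, lr=0.1, factor=0.5, patience=3, min_lr=1e-6):
        self.lr = lr
        self.factor = factor        # multiply lr by this on a plateau
        self.patience = patience    # how many non-improving steps to tolerate
        self.min_lr = min_lr
        self.best = float("inf")
        self.bad_steps = 0

    def step(self, loss):
        """Call once per epoch with the validation loss; returns the current lr."""
        if loss < self.best:
            self.best = loss
            self.bad_steps = 0
        else:
            self.bad_steps += 1
            if self.bad_steps > self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.bad_steps = 0
        return self.lr

sched = PlateauDecay(lr=0.1, patience=2)
lrs = [sched.step(l) for l in [1.0, 0.9, 0.9, 0.9, 0.9, 0.8]]
# the lr is halved after the loss fails to improve for patience + 1 steps
```

In real use you would pass the returned value into the optimizer each epoch; PyTorch's version additionally supports relative thresholds and cooldown periods.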
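The warm-restart schedule from SGDR can be sketched as cosine annealing within a fixed-length cycle; the paper also lets the cycle length grow by a factor after each restart, which is omitted here, and the constants are illustrative.

```python
import math

def sgdr_lr(t, eta_min=0.0, eta_max=0.1, cycle_len=10):
    """SGDR-style schedule: lr falls from eta_max to eta_min along a cosine
    over one cycle, then jumps back to eta_max (a "warm restart")."""
    t_cur = t % cycle_len  # position within the current cycle
    return eta_min + 0.5 * (eta_max - eta_min) * (
        1 + math.cos(math.pi * t_cur / cycle_len))

lrs = [sgdr_lr(t) for t in range(25)]
```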

==== Automatically Setting the Learning Rate ====

  * [[https://arxiv.org/pdf/2105.14526.pdf|Iyer et al 2021 - LRTuner: A Learning Rate Tuner for Deep Neural Networks]] Uses a quadratic approximation in the direction of descent to pick the step size. Seems to work well. Similar to L4.
  * [[https://arxiv.org/pdf/2111.15317.pdf|Teng et al 2021 - AutoDrop: Training Deep Learning Models with Automatic Learning Rate Drop]]
  * **[[https://arxiv.org/pdf/2306.00144.pdf|Cutkosky et al 2023 - Mechanic: A Learning Rate Tuner]]**

==== Parameter-Free Optimization ====
Optimization algorithms that do not require tuning a step size or other hyperparameters.

  * [[https://arxiv.org/pdf/2302.12022.pdf|Ivgi et al 2023 - DoG is SGD’s Best Friend: A Parameter-Free Dynamic Step Size Schedule]]
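To give a flavor of what "parameter-free" means in practice, here is a rough scalar sketch of DoG's step-size rule: the step size at time t is the maximum distance travelled from the initial point so far, divided by the square root of the accumulated squared gradient norms. The function names and the initial-movement constant `r_eps` are illustrative, and several details from the paper (e.g. how `r_eps` is scaled) are simplified here.

```python
import math

def dog_sgd(grad, x0, steps=2000, r_eps=1e-4):
    """Sketch of the DoG ("Distance over Gradients") step-size rule for a
    scalar objective. No learning rate is supplied by the user."""
    x = x0
    max_dist = r_eps      # running max of |x_i - x0|, seeded with a tiny value
    grad_sq_sum = 0.0     # running sum of squared gradients
    for _ in range(steps):
        g = grad(x)
        grad_sq_sum += g * g
        # parameter-free step size: distance travelled over gradient mass
        eta = max_dist / math.sqrt(grad_sq_sum) if grad_sq_sum > 0 else 0.0
        x = x - eta * g
        max_dist = max(max_dist, abs(x - x0))
    return x

# minimize f(x) = (x - 3)^2 from x = 0, with no learning rate to tune
x_star = dog_sgd(lambda x: 2 * (x - 3), x0=0.0)
```

The step size grows automatically as the iterate moves away from the start, then shrinks as gradient mass accumulates, which is what removes the need for a tuned learning rate.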
  
ml/learning_rate · Last modified: 2024/02/06 00:31 by jmflanig
