Meta-Learning
Overviews
AutoML
Deep Learning Papers
- Hochreiter & Younger 2001 - Learning to Learn Using Gradient Descent Amazing paper from Sepp Hochreiter, one of the inventors of the LSTM. Jürgen Schmidhuber talks about it here.
- Ravi & Larochelle 2016 - Optimization as a Model for Few-Shot Learning Proposes “an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network classifier in the few-shot regime.”
- Chen 2017 - Learning to Learn without Gradient Descent by Gradient Descent Learns a black-box (gradient-free) optimizer, which can be applied to hyperparameter tuning.
- Finn et al 2019 - Online Meta-Learning Introduces online meta-learning, where the learner sees a sequence of tasks one after another. Proposes Follow The Meta-Leader (FTML), a follow-the-leader-style meta-learning algorithm that extends MAML to this setting.
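Since several of the papers above build on MAML, a minimal first-order MAML sketch may help fix ideas. This is an illustrative toy, not the authors' code: the model is a one-parameter linear regressor, tasks differ only in their true slope, and all hyperparameters (`alpha`, `beta`, batch sizes) are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss_grad(w, x, a):
    """MSE loss and gradient for the linear model f(x) = w*x
    on a task whose true slope is a (targets are a*x)."""
    err = (w - a) * x
    loss = np.mean(err ** 2)
    grad = 2.0 * (w - a) * np.mean(x ** 2)
    return loss, grad

def maml_train(n_iters=200, tasks_per_batch=5, alpha=0.05, beta=0.05):
    """First-order MAML: adapt to each task with one inner gradient
    step, then update the meta-parameter with the post-adaptation
    query-set gradient (ignoring second-order terms)."""
    w = 0.0  # meta-parameter (the initialization being learned)
    for _ in range(n_iters):
        meta_grad = 0.0
        for _ in range(tasks_per_batch):
            a = rng.uniform(-2.0, 2.0)   # sample a task (its true slope)
            xs = rng.normal(size=10)     # support set
            xq = rng.normal(size=10)     # query set
            _, g_support = task_loss_grad(w, xs, a)
            w_adapted = w - alpha * g_support          # inner-loop step
            _, g_query = task_loss_grad(w_adapted, xq, a)
            meta_grad += g_query         # first-order outer gradient
        w -= beta * meta_grad / tasks_per_batch        # meta-update
    return w
```

After meta-training, one inner-loop step on a handful of examples from a new task should already lower that task's loss; that fast post-adaptation improvement is the quantity MAML optimizes, and it is the per-task objective that FTML tracks online as tasks arrive sequentially.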
Non-Deep Learning Papers
Meta-Learning in NLP
Related Pages
ml/meta-learning.txt · Last modified: 2023/11/09 19:44 by jmflanig