Neural Networks: Alternative Training Methods
Papers
- Unnikrishnan & Venugopal 1994 - Alopex: A Correlation-Based Learning Algorithm for Feedforward and Recurrent Neural Networks pdf A stochastic training algorithm that does not use gradients; instead it measures how stochastic changes in the weights correlate with changes in the loss function. The paper claims it can be used with discontinuous activation functions. This is a local search optimization method, similar to simulated annealing. Very simple to implement, and can be parallelized. Experiments in the paper show it is comparable to backprop in the number of iterations required. Used in Forcada 1997.
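A minimal sketch of the Alopex-style update on a toy loss. This is not the paper's exact algorithm: the function name and hyperparameters are illustrative, and the temperature annealing here is a simple exponential moving average of |correlation| standing in for the paper's window average.

```python
import numpy as np

def alopex_minimize(loss, w, steps=3000, delta=0.01, seed=0):
    """Alopex-style gradient-free minimization (sketch, not the paper's exact schedule)."""
    rng = np.random.default_rng(seed)
    prev_w = w.copy()
    prev_loss = loss(w)
    # First move: perturb every weight by +/- delta at random.
    w = w + delta * rng.choice([-1.0, 1.0], size=w.shape)
    T = 1e-3  # temperature; annealed toward the running mean |correlation| below
    for _ in range(steps):
        cur_loss = loss(w)
        # Correlation between each weight's last change and the loss change.
        c = (w - prev_w) * (cur_loss - prev_loss)
        T = 0.9 * T + 0.1 * float(np.mean(np.abs(c)))
        # Probability of stepping -delta; c > 0 means the last move hurt,
        # so the weight is pushed the other way (Boltzmann acceptance).
        p = 1.0 / (1.0 + np.exp(np.clip(-c / max(T, 1e-12), -50.0, 50.0)))
        step = np.where(rng.random(w.shape) < p, -delta, delta)
        prev_w, prev_loss = w, cur_loss
        w = w + step
    return w

# Usage: minimize a smooth loss without ever computing a gradient.
loss = lambda x: float(np.sum((x - 1.0) ** 2))
w = alopex_minimize(loss, np.zeros(4))
```

Note that the loss change is a single scalar shared by all weights; each weight only sees it through its own correlation term, which is what makes the method trivially parallel.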
- Such et al 2017 - Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning Trains neural networks with genetic algorithms instead of backprop.
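A toy mutation-only GA in the spirit of that paper (their GA likewise uses truncation selection, Gaussian mutation, elitism, and no crossover). Here the "network" is just a flat parameter vector and the fitness is a simple quadratic; all names and hyperparameters are illustrative.

```python
import numpy as np

def ga_optimize(fitness, dim, pop_size=50, generations=100,
                sigma=0.1, elite=10, seed=0):
    """Mutation-only genetic algorithm over a flat parameter vector (sketch)."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        # Truncation selection: keep the `elite` fittest individuals.
        parents = pop[np.argsort(scores)[-elite:]]
        # Children: a random elite parent plus Gaussian mutation noise.
        idx = rng.integers(elite, size=pop_size - 1)
        children = parents[idx] + sigma * rng.normal(size=(pop_size - 1, dim))
        # Elitism: carry the single best individual over unmutated.
        pop = np.vstack([parents[-1:], children])
    return pop[0]

# Usage: maximize fitness = -loss for a quadratic with optimum at (2, 2, 2).
best = ga_optimize(lambda p: -float(np.sum((p - 2.0) ** 2)), dim=3)
```

Because fitness evaluations are independent, the population loop parallelizes trivially across workers, which is the main practical appeal in the RL setting.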
- Baydin et al 2022 - Gradients without Backpropagation github Uses forward-mode automatic differentiation to compute a “forward gradient” (no backward pass as in backprop). Essentially it computes the directional derivative of the loss in a random direction. Scaling the direction by this directional derivative gives an unbiased estimate of the true gradient, which they plug into stochastic gradient descent. This has a number of important implications:
- Because this doesn't require storing the whole computation graph the way backprop does, computation nodes can be removed from the graph once they are no longer needed in further computation. For example, each layer of a transformer can be thrown away once it has been used. This could save GPU memory and perhaps allow much deeper networks.
- They could have computed a finite-difference approximation to the directional derivative by taking a small step in the random direction. This would allow computing the change in loss even for discontinuous functions.
- The direction doesn't have to be sampled from a normal distribution - the components only need to be independent with zero mean and unit variance. They could have sampled the components from {-1,1} (two discrete values). This would allow them to optimize binary neural networks with their technique.
- Follow-up work: Belouze 2022 - Optimization without Backpropagation
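The forward-gradient idea in a few lines, using a finite-difference directional derivative in place of the paper's exact forward-mode AD (as the note above suggests is possible); function and variable names are illustrative.

```python
import numpy as np

def forward_gradient(f, theta, v, eps=1e-6):
    """Unbiased gradient estimate from one directional derivative (sketch)."""
    # Directional derivative of f at theta along v, via a forward
    # finite difference; the paper computes this exactly with
    # forward-mode AD (one extra forward pass, no backward pass).
    d = (f(theta + eps * v) - f(theta)) / eps
    # Scaling the direction by the directional derivative gives an
    # unbiased gradient estimate, since E[v v^T] = I.
    return d * v

# Usage: plain SGD on a toy quadratic loss, no backward pass anywhere.
rng = np.random.default_rng(0)
f = lambda x: float(np.sum(x ** 2))
theta = np.ones(5)
for _ in range(500):
    # Components need only be independent, zero-mean, unit-variance;
    # standard normal (as in the paper) works, as would +/-1 draws.
    v = rng.standard_normal(theta.shape)
    theta -= 0.05 * forward_gradient(f, theta, v)
```

Swapping `rng.standard_normal(theta.shape)` for `rng.choice([-1.0, 1.0], size=theta.shape)` gives the {-1,1} variant discussed above.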
ml/alternative_training_methods.txt · Last modified: 2023/08/11 20:05 by jmflanig