Ensembling
Ensembling combines several models to improve generalization performance. For example, ensembling models trained with different random seeds almost always improves performance. This technique is often used when raw performance is the main objective, such as in competitions like WMT. In papers, however, because ensembling often gives a large improvement on its own, researchers usually keep comparisons fair by comparing non-ensembled methods to other non-ensembled methods, and ensembled methods to ensembled methods. See Gehring et al. 2017 for an example of this convention.
For models trained with cross-entropy, the standard method of ensembling is simply to average the class probabilities predicted by the individual models at test time and predict using the averaged distribution.
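This averaging can be sketched in a few lines. A minimal illustration, assuming each model's output is already a NumPy array of per-example class probabilities (the function name and toy arrays are hypothetical, not from any specific paper):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model class probabilities and predict the argmax.

    prob_list: list of arrays, each of shape (n_examples, n_classes),
    holding one model's predicted class probabilities.
    """
    avg = np.mean(prob_list, axis=0)  # elementwise mean over models
    return avg.argmax(axis=1)         # predicted class per example

# Toy example: two "models" over 3 classes and 2 examples.
p1 = np.array([[0.6, 0.3, 0.1],
               [0.2, 0.5, 0.3]])
p2 = np.array([[0.4, 0.5, 0.1],
               [0.1, 0.2, 0.7]])
print(ensemble_predict([p1, p2]))  # → [0 2]
```

Note that the averaging happens in probability space (after the softmax), not in logit space; averaging logits instead corresponds to a geometric mean of the distributions and generally behaves differently.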
Overviews
Papers
Theory
Related Pages
ml/ensembling.1659338170.txt.gz · Last modified: 2023/06/15 07:36 (external edit)