====== Loss Functions ======

A function that is minimized during training (using gradient descent or Adam, for example) is called a loss function.

===== Code Examples =====

  * Hugging Face
    * Custom loss in the Hugging Face trainer: [[https://huggingface.co/docs/transformers/main_classes/trainer|Trainer]]

===== List of Loss Functions =====

  * Cross-entropy (aka log loss, conditional log-likelihood, CRF loss)
    * There are many equivalent ways to write this loss function. One is to minimize $L(\mathcal{D}) = -\sum_{i=1}^{N} \log p(y_i|x_i)$, where $p(y|x) = \frac{e^{score(x,y)}}{\sum_{y'} e^{score(x,y')}}$.
    * The cross-entropy version writes it as $L(\mathcal{D}) = -\sum_{i=1}^{N}\sum_{y} p(y|x_i) \log p_\theta(y|x_i)$, but usually we plug in the empirical distribution $p(y|x_i) = I[y=y_i]$, which gives the log loss above.
    * Cross-entropy loss can also be written as \[ L(\theta,\mathcal{D}) = \sum_{(x_i,y_i)\in\mathcal{D}} \Big( -score_\theta(x_i,y_i) + \log \sum_{y \in \mathcal{Y}(x_i)} e^{score_\theta(x_i,y)} \Big) \]
    * This is often called the Conditional Random Field (CRF) loss.
    * The minimum of the cross-entropy loss does not always exist; in particular, it does not exist if the training data can be completely separated. See, for example, Section 1.1 of [[https://arxiv.org/pdf/1804.09753.pdf|this paper]].
  * Perceptron loss \[ L(\theta,\mathcal{D}) = \sum_{(x_i,y_i)\in\mathcal{D}} \Big( -score_\theta(x_i,y_i) + \max_{y \in \mathcal{Y}(x_i)} score_\theta(x_i,y) \Big) \]
  * Hinge (SVM) loss \[ L(\theta,\mathcal{D}) = \sum_{(x_i,y_i)\in\mathcal{D}} \Big( -score_\theta(x_i,y_i) + \max_{y \in \mathcal{Y}(x_i)} \big(score_\theta(x_i,y) + cost(y_i,y)\big) \Big) \]
  * Softmax margin
    * [[https://www.aclweb.org/anthology/N10-1112.pdf|Gimpel & Smith 2010 - Softmax-Margin CRFs: Training Log-Linear Models with Cost Functions]]
    * [[https://arxiv.org/pdf/1612.02295.pdf|Large-Margin Softmax Loss for Convolutional Neural Networks]] (L-Softmax).
      * Does not cite [[https://www.aclweb.org/anthology/N10-1112.pdf|Gimpel & Smith]]; I suspect it may be a different loss, but this needs checking.
    * The softmax margin loss is obtained by replacing the max in the SVM loss with a softmax: \[ L(\theta,\mathcal{D}) = \sum_{(x_i,y_i)\in\mathcal{D}} \Big( -score_\theta(x_i,y_i) + \log \sum_{y \in \mathcal{Y}(x_i)} e^{score_\theta(x_i,y) + cost(y_i,y)} \Big) \]
  * Risk \[ L(\theta,\mathcal{D}) = \sum_{(x_i,y_i)\in\mathcal{D}} \frac{\sum_{y\in\mathcal{Y}(x_i)} cost(y_i,y)\, e^{score_\theta(x_i,y)}}{\sum_{y\in\mathcal{Y}(x_i)} e^{score_\theta(x_i,y)}} \] or, equivalently, \[ L(\theta,\mathcal{D}) = \sum_{(x_i,y_i)\in\mathcal{D}} \sum_{y\in\mathcal{Y}(x_i)} cost(y_i,y)\, p_\theta(y|x_i) \]
  * Ramp loss
  * Soft ramp loss
  * Infinite ramp loss
  * Squared error loss
    * [[https://arxiv.org/pdf/2006.07322.pdf|Hui & Belkin 2020 - Evaluation of Neural Architectures Trained with Square Loss vs Cross-Entropy in Classification Tasks]]
  * Squentropy (cross-entropy + squared error)
    * [[https://arxiv.org/pdf/2302.03952.pdf|Hui et al 2023 - Cut your Losses with Squentropy]]

===== Related Pages =====

  * [[NN Training|Training]]
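===== Worked Example =====

As a minimal sketch (not tied to any particular library), the structured losses listed above can be written directly in NumPy for a single example. Here ''scores'' holds $score_\theta(x_i,y)$ for each $y \in \mathcal{Y}(x_i)$, ''cost'' holds $cost(y_i,y)$, and ''gold'' is the index of $y_i$; all names are illustrative:

```python
import numpy as np

def cross_entropy_loss(scores, gold):
    # -score(x, y_gold) + log sum_y exp(score(x, y)), the CRF / log loss,
    # computed with a numerically stable log-sum-exp
    m = scores.max()
    return -scores[gold] + m + np.log(np.exp(scores - m).sum())

def perceptron_loss(scores, gold):
    # -score(x, y_gold) + max_y score(x, y)
    return -scores[gold] + scores.max()

def hinge_loss(scores, gold, cost):
    # -score(x, y_gold) + max_y (score(x, y) + cost(y_gold, y))
    return -scores[gold] + (scores + cost).max()

def softmax_margin_loss(scores, gold, cost):
    # like hinge, but the max is replaced by a softmax (log-sum-exp)
    aug = scores + cost
    m = aug.max()
    return -scores[gold] + m + np.log(np.exp(aug - m).sum())

def risk(scores, cost):
    # sum_y cost(y_gold, y) p(y | x), the expected cost under the model
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return (cost * p).sum()
```

For example, with ''scores = np.array([2.0, 0.5, -1.0])'', ''cost = np.array([0.0, 1.0, 1.0])'', and ''gold = 0'', the gold label already has both the highest score and the highest cost-augmented score, so the perceptron and hinge losses are zero while the cross-entropy and softmax-margin losses remain positive.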