Loss Functions

  • Cross-entropy (aka log loss, conditional log-likelihood, CRF loss)
    • There are many equivalent ways to write this loss function. One is to minimize $L(\mathcal{D}) = -\sum_{i=1}^{N} \log p(y_i|x_i)$, where $p(y|x) = \frac{e^{score(x,y)}}{\sum_{y'} e^{score(x,y')}}$.
    • The cross-entropy version writes it as $L(\mathcal{D}) = -\sum_{i=1}^{N}\sum_{y} p(y|x_i) \log p_\theta(y|x_i)$, where $p(y|x_i)$ is a target distribution; usually we plug in the empirical distribution $p(y|x_i) = I[y=y_i]$, which recovers the log loss above.
    • The minimum of the cross-entropy loss does not always exist; in particular, it does not exist if the training data can be completely separated. See, for example, Section 1.1 of this paper. (A code sketch of this loss follows the list.)
  • Perceptron loss \[ L(\theta,\mathcal{D}) = \sum_{(x_i,y_i)\in\mathcal{D}} \Big( -score_\theta(x_i,y_i) + \max_{y \in \mathcal{Y}(x_i)} score_\theta(x_i,y) \Big) \]
  • Hinge (SVM) loss \[ L(\theta,\mathcal{D}) = \sum_{(x_i,y_i)\in\mathcal{D}} \Big( -score_\theta(x_i,y_i) + \max_{y \in \mathcal{Y}(x_i)} \big(score_\theta(x_i,y) + cost(y_i,y)\big) \Big) \]
  • Softmax margin
  • Ramp loss (see the sketch after this list) \[ L(\theta,\mathcal{D}) = \sum_{(x_i,y_i)\in\mathcal{D}} \Big( -\max_{y \in \mathcal{Y}(x_i)} score_\theta(x_i,y) + \max_{y \in \mathcal{Y}(x_i)} \big(score_\theta(x_i,y) + cost(y_i,y)\big) \Big) \]
  • Soft ramp loss
  • Infinite ramp loss
  • Squared error loss
  • Squentropy (Cross-entropy + squared error)
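
A minimal NumPy sketch of the log loss above, assuming $\mathcal{Y}(x_i)$ is a small finite label set so the scores fit in an array; the names scores, gold, and log_loss are illustrative, not from any particular library.

<code python>
import numpy as np

def log_loss(scores, gold):
    """Negative log-likelihood -sum_i log p(y_i | x_i), with p(y | x_i) a softmax
    over score(x_i, y).

    scores: (N, K) array with scores[i, y] = score(x_i, y)
    gold:   (N,) array of gold label indices y_i
    """
    # Stable log Z(x_i) = log sum_y exp(score(x_i, y)), computed with a max shift.
    m = scores.max(axis=1, keepdims=True)
    log_Z = (m + np.log(np.exp(scores - m).sum(axis=1, keepdims=True))).squeeze(1)
    log_p_gold = scores[np.arange(scores.shape[0]), gold] - log_Z
    return -log_p_gold.sum()

# Toy usage: two examples, three labels.
scores = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2, 0.3]])
gold = np.array([0, 2])
print(log_loss(scores, gold))
</code>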
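The perceptron and hinge losses above differ only in whether the inner max is cost-augmented. A sketch under the same finite-label-set conventions, where the assumed cost array satisfies cost[i, y] = cost(y_i, y) (so cost[i, gold[i]] = 0):

<code python>
import numpy as np

def perceptron_loss(scores, gold):
    """sum_i ( -score(x_i, y_i) + max_y score(x_i, y) )"""
    n = scores.shape[0]
    return (scores.max(axis=1) - scores[np.arange(n), gold]).sum()

def hinge_loss(scores, gold, cost):
    """sum_i ( -score(x_i, y_i) + max_y (score(x_i, y) + cost(y_i, y)) )"""
    n = scores.shape[0]
    return ((scores + cost).max(axis=1) - scores[np.arange(n), gold]).sum()
</code>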
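Softmax margin is usually written by replacing the max in the hinge loss with a log-sum-exp (Gimpel and Smith's softmax-margin CRFs): $L(\theta,\mathcal{D}) = \sum_{(x_i,y_i)\in\mathcal{D}} \big( -score_\theta(x_i,y_i) + \log \sum_{y\in\mathcal{Y}(x_i)} e^{score_\theta(x_i,y) + cost(y_i,y)} \big)$. A sketch in the same conventions:

<code python>
import numpy as np

def logsumexp(a, axis):
    # Numerically stable log sum exp along the given axis.
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def softmax_margin_loss(scores, gold, cost):
    """sum_i ( -score(x_i, y_i) + log sum_y exp(score(x_i, y) + cost(y_i, y)) )"""
    n = scores.shape[0]
    return (logsumexp(scores + cost, axis=1) - scores[np.arange(n), gold]).sum()
</code>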
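A sketch of the ramp loss as written above; note the gold label enters only through the cost term, not through a gold score, which is what distinguishes it from the hinge loss.

<code python>
import numpy as np

def ramp_loss(scores, cost):
    """sum_i ( -max_y score(x_i, y) + max_y (score(x_i, y) + cost(y_i, y)) )

    cost[i, y] = cost(y_i, y) as in the hinge-loss sketch; the gold label is
    used only inside the cost.
    """
    return ((scores + cost).max(axis=1) - scores.max(axis=1)).sum()
</code>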
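For the last two items: squared error is the standard regression loss, and squentropy (Hui, Belkin, and Wright, 2023) adds to the cross-entropy an average square loss over the incorrect classes. The exact form of that second term below is an assumed reading of the squentropy paper (incorrect-class logits pushed toward zero), so treat it as a sketch rather than a reference implementation.

<code python>
import numpy as np

def squared_error_loss(pred, target):
    """sum_i (pred_i - target_i)^2 for real-valued predictions."""
    return ((pred - target) ** 2).sum()

def squentropy_loss(scores, gold):
    """Cross-entropy plus (assumed form) the mean squared logit over incorrect classes."""
    n, k = scores.shape
    # Cross-entropy term, as in the log-loss sketch above.
    m = scores.max(axis=1, keepdims=True)
    log_Z = (m + np.log(np.exp(scores - m).sum(axis=1, keepdims=True))).squeeze(1)
    ce = (log_Z - scores[np.arange(n), gold]).sum()
    # Square term: push the logits of the incorrect classes toward zero (assumption).
    mask = np.ones_like(scores, dtype=bool)
    mask[np.arange(n), gold] = False
    sq = (scores[mask] ** 2).reshape(n, k - 1).mean(axis=1).sum()
    return ce + sq
</code>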