Loss Functions

  • Cross-entropy (aka log loss, conditional log-likelihood, CRF loss)
    • There are several equivalent ways to write this loss function. One is to minimize $L(\mathcal{D}) = -\sum_{i=1}^{N} \log p(y_i \mid x_i)$, where $p(y \mid x) = \frac{e^{\mathrm{score}(x,y)}}{\sum_{y'} e^{\mathrm{score}(x,y')}}$
    • The cross-entropy version writes it as $L(\mathcal{D}) = -\sum_{i=1}^{N}\sum_{y} \tilde{p}(y \mid x_i) \log p_\theta(y \mid x_i)$, but usually we plug in the empirical distribution $\tilde{p}(y \mid x_i) = I[y = y_i]$, which recovers the log loss above.
  • Perceptron loss
  • Hinge (SVM) loss
  • Softmax margin
  • Ramp loss
  • Soft ramp loss
  • Infinite ramp loss
