Loss Functions

A loss function is a function that is minimized during training, for example with gradient descent or Adam.

Code Examples
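A minimal sketch of training as loss minimization: a single-parameter model fit by plain gradient descent on a squared-error loss. The function and variable names here are illustrative, not from any particular library.

```python
# Minimize the squared-error loss L(theta) = sum_i (theta * x_i - y_i)^2
# with plain gradient descent on a one-parameter linear model.
def gradient_descent(xs, ys, lr=0.01, steps=200):
    theta = 0.0
    for _ in range(steps):
        # dL/dtheta = sum_i 2 * (theta * x_i - y_i) * x_i
        grad = sum(2 * (theta * x - y) * x for x, y in zip(xs, ys))
        theta -= lr * grad
    return theta

# The data follow y = 2x, so theta converges toward 2.
print(gradient_descent([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))
```

Adam differs only in how the gradient is turned into an update (per-parameter adaptive step sizes with momentum); the loss being minimized is the same.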

List of Loss Functions

Negative log-likelihood (softmax cross-entropy over the candidate outputs \(\mathcal{Y}(x_i)\)):

\[ L(\theta,\mathcal{D}) = \sum_{(x_i,y_i)\in\mathcal{D}} \Big( -\mathrm{score}_\theta(x_i,y_i) + \log \sum_{y \in \mathcal{Y}(x_i)} e^{\mathrm{score}_\theta(x_i,y)} \Big) \]

Expected cost (risk), which weights the cost of each candidate output by its model probability:

\[ L(\theta,\mathcal{D}) = \sum_{(x_i,y_i)\in\mathcal{D}} \sum_{y\in\mathcal{Y}(x_i)} \mathrm{cost}(y_i,y)\, p_\theta(y|x_i) \]
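One term of this sum can be sketched as follows, with p_theta(y|x_i) taken to be the softmax of the scores (an assumption consistent with the previous loss; the names `scores`, `gold`, and `cost` are illustrative). With a 0/1 cost the expected cost reduces to the probability mass placed on incorrect outputs.

```python
import math

def softmax(scores):
    # p_theta(y|x) as a softmax over scores, shifted by the max for stability.
    m = max(scores.values())
    z = sum(math.exp(s - m) for s in scores.values())
    return {y: math.exp(s - m) / z for y, s in scores.items()}

def expected_cost(scores, gold, cost):
    """sum_y cost(gold, y) * p_theta(y|x), one term of the sum over D."""
    p = softmax(scores)
    return sum(cost(gold, y) * p[y] for y in scores)

# 0/1 cost: the loss equals 1 - p(gold).
zero_one = lambda gold, y: 0.0 if y == gold else 1.0
print(expected_cost({"A": 2.0, "B": 1.0, "C": 0.0}, "A", zero_one))
```

Unlike the negative log-likelihood, this loss can incorporate task-specific costs (e.g. partial credit for near-miss outputs) by swapping in a different cost function.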