ml:loss_functions
  * Cross-entropy loss can be written as
\[
L(\theta, x, y) = -\log p_\theta(y \mid x) = -s_\theta(x, y) + \log \sum_{y'} \exp s_\theta(x, y')
\]
where \(s_\theta(x, y)\) is the model's score for output \(y\) given input \(x\)
  * This is often called the Conditional Random Field (CRF) loss
  * The minimum of cross-entropy loss does not always exist; in particular, it does not exist if the training data can be completely separated.
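A minimal numpy sketch of per-example cross-entropy loss over a small label set (the score vector here is illustrative). Scaling the scores of already-separated data shows why the minimum can fail to exist:

```python
import numpy as np

def cross_entropy_loss(scores, y):
    """Negative log-probability of the gold label y under a softmax
    over a vector of per-label scores: -log p(y | x)."""
    m = scores.max()
    log_z = m + np.log(np.sum(np.exp(scores - m)))  # stable log partition function
    return log_z - scores[y]

# If the gold label already scores highest (separable data), scaling the
# scores up drives the loss toward 0 without ever reaching it, so the loss
# has an infimum of 0 but no minimizer.
scores = np.array([2.0, -1.0, 0.5])
for c in (1, 10, 100):
    print(cross_entropy_loss(c * scores, 0))  # strictly positive, shrinking toward 0
```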
  * Perceptron loss \[
L(\theta, x, y) = \max_{y'} s_\theta(x, y') - s_\theta(x, y)
\]
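A sketch of perceptron loss under the same assumed per-label score vector: the best score of any label minus the score of the gold label, so it is zero exactly when the gold label already wins.

```python
import numpy as np

def perceptron_loss(scores, y):
    """Highest score of any label minus the score of the gold label.
    Zero exactly when the gold label already scores highest."""
    return np.max(scores) - scores[y]

print(perceptron_loss(np.array([2.0, -1.0, 0.5]), 0))  # gold wins -> 0.0
print(perceptron_loss(np.array([2.0, 3.0, 0.5]), 0))   # best rival wins by 1.0
```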
  * [[https://
  * [[https://
  * The softmax margin loss is obtained by replacing the max in the SVM loss with a softmax: \[
L(\theta, x, y) = \log \sum_{y'} \exp\bigl( s_\theta(x, y') + \mathrm{cost}(y, y') \bigr) - s_\theta(x, y)
\]
  * Risk \[
L(\theta, x, y) = \sum_{y'} p_\theta(y' \mid x)\,\mathrm{cost}(y, y')
\]
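A sketch of risk as the expected cost under the model's own softmax distribution, again with an illustrative score vector and 0/1 cost matrix (in which case risk is simply one minus the gold label's probability):

```python
import numpy as np

def risk(scores, y, cost):
    """Expected cost of the model's prediction under its own softmax
    distribution over labels."""
    m = scores.max()
    p = np.exp(scores - m)
    p /= p.sum()                      # p_theta(y' | x) via a stable softmax
    return float(np.dot(p, cost[y]))  # sum over y' of p(y'|x) * cost(y, y')

# With a 0/1 cost matrix, risk equals 1 - p_theta(y | x).
scores = np.array([2.0, -1.0, 0.5])
cost = 1.0 - np.eye(3)
print(risk(scores, 0, cost))
```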
ml/loss_functions.1721694486.txt.gz · Last modified: 2024/07/23 00:28 by jmflanig