Confidence
Evaluation Measures
TODO: literature review for evaluation measures of confidence scores.
In NLP
(search the ACL Anthology for “confidence scores”)
- Culotta & McCallum 2003 - Confidence Estimation for Information Extraction. Uses three evaluation metrics for confidence scores:
- “Pearson’s r, a correlation coefficient ranging from -1 to 1 that measures the correlation between a confidence score and whether or not the field (or record) is correctly labeled.”
- “average precision, used in the Information Retrieval community… the precision at each point in the ranked list where a relevant document is found and then averages these values. Instead of ranking documents by their relevance score, here we rank fields (and records) by their confidence score, where a correctly labeled field is analogous to a relevant document”
- “accuracy-coverage graph. Better confidence estimates push the curve to the upper-right”
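The three metrics above can be sketched in code. This is a minimal illustration with hypothetical toy data, not the paper's implementation: `correct` marks whether each extracted field was labeled correctly, and fields are ranked by their confidence score.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between confidence scores and 0/1 correctness."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def average_precision(confidences, correct):
    """Rank fields by confidence; average the precision at each rank
    where a correctly labeled field appears (correct field ~ relevant doc)."""
    ranked = sorted(zip(confidences, correct), key=lambda t: -t[0])
    precisions, hits = [], 0
    for i, (_, c) in enumerate(ranked, start=1):
        if c:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(precisions)

def accuracy_coverage(confidences, correct):
    """(coverage, accuracy) over the top-k most confident predictions,
    for each k; better confidence estimates push the curve up and right."""
    ranked = sorted(zip(confidences, correct), key=lambda t: -t[0])
    pts, hits = [], 0
    for k, (_, c) in enumerate(ranked, start=1):
        hits += c
        pts.append((k / len(ranked), hits / k))
    return pts

# Hypothetical toy data: 5 fields, ranked by confidence.
confidences = [0.9, 0.8, 0.6, 0.4, 0.2]
correct = [1, 1, 0, 1, 0]
r = pearson_r(confidences, correct)
ap = average_precision(confidences, correct)
curve = accuracy_coverage(confidences, correct)
```

Note that average precision here ranks fields by confidence rather than documents by relevance, exactly as the quoted passage describes.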
- Nguyen Bach 2011 - Goodness: A Method for Measuring Machine Translation Confidence. Has a good explanation of MT confidence.
- 2018 - Confidence Modeling for Neural Semantic Parsing. Measures “the relationship between confidence scores and F1 using Spearman’s ρ correlation coefficient which varies between −1 and 1 (0 implies there is no correlation).”
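Spearman’s ρ is just Pearson’s r computed on the rank vectors (with ties given their average rank). A small sketch, with hypothetical per-example confidence and F1 values rather than anything from the paper:

```python
def ranks(xs):
    """1-based ranks of xs; tied values share their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1  # mean of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(xs, ys):
    """Pearson correlation of the two rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical example: confidence vs. per-example F1.
conf_scores = [0.95, 0.80, 0.60, 0.30]
f1_scores = [1.00, 0.70, 0.75, 0.20]
rho = spearman_rho(conf_scores, f1_scores)
```

Because it works on ranks, ρ rewards any monotone relationship between confidence and F1, not just a linear one.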
ml/confidence.1614247166.txt.gz · Last modified: 2023/06/15 07:36 (external edit)