Evaluation
Papers
Robust Evaluation
- Ribeiro et al. 2020 - Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. Excellent paper; won a Best Paper Award at ACL 2020.
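CheckList evaluates models with behavioral tests rather than a single held-out accuracy number, e.g. invariance (INV) tests, where label-preserving perturbations should not change the prediction. A minimal sketch of that idea, using a hypothetical keyword-based toy model in place of a real classifier:

```python
def toy_sentiment(text: str) -> str:
    # Hypothetical stand-in for a real sentiment model, for illustration only.
    return "pos" if "good" in text.lower() else "neg"

def invariance_test(model, pairs):
    """INV-style test: a label-preserving edit should not flip the prediction.

    Returns the list of (original, perturbed) pairs where the model's
    output changed, i.e. the behavioral test failures.
    """
    failures = []
    for original, perturbed in pairs:
        if model(original) != model(perturbed):
            failures.append((original, perturbed))
    return failures

# Example perturbations: added punctuation, added an irrelevant name.
pairs = [
    ("The food was good.", "The food was good!!"),
    ("Good service overall.", "Good service overall, John."),
]
print(invariance_test(toy_sentiment, pairs))  # → []
```

An empty failure list means the toy model passes this particular test; the paper's actual templates, perturbation functions, and test suites are far richer than this sketch.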
Natural Language Output
To evaluate natural language output, researchers typically use automatic n-gram overlap metrics such as BLEU, or human evaluation; for summarization, ROUGE is the usual choice.
See also Generation - Evaluation, Machine Translation - Evaluation, and Dialog - Evaluation.
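As a rough illustration of how these n-gram overlap metrics work, here is a minimal sentence-level ROUGE-1 F1 (unigram overlap) computed from scratch. This is only a sketch; real evaluations should use established implementations such as sacrebleu (for BLEU) or the rouge-score package.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Sentence-level ROUGE-1 F1: harmonic mean of unigram
    precision and recall between candidate and reference.
    Tokenization here is a naive whitespace split (assumption)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 5 of 6 candidate unigrams match the 6-token reference: P = R = 5/6.
print(round(rouge1_f1("the cat sat on the mat",
                      "the cat is on the mat"), 3))  # → 0.833
```

BLEU works on the same overlap idea but combines precision over several n-gram orders with a brevity penalty, which is why library implementations are preferred over ad hoc ones.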
nlp/evaluation.1631072554.txt.gz · Last modified: 2023/06/15 07:36 (external edit)