  * **[[https://arxiv.org/pdf/1907.11932.pdf|Jin et al 2019 - Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment]]**
  * [[http://proceedings.mlr.press/v119/srivastava20a/srivastava20a.pdf|Srivastava et al 2020 - Robustness to Spurious Correlations via Human Annotations]]
  * [[https://arxiv.org/pdf/2007.06778.pdf|Tu et al 2020 - An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models]]
  * [[https://arxiv.org/pdf/2002.00293.pdf|Bartolo et al 2020 - Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension]] Applies adversarial filtering to QA
  * [[https://arxiv.org/pdf/2004.14004.pdf|Si et al 2020 - Benchmarking Robustness of Machine Reading Comprehension Models]]
  * **[[https://arxiv.org/pdf/2005.04118.pdf|Ribeiro et al 2020 - Beyond Accuracy: Behavioral Testing of NLP Models with CheckList]]**
  * [[https://arxiv.org/pdf/2010.03656.pdf|Rosenman et al 2020 - Exposing Shallow Heuristics of Relation Extraction Models with Challenge Data]] Shows that deep learning relation extraction systems usually rely on shallow heuristics
  * [[https://aclanthology.org/2021.acl-short.43.pdf|Lin et al 2021 - Using Adversarial Attacks to Reveal the Statistical Bias in Machine Reading Comprehension Models]]
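The behavioral-testing idea in CheckList (Ribeiro et al 2020 above) can be sketched with a minimal invariance test: apply a perturbation that should not change the label (here, swapping person names) and flag any input where the prediction flips. The ''toy_sentiment'' classifier below is a hypothetical stand-in with a deliberate name-based artifact, not part of any cited system; a real test would wrap an actual model's predict function.

```python
def toy_sentiment(text: str) -> str:
    """Hypothetical toy classifier with a deliberate spurious artifact."""
    if "terrible" in text:
        return "negative"
    # Spurious heuristic: the model reacts to a specific name token.
    if "John" in text:
        return "negative"
    return "positive"


def invariance_test(predict, template: str, fillers: list) -> list:
    """Return the fillers whose substitution flips the prediction
    relative to the first filler -- each one is an invariance failure."""
    baseline = predict(template.format(fillers[0]))
    return [f for f in fillers[1:] if predict(template.format(f)) != baseline]


failures = invariance_test(
    toy_sentiment,
    "{} really enjoyed the movie.",
    ["Mary", "Alice", "John"],
)
print(failures)  # -> ['John']: the name artifact is exposed; a robust model yields []
```

CheckList generalizes this pattern with templated test suites (invariance, directional expectation, and minimum-functionality tests) rather than a single perturbation.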
  
  
===== Conferences, Workshops, and Shared Tasks =====
  * [[https://www.aclweb.org/anthology/W17-5401.pdf|Ettinger et al 2017 - Towards Linguistically Generalizable NLP Systems: A Workshop and Shared Task]]
  * [[https://bibinlp.umiacs.umd.edu/|Build It, Break It: The Language Edition]]
  * [[https://generalizablenlp.weebly.com/|EMNLP 2017 Workshop - Building Linguistically Generalizable NLP Systems]]
  
===== Related Pages =====
  * [[Bias#Dataset Bias (Annotation Artifacts)]]
  * [[ml:Distribution Shift]]
  * [[Evaluation#Robust Evaluation]]
  
nlp/robustness_in_nlp.1619692397.txt.gz · Last modified: 2023/06/15 07:36 (external edit)
