nlp:bias (revisions 2022/08/01 06:06 to 2025/05/14 18:36, current; edited by jmflanig)

    * [[https://arxiv.org/pdf/2103.00453.pdf|Schick et al 2021 - Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP]]
  
==== In Large Language Models ====
    * [[https://arxiv.org/pdf/2311.04892.pdf|Gupta et al 2023 - Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs]]
  
  
    * [[https://www.aclweb.org/anthology/P19-1161v2.pdf|Zmigrod et al 2019 - Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology]]
    * BUG dataset: [[https://arxiv.org/pdf/2109.03858.pdf|Levy et al 2021 - Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation]]
    * [[https://aclanthology.org/2022.findings-acl.55.pdf|Gupta et al 2022 - Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal]]
    * Data Augmentation
      * [[https://aclanthology.org/P19-1161v2.pdf|Zmigrod et al 2019 - Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology]]
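The counterfactual data augmentation papers above share a simple core idea: pair each training sentence with a gender-swapped counterpart so the model cannot learn a correlation between gendered terms and labels. Below is a minimal English-only sketch of that idea; the word list and function names are illustrative, and this is much simpler than the morphology-aware method of Zmigrod et al, which must also repair grammatical agreement after each swap.

```python
# Minimal sketch of counterfactual data augmentation (CDA) for English:
# augment a corpus by swapping gendered word pairs so the model sees both
# variants of each sentence. Illustrative only; the intervention list in
# real systems is curated and POS-aware.
import re

# Hypothetical, tiny intervention list.
GENDER_PAIRS = {"he": "she", "him": "her", "his": "her",
                "man": "woman", "brother": "sister"}
# Make the map bidirectional. Note: English "her" is ambiguous (him/his),
# which is one reason real CDA systems use POS tags to disambiguate.
SWAP = {**GENDER_PAIRS, **{v: k for k, v in GENDER_PAIRS.items()}}

def counterfactual(sentence: str) -> str:
    """Return the sentence with each gendered term swapped for its counterpart."""
    def repl(m):
        word = m.group(0)
        swapped = SWAP[word.lower()]
        # Preserve sentence-initial capitalization.
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAP) + r")\b"
    return re.sub(pattern, repl, sentence, flags=re.IGNORECASE)

def augment(corpus):
    """Pair each sentence with its counterfactual to balance the training set."""
    return [s for sent in corpus for s in (sent, counterfactual(sent))]
```

For example, `augment(["He gave his brother a book."])` yields both the original sentence and "She gave her sister a book.", doubling the corpus while balancing the gendered contexts.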
  * [[https://arxiv.org/pdf/1805.01042.pdf|Poliak et al 2018 - Hypothesis Only Baselines in Natural Language Inference]]
  * [[https://arxiv.org/pdf/1803.02324.pdf|Gururangan et al 2018 - Annotation Artifacts in Natural Language Inference Data]]
  * [[https://arxiv.org/pdf/1902.01007.pdf|McCoy et al 2019 - Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference]]
  * [[https://arxiv.org/pdf/1908.07898.pdf|Geva et al 2019 - Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets]]
  * [[https://arxiv.org/pdf/2204.12708|Schwartz & Stanovsky 2022 - On the Limitations of Dataset Balancing: The Lost Battle Against Spurious Correlations]]
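The hypothesis-only results above have a concrete diagnostic flavor: if a classifier that never sees the premise still beats the majority-class baseline, the labels are leaking through artifacts in the hypotheses (for instance, Gururangan et al report negation words correlating with the contradiction label). The toy premise-blind probe below illustrates the diagnostic; the cue lists and data are invented for illustration, and the real baselines train full classifiers rather than hand-written rules.

```python
# Toy hypothesis-only probe for NLI, illustrating annotation artifacts:
# predict a label from the hypothesis alone, using cue classes of the
# kind reported in the artifact literature (cues here are made up).
def hypothesis_only_predict(hypothesis: str) -> str:
    """Predict an NLI label without ever looking at the premise."""
    tokens = hypothesis.lower().split()
    # Negation words tend to mark contradictions.
    if any(t in {"not", "no", "never", "nobody"} for t in tokens):
        return "contradiction"
    # Vague generics tend to mark entailments.
    if any(t in {"some", "people", "outside", "animal"} for t in tokens):
        return "entailment"
    return "neutral"

def artifact_accuracy(pairs):
    """Accuracy of the premise-blind probe on (hypothesis, label) pairs.
    Anything well above chance signals label leakage in the hypotheses."""
    hits = sum(hypothesis_only_predict(h) == y for h, y in pairs)
    return hits / len(pairs)
```

Run on a real NLI dev set, a probe like this scoring well above the majority-class rate is the red flag these papers raise: the dataset, not the task, is being modeled.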
  
==== Reducing Annotation Artifacts During Dataset Creation ====
nlp/bias.1659333996.txt.gz · Last modified: 2023/06/15 07:36 (external edit)
