====== Entailment ======

===== General Entailment Papers =====

  * [[https://arxiv.org/pdf/1802.08535.pdf|Evans et al 2018 - Can Neural Networks Understand Logical Entailment?]]

===== Recognizing Textual Entailment =====

RTE, also known as natural language inference (NLI).

==== Papers ====

  * [[https://arxiv.org/pdf/1508.05326.pdf|Bowman et al 2015 - A large annotated corpus for learning natural language inference]] SNLI dataset
  * ESIM model: [[https://arxiv.org/pdf/1609.06038.pdf|Chen et al 2016 - Enhanced LSTM for Natural Language Inference]] A famous model; the best pre-BERT NLI model
  * [[https://arxiv.org/pdf/1704.05426.pdf|Williams et al 2017 - A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference]] MNLI dataset
  * [[https://arxiv.org/pdf/2009.14505.pdf|Joshi et al 2020 - TaxiNLI: Taking a Ride up the NLU Hill]]
  * **[[https://aclanthology.org/2022.emnlp-main.251.pdf|Stacey et al 2022 - Logical Reasoning with Span-Level Predictions for Interpretable and Robust NLI Models]]**

=== Natural Language Explanations ===

See also [[nlp:explainability#Natural Language Explanations]]

  * [[https://arxiv.org/pdf/1812.01193.pdf|Camburu et al 2018 - e-SNLI: Natural Language Inference with Natural Language Explanations]]
  * [[https://aclanthology.org/2020.acl-main.771.pdf|Kumar & Talukdar 2020 - NILE: Natural Language Inference with Faithful Natural Language Explanations]] Does not beat the RoBERTa baseline
  * [[https://arxiv.org/pdf/2012.09157.pdf|Zhao et al 2020 - LIREx: Augmenting Language Inference with Relevant Explanation]] Improves upon NILE and beats the RoBERTa baseline

==== Datasets ====

  * SNLI: [[https://nlp.stanford.edu/projects/snli/|Stanford Natural Language Inference (SNLI) Corpus]] paper: [[https://arxiv.org/pdf/1508.05326.pdf|Bowman 2015]]
  * SNLI-hard: hard examples from SNLI (from [[https://arxiv.org/pdf/1803.02324.pdf|Gururangan et al 2018 - Annotation Artifacts in Natural Language Inference Data]])
  * MNLI: 
[[https://cims.nyu.edu/~sbowman/multinli/|Multi-Genre NLI Corpus]] paper: [[https://arxiv.org/pdf/1704.05426.pdf|Williams 2017]]
  * e-SNLI: [[https://arxiv.org/pdf/1812.01193.pdf|paper]]
  * SICK: [[https://marcobaroni.org/composes/sick.html|Sentences Involving Compositional Knowledge]] paper: [[https://aclanthology.org/L14-1314/|Marelli 2014]] Used in many recent papers, see for example [[https://preview.aclanthology.org/emnlp-22-ingestion/2022.emnlp-main.251.pdf|Stacey 2022]]

==== Applications ====

  * In Question Answering
    * [[https://aclanthology.org/2021.findings-emnlp.324.pdf|Chen et al 2021 - Can NLI Models Verify QA Systems’ Predictions?]]

===== Types of Entailment =====

{{media:nli.png}}\\
Figure from [[https://arxiv.org/pdf/2109.08927.pdf|Wu 2021]] and [[https://aclanthology.org/W09-3714.pdf|MacCartney and Manning 2009]].

===== Related Pages =====

  * [[nlp:explainability#Natural Language Explanations]]
  * [[Paraphrase]]
  * [[Semantics]]
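The SNLI and MNLI corpora listed above both frame NLI as three-way classification of a premise/hypothesis pair into entailment, contradiction, or neutral. A minimal sketch of that data format in Python (the ''NLIExample'' class and field names here are illustrative, not from any dataset's official loader):

```python
from dataclasses import dataclass

# The three gold labels used by SNLI and MNLI.
NLI_LABELS = ("entailment", "contradiction", "neutral")

@dataclass
class NLIExample:
    """A single premise/hypothesis pair with a gold label (illustrative)."""
    premise: str
    hypothesis: str
    label: str

    def __post_init__(self):
        # Reject anything outside the three-way label scheme.
        if self.label not in NLI_LABELS:
            raise ValueError(f"unknown label: {self.label!r}")

# A toy pair in the style of SNLI annotations.
ex = NLIExample(
    premise="A man is playing a guitar on stage.",
    hypothesis="A person is performing music.",
    label="entailment",
)
print(ex.label)  # entailment
```

Note that SNLI examples where annotators could not agree carry no gold label and are conventionally filtered out before training.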