nlp:question_answering

Last modified: 2025/05/13 19:46 by jmflanig

===== Overviews =====
  * [[https://arxiv.org/pdf/1809.08267.pdf|Gao et al 2018 - Neural Approaches to Conversational AI]] (contains a chapter on QA)
  * **[[https://arxiv.org/ftp/arxiv/papers/2001/2001.01582.pdf|Baradaran et al 2020 - A Survey on Machine Reading Comprehension Systems]]**
  * [[https://arxiv.org/pdf/2010.00389.pdf|Thayaparan et al 2020 - A Survey on Explainability in Machine Reading Comprehension]]
  * **[[https://arxiv.org/pdf/2107.12708.pdf|Rogers et al 2022 - QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension]]** [[https://dl.acm.org/doi/pdf/10.1145/3560260|ACM version (better)]]
===== Demos =====
  * [[https://demo.allennlp.org/reading-comprehension/transformer-qa|AllenNLP - RoBERTa QA Model Online Demo]]
  
===== Key Papers =====
  * Early papers
    * [[https://aclanthology.org/W00-0603.pdf|Riloff & Thelen 2000 - A Rule-based Question Answering System for Reading Comprehension Tests]] (cited by the SQuAD 1.0 paper)
  * [[https://arxiv.org/pdf/1606.02858v2.pdf|Chen et al 2016 - A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task]]
  * [[https://arxiv.org/pdf/1606.05250.pdf|Rajpurkar et al 2016 - SQuAD: 100,000+ Questions for Machine Comprehension of Text]]
  * [[https://arxiv.org/pdf/1611.01603.pdf|Seo et al 2017 - Bidirectional Attention Flow for Machine Comprehension]] (BiDAF model)
  * [[https://arxiv.org/pdf/1806.03822.pdf|Rajpurkar et al 2018 - Know What You Don't Know: Unanswerable Questions for SQuAD]] (SQuAD 2.0 paper)
  * [[https://aclanthology.org/2020.emnlp-main.550.pdf|Karpukhin et al 2020 - Dense Passage Retrieval for Open-Domain Question Answering]]
  * [[https://arxiv.org/pdf/2404.06283|Basmov et al 2024 - LLMs’ Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements]]

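Rule-based reading-comprehension systems like Riloff & Thelen 2000 score each passage sentence by how many words it shares with the question. A minimal self-contained sketch of that word-overlap idea (an illustrative toy with a hypothetical stopword list, not their Quarc system):

```python
# Toy word-overlap sentence scoring for reading-comprehension QA, in the
# spirit of rule-based systems such as Riloff & Thelen 2000. The stopword
# list and scoring are illustrative assumptions, not the original rules.

STOPWORDS = {"the", "a", "an", "is", "was", "of", "in", "on",
             "did", "who", "what", "when", "where", "why", "how"}

def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

def content_words(text):
    return {w for w in tokenize(text) if w and w not in STOPWORDS}

def best_sentence(question, passage):
    """Return the passage sentence sharing the most content words with the question."""
    q = content_words(question)
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q & content_words(s)))

passage = ("Marie Curie discovered radium in 1898. "
           "She was born in Warsaw. "
           "She won two Nobel Prizes.")
print(best_sentence("Who discovered radium?", passage))
# → Marie Curie discovered radium in 1898
```

Word-match heuristics like this were the pre-neural baseline that datasets such as SQuAD (above) were later shown to require more than.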
====== Topics ======
  
===== General QA Papers =====
  * [[https://arxiv.org/pdf/1601.01705.pdf|Andreas et al 2016 - Learning to Compose Neural Networks for Question Answering]]
  * [[https://arxiv.org/pdf/2404.06283|Basmov et al 2024 - LLMs’ Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements]]
  
===== Explanation And Implicit Reasoning Papers =====
  * [[https://arxiv.org/pdf/2101.02235.pdf|Geva et al 2021 - Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies]]
  * [[https://arxiv.org/pdf/2104.08661.pdf|Dalvi et al 2021 - Explaining Answers with Entailment Trees]]

===== QA with Attribution =====
  * [[https://arxiv.org/pdf/2212.08037|Bohnet et al 2022 - Attributed Question Answering: Evaluation and Modeling for Attributed Large Language Models]]

===== Robust Question Answering =====
  * [[https://arxiv.org/pdf/2004.14648.pdf|Chen & Durrett 2020 - Robust Question Answering Through Sub-part Alignment]]
  
===== Open-Domain Question Answering =====
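Dense passage retrieval (Karpukhin et al 2020, listed above) encodes questions and passages into dense vectors with two trained BERT encoders and ranks passages by inner product. A toy sketch of just the ranking step, with hand-made 3-d vectors standing in for the encoders:

```python
# Toy sketch of the retrieval step in dense passage retrieval (DPR,
# Karpukhin et al 2020). Real DPR uses two trained BERT encoders; the
# tiny hand-made vectors below are placeholders for encoder outputs.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical passage "embeddings" (would come from the passage encoder).
passage_index = {
    "SQuAD is a reading comprehension dataset.":  [0.9, 0.1, 0.0],
    "BiDAF uses bidirectional attention flow.":   [0.1, 0.8, 0.2],
    "DPR retrieves passages with dense vectors.": [0.2, 0.1, 0.9],
}

def retrieve(question_vec, index, k=1):
    """Return the top-k passages ranked by inner-product similarity."""
    ranked = sorted(index, key=lambda p: dot(question_vec, index[p]), reverse=True)
    return ranked[:k]

# Question embedding (would come from the question encoder).
q = [0.1, 0.2, 0.95]
print(retrieve(q, passage_index))
# → ['DPR retrieves passages with dense vectors.']
```

At scale this exhaustive ranking is replaced by an approximate nearest-neighbor index (the DPR paper uses FAISS), but the inner-product ranking is the same.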
