Revision: nlp:prompting_and_task_descriptions [2022/07/29 08:37] – [Prompting Language Models] jmflanig
Revision: nlp:prompting_and_task_descriptions [2022/09/16 18:19] (current) – removed – jmflanig

====== Prompting and Task Descriptions ======
This page covers natural language task descriptions and prompting of language models. Both methods use natural language to describe the task to be performed.

===== Overviews =====
  * [[https://arxiv.org/pdf/2107.13586.pdf|Liu et al 2021 - Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing]]

===== Prompting Language Models =====
  * Zero-shot
    * [[https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf|Radford et al 2019 - Language Models Are Unsupervised Multitask Learners]] GPT-2
    * [[https://arxiv.org/pdf/2109.01652.pdf|Wei et al 2021 - Finetuned Language Models Are Zero-Shot Learners]]
  * Few-shot, a.k.a. in-context learning
    * [[https://arxiv.org/pdf/2009.07118.pdf|Schick & Schütze 2020 - It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners]]
    * **[[https://arxiv.org/pdf/2001.07676.pdf|Schick & Schütze 2021 - Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference]]** Introduces PET; pre-dates GPT-3
    * [[https://arxiv.org/pdf/2005.14165.pdf|Brown et al 2020 - Language Models are Few-Shot Learners]] GPT-3
    * [[https://arxiv.org/pdf/2012.15723.pdf|Gao et al 2021 - Making Pre-trained Language Models Better Few-shot Learners]]
    * [[https://arxiv.org/pdf/2012.11926.pdf|Schick & Schütze 2020 - Few-Shot Text Generation with Natural Language Instructions]] GenPET, prompting for natural language generation
  * Soft prompting
    * [[https://arxiv.org/pdf/2104.06599.pdf|Qin & Eisner 2021 - Learning How to Ask: Querying LMs with Mixtures of Soft Prompts]]
    * [[https://arxiv.org/pdf/2201.08670.pdf|Tang et al 2022 - Context-Tuning: Learning Contextualized Prompts for Natural Language Generation]]
  * Prompt design
    * [[https://aclanthology.org/2022.findings-acl.50.pdf|Mishra et al 2022 - Reframing Instructional Prompts to GPTk's Language]]
  * Data-augmentation prompting
    * [[https://arxiv.org/pdf/2202.12499.pdf|Wang et al 2022 - PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks]]
  * Chain-of-thought prompting
    * [[https://arxiv.org/pdf/2201.11903.pdf|Wei et al 2022 - Chain of Thought Prompting Elicits Reasoning in Large Language Models]]
    * [[https://arxiv.org/pdf/2203.11171.pdf|Wang et al 2022 - Self-Consistency Improves Chain of Thought Reasoning in Language Models]]
  * More papers
    * [[https://arxiv.org/pdf/2103.08493.pdf|Scao & Rush 2021 - How Many Data Points is a Prompt Worth?]] Prompts are most helpful in small-data regimes, where a prompt can be worth hundreds of labelled data points.
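The few-shot and chain-of-thought styles listed above both come down to assembling a text prompt before querying a model. A minimal sketch of the two prompt formats follows; the arithmetic exemplars are invented for illustration and are not taken from any of the papers:

```python
# Minimal sketch of two prompt formats from the list above. The task and
# exemplars are invented; a real system would send the returned string to
# a language model and parse its completion.

def few_shot_prompt(exemplars, query):
    """In-context learning: prepend solved (question, answer) pairs to the query."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in exemplars]
    blocks.append(f"Q: {query}\nA:")
    return "\n\n".join(blocks)

def chain_of_thought_prompt(exemplars, query):
    """Chain of thought: exemplar answers spell out intermediate reasoning."""
    blocks = [
        f"Q: {q}\nA: {reasoning} So the answer is {a}."
        for q, reasoning, a in exemplars
    ]
    blocks.append(f"Q: {query}\nA:")
    return "\n\n".join(blocks)

prompt = chain_of_thought_prompt(
    [("Roger has 5 balls and buys 2 more. How many does he have?",
      "He starts with 5 and adds 2, giving 5 + 2 = 7.", "7")],
    "A shelf holds 3 books and gains 4 more. How many books?",
)
print(prompt)
```

Self-consistency (Wang et al 2022, above) builds on the chain-of-thought format by sampling several reasoning chains for the same prompt and taking a majority vote over the final answers.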

==== Retrieval-Based Methods ====
  * [[https://arxiv.org/pdf/2203.08773.pdf|Wang et al 2022 - Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data]]
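The idea above — retrieving training examples related to the query and using them as context — can be sketched with a toy similarity function. The word-overlap scoring and the example data below are invented for illustration, not the method of any specific paper:

```python
# Hedged sketch of retrieval-based prompting: select the training examples
# most similar to the query (here, by simple word overlap) and use them as
# in-context exemplars. Toy data and scoring are invented for illustration.

def retrieve(train, query, k=2):
    """Return the k training pairs whose inputs share the most words with query."""
    q_words = set(query.lower().split())
    scored = sorted(
        train,
        key=lambda pair: len(q_words & set(pair[0].lower().split())),
        reverse=True,
    )
    return scored[:k]

def retrieval_prompt(train, query, k=2):
    """Build a prompt from the retrieved exemplars plus the unanswered query."""
    exemplars = retrieve(train, query, k)
    parts = [f"Input: {x}\nOutput: {y}" for x, y in exemplars]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

train = [
    ("translate hello to french", "bonjour"),
    ("translate goodbye to french", "au revoir"),
    ("sum of 2 and 3", "5"),
]
print(retrieval_prompt(train, "translate thanks to french"))
```

In practice the word-overlap score would be replaced by a dense retriever or BM25, but the prompt-assembly step stays the same.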

===== Natural Language Task Descriptions =====
  * [[https://link.springer.com/content/pdf/10.1007/s10994-013-5407-y.pdf|Goldwasser & Roth 2014 - Learning From Natural Instructions]]
  * [[https://arxiv.org/pdf/2104.08773.pdf|Mishra et al 2021 - Natural-Instructions: Benchmarking Generalization to New Tasks from Natural Language Instructions]]

===== Talks and Lectures =====
  * [[https://underline.io/events/122/sessions?eventSessionId=4313|Invited Talk @ NAACL 2021: Humans Learn From Task Descriptions and So Should Our Models - Hinrich Schütze]]

===== People =====
  * [[https://scholar.google.com/citations?user=k8CKy5UAAAAJ&hl=en|Timo Schick]]

===== Related Pages =====
  * [[ml:Few-Shot Learning]]
  * [[ml:Zero-Shot Learning]]
  
nlp/prompting_and_task_descriptions.1659083827.txt.gz · Last modified: 2023/06/15 07:36 (external edit)
