Table of Contents

Prompting and In-Context Learning

Overviews

Prompting Language Models

Zero-shot

Few-shot aka In-Context Learning

Many-Shot In-Context Learning

Prompting with a long context containing many demonstrations.
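A minimal sketch (not from any of the listed papers) of how a many-shot in-context prompt is typically assembled: labeled demonstrations are concatenated into the context, followed by the query. The function name and prompt template are illustrative assumptions.

```python
# Hedged sketch: build a many-shot in-context prompt from labeled
# (input, label) demonstrations; template format is an assumption.
def build_prompt(examples, query):
    """Concatenate demonstrations, then append the unlabeled query."""
    shots = "\n\n".join(f"Input: {x}\nLabel: {y}" for x, y in examples)
    return f"{shots}\n\nInput: {query}\nLabel:"

examples = [("great movie", "positive"), ("dull plot", "negative")]
prompt = build_prompt(examples, "loved the acting")
```

In the many-shot setting, `examples` may contain hundreds or thousands of demonstrations, limited mainly by the model's context window.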

Soft-Prompting, etc.

Prompt tuning can converge more slowly than fine-tuning. See the figure below.

Figure from Su et al. 2022. See also figures 6-8 from Ding et al. 2022.
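The core idea behind soft prompting can be sketched as follows: a small matrix of trainable "virtual token" embeddings is prepended to the frozen model's input embeddings, and only that matrix is updated during prompt tuning. The dimensions and names below are illustrative assumptions, not from any specific paper.

```python
import numpy as np

# Hedged sketch of soft prompting: trainable virtual-token embeddings
# are prepended to frozen token embeddings. Shapes are illustrative.
rng = np.random.default_rng(0)
d_model, n_virtual, n_tokens = 8, 4, 6

soft_prompt = rng.normal(size=(n_virtual, d_model))   # trainable parameters
token_embeds = rng.normal(size=(n_tokens, d_model))   # frozen model embeddings

# The model consumes the concatenated sequence; gradients would flow
# only into soft_prompt during training.
inputs = np.concatenate([soft_prompt, token_embeds], axis=0)
print(inputs.shape)  # (10, 8): virtual tokens followed by real tokens
```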

Prompt Design / Prompt Engineering

See Prompt Engineering.

Calibration and Scoring

Data-Augmentation Prompting

Chain of Thought Prompting

See also Reasoning - Reasoning Chains.

Cross-lingual Prompting

Miscellaneous Prompting Papers

Chained or Tool-based Prompting

For an overview, see Tool Learning Papers.

Prompt Compression

Retrieval-Based Methods (Retrieval-Augmented)

See Retrieval-Augmented Methods.

Data Contamination Issues

See also Membership Inference.

Dependence on Number of Examples

Comparison to Fine-Tuning

Analysis of In-Context-Learning

Datasets

Software

Talks and Lectures

People