====== Distribution Shift ======
Also known as **covariate shift** (a shift in the distribution of the input variables between training and testing); see [[https://jmlr.csail.mit.edu/papers/volume10/bickel09a/bickel09a.pdf|Bickel 2009]].
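Bickel et al.'s discriminative framing corrects covariate shift by reweighting training examples with the density ratio p_test(x)/p_train(x). A minimal sketch of that reweighting idea on synthetic Gaussian data (the distributions, sample sizes, and closed-form ratio here are illustrative assumptions, not taken from the paper, which instead //estimates// the ratio with a classifier trained to separate train from test inputs):

```python
import numpy as np

# Toy covariate shift: training inputs x ~ N(0, 1), test inputs x ~ N(1, 1),
# while the quantity of interest f(x) is the same everywhere. Importance
# weighting with w(x) = p_test(x) / p_train(x) moves training-set averages
# toward the test distribution.
rng = np.random.default_rng(0)

def f(x):
    # Stand-in for any per-example quantity (e.g. a loss) whose
    # expectation we want under the *test* distribution.
    return x ** 2

x_train = rng.normal(0.0, 1.0, 100_000)
x_test = rng.normal(1.0, 1.0, 100_000)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Density ratio, known in closed form for this toy setup.
w = normal_pdf(x_train, 1.0, 1.0) / normal_pdf(x_train, 0.0, 1.0)

naive = f(x_train).mean()                        # ignores the shift; ~ E_train[x^2] = 1
reweighted = np.average(f(x_train), weights=w)   # ~ E_test[x^2] = 1^2 + 1 = 2
target = f(x_test).mean()                        # what we actually wanted
```

`np.average` normalizes the weights, i.e. this is self-normalized importance sampling; the reweighted training estimate lands near the test value while the naive average stays at the training value.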
  
===== Papers =====
  * [[https://jmlr.csail.mit.edu/papers/volume10/bickel09a/bickel09a.pdf|Bickel et al 2009 - Discriminative Learning Under Covariate Shift]]
  * [[https://arxiv.org/pdf/1810.08750.pdf|Duchi 2018 - Learning Models with Uniform Performance via Distributionally Robust Optimization]]
  * [[https://arxiv.org/pdf/2007.13982.pdf|Duchi et al 2020 - Distributionally Robust Losses for Latent Covariate Mixtures]]
  * [[https://proceedings.mlr.press/v162/zhou22d/zhou22d.pdf|Zhou et al 2022 - Model Agnostic Sample Reweighting for Out-of-Distribution Learning]]

==== NLP ====
  * [[https://arxiv.org/pdf/2103.10282.pdf|Michel et al 2021 - Modeling the Second Player in Distributionally Robust Optimization]]
  * [[https://arxiv.org/pdf/2109.01558.pdf|Michel 2021 - Learning Neural Models for Natural Language Processing in the Face of Distributional Shift]] (PhD thesis)
  * [[https://arxiv.org/pdf/2204.06340.pdf|Michel 2022 - Distributionally Robust Models with Parametric Likelihood Ratios]]

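Several of the papers above use distributionally robust optimization (DRO), which minimizes the worst-case expected loss over a family of plausible test distributions rather than the average training loss. A toy sketch of the objective being evaluated, assuming the family is the set of mixtures over a few known subpopulations (the loss values and group labels are made up for illustration, not drawn from any of the papers):

```python
import numpy as np

# Per-example losses and hypothetical subpopulation (group) labels.
per_example_loss = np.array([0.2, 0.3, 1.5, 1.4, 0.1, 0.2])
group = np.array([0, 0, 1, 1, 2, 2])

# Standard ERM objective: average loss over the empirical distribution.
avg_loss = per_example_loss.mean()

# Robust objective: worst case over all mixtures of the three groups.
# The maximizing mixture puts all its mass on the hardest group, so the
# robust loss is simply the worst group's average loss.
group_losses = np.array([per_example_loss[group == g].mean() for g in range(3)])
robust_loss = group_losses.max()
```

Training against `robust_loss` instead of `avg_loss` is what yields the "uniform performance" guarantees the Duchi papers aim for; the latent-mixture and parametric-likelihood-ratio papers relax the assumption that the groups are observed.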
===== Distribution Shift / Out-of-Domain Detection =====
  * **In NLP**
    * [[https://aclanthology.org/2023.acl-long.717.pdf|Uppaal et al 2023 - Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection]]

===== Datasets =====
  * WILDS ([[https://arxiv.org/pdf/2012.07421.pdf|paper]]): a benchmark of in-the-wild distribution shifts

===== People =====
  * [[https://scholar.google.com/citations?user=oyyIf0YAAAAJ&hl=en|Paul Michel]]; see his [[https://arxiv.org/pdf/2109.01558.pdf|2021 thesis]]
  * [[https://scholar.google.com/citations?user=Nn990CkAAAAJ&hl=en|Pang Wei Koh]]
  * [[https://scholar.google.com/citations?user=pouyVyUAAAAJ&hl=en|Percy Liang]]
  * [[https://scholar.google.com/citations?user=5ygiTwsAAAAJ&hl=en|Tatsunori Hashimoto]]
  
===== Related Pages =====
  * [[Fairness]] (methods that are not robust to distribution shift may not be fair across populations)
  * [[nlp:Robustness in NLP]]
  