ml:theory:generalization_in_deep_learning

Last modified: 2025/05/29 07:00 by jmflanig
The theory of generalization in deep learning is not well understood, and is an active area of research.
  
===== Overviews =====
  * [[https://lilianweng.github.io/lil-log/2019/03/14/are-deep-neural-networks-dramatically-overfitted.html|Lil'log - Are Deep Neural Networks Dramatically Overfitted?]] Good summary from 2019.
  * **Overview Papers**
    * [[https://arxiv.org/pdf/2012.10931|He and Tao 2022 - Recent Advances in Deep Learning Theory]]
  * **Textbooks**
    * **[[https://arxiv.org/pdf/2106.10165.pdf|Roberts & Yaida 2021 - The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks]]**
  
===== Key Papers =====
  * **[[https://arxiv.org/pdf/1706.05394.pdf|Arpit et al 2017 - A Closer Look at Memorization in Deep Networks]]**
  * [[https://arxiv.org/pdf/1810.08591.pdf|Neal et al 2018 - A Modern Take on the Bias-Variance Tradeoff in Neural Networks]]
  * [[https://openreview.net/pdf?id=ry_WPG-A-|Saxe et al 2018 - On the Information Bottleneck Theory of Deep Learning]]
  * [[https://arxiv.org/pdf/2002.11448.pdf|Unterthiner et al 2020 - Predicting Neural Network Accuracy from Weights]]
  * [[https://arxiv.org/pdf/2006.07522.pdf|Raj et al 2020 - Understanding Learning Dynamics of Binary Neural Networks via Information Bottleneck]]
i.e. for a fixed model, training the model with more data can hurt test performance.
</blockquote>
  * [[https://arxiv.org/pdf/1812.11118.pdf|Belkin et al 2018 - Reconciling modern machine learning practice and the bias-variance trade-off]] Introduced the concept of double descent. Note: they use squared-error loss, not cross-entropy! (Does this affect the results?)
  * **[[https://arxiv.org/pdf/1912.02292.pdf|Nakkiran et al 2019 - Deep Double Descent: Where Bigger Models and More Data Hurt]]** Has more convincing experiments than the original Belkin et al 2018 paper.
  * [[https://arxiv.org/pdf/1912.13053.pdf|Xiao et al 2019 - Disentangling Trainability and Generalization in Deep Neural Networks]]
  * [[https://arxiv.org/pdf/2003.01897.pdf|Nakkiran et al 2020 - Optimal Regularization Can Mitigate Double Descent]]
  * **[[https://arxiv.org/pdf/2004.04328.pdf|Loog et al 2020 - A Brief Prehistory of Double Descent]]**
  * [[https://arxiv.org/pdf/2007.10099.pdf|Heckel & Yilmaz 2020 - Early Stopping in Deep Networks: Double Descent and How to Eliminate it]]
  * [[https://arxiv.org/pdf/2011.03321.pdf|Adlam & Pennington 2020 - Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition]]
  * **[[https://arxiv.org/pdf/2107.12685.pdf|Kuzborskij et al 2021 - On the Role of Optimization in Double Descent: A Least Squares Study]]** Shows that for least squares, double descent is related to the condition number of the underlying optimization problem.
  * **Theory Papers**
    * **[[https://arxiv.org/pdf/1911.05822.pdf|Deng et al 2019 - A Model of Double Descent for High-dimensional Binary Linear Classification]]**
    * **[[https://arxiv.org/pdf/2205.15549.pdf|Lee & Cherkassky 2022 - VC Theoretical Explanation of Double Descent]]** See page 3, the two settings of controlling VC dimension. Under their setup (one hidden layer), "during second descent, the norm of weights in the output layer can be used to approximate the VC-dimension of a neural network." Note that this only happens during second descent.
  
==== Grokking ====
  * [[https://arxiv.org/pdf/2201.02177.pdf|Power et al 2022 - Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets]]
  * [[https://arxiv.org/pdf/2505.20896|Wu et al 2025 - How Do Transformers Learn Variable Binding in Symbolic Programs?]]: "We find that the model’s final solution builds upon, rather than replaces, the heuristics learned in earlier phases. This adds nuance to the traditional narrative about “grokking”, where models are thought to discard superficial heuristics in favor of more systematic solutions. Instead, our model maintains its early-line heuristics while developing additional mechanisms to handle cases where these heuristics fail, suggesting cumulative learning where sophisticated capabilities emerge by augmenting simpler strategies."
  
===== Related Pages =====
  * [[ml:Regularization]]