Generalization in Deep Learning
The theory of generalization in deep learning is not well understood and remains an active area of research.
Overviews
- Lil'log - Are Deep Neural Networks Dramatically Overfitted? Good summary from 2019.
- Overview Papers
- Textbooks
Key Papers
- Frankle & Carbin 2019 - The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks After training, only a small subnetwork of the edges matters. Prune the rest (keep only the edges with the largest-magnitude trained weights), reset the surviving edges to their original initialization, and this sparse subnetwork can be trained in isolation to match the full network's accuracy.
- Shwartz-Ziv & Tishby 2017 - Opening the Black Box of Deep Neural Networks via Information Shows that “neural networks get trained in two phases: i) empirical risk minimization (ERM) phase where the stochastic gradient descent (SGD) algorithm generates high SNR gradients, the loss rapidly decreases and ii) a much slower representation compression (RC) phase where efficient representations for intermediate activations are learned.” (from Raj 2020)
- Bubeck & Sellke 2021 - A Universal Law of Robustness via Isoperimetry Best paper at NeurIPS 2021
- Lee & Cherkassky 2022 - VC Theoretical Explanation of Double Descent Under their setting (one hidden layer), “during second descent, the norm of weights in the output layer can be used to approximate the VC-dimension of a neural network.” Note that this only happens during second descent.
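The pruning step behind the lottery ticket hypothesis can be sketched in a few lines of NumPy. This is a simplified one-shot illustration (Frankle & Carbin's actual procedure prunes iteratively and retrains); `lottery_ticket_mask` and the stand-in "trained" weights are hypothetical names for illustration only:

```python
import numpy as np

def lottery_ticket_mask(trained_w, init_w, sparsity=0.8):
    """Keep the largest-magnitude trained weights; reset survivors to init.

    trained_w, init_w: arrays of the same shape (weights after/before training).
    sparsity: fraction of weights to prune away.
    Returns (mask, ticket): the binary pruning mask and the surviving weights
    reset to their original initialization, ready to be retrained from scratch.
    """
    flat = np.abs(trained_w).ravel()
    k = int(sparsity * flat.size)                 # number of weights to drop
    threshold = np.partition(flat, k)[k]          # k-th smallest magnitude
    mask = (np.abs(trained_w) >= threshold).astype(trained_w.dtype)
    return mask, mask * init_w                    # the "winning ticket" at init

rng = np.random.default_rng(0)
init_w = rng.normal(size=(100, 100))
trained_w = init_w + rng.normal(scale=0.5, size=(100, 100))  # stand-in for training
mask, ticket = lottery_ticket_mask(trained_w, init_w, sparsity=0.8)
print(mask.mean())  # fraction of weights kept: 0.2
```

The key point of the paper is the reset step: the winning ticket only trains well from the *same* initialization, not from a fresh random one.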
Sample Complexity Bounds
- Aarti Singh's work: 2019 - Sample-complexity of Estimating Convolutional and Recurrent Neural Networks (video)
- Du et al 2019 - How Many Samples are Needed to Estimate a Convolutional or Recurrent Neural Network? Shows an O(1/sqrt(n)) rate for the estimation error, along with a matching lower bound.
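The O(1/sqrt(n)) parametric rate is easy to see in a toy setting. The sketch below uses the empirical mean of Gaussian samples rather than the paper's CNN/RNN estimators, so it only illustrates the scaling itself, not their result:

```python
import numpy as np

def rmse_of_mean(n, trials=5000):
    """Root-mean-squared error of the empirical mean of n N(0,1) samples.

    Theory: the RMSE is exactly 1/sqrt(n), the classic parametric rate.
    """
    rng = np.random.default_rng(0)
    errs = rng.normal(size=(trials, n)).mean(axis=1)  # true mean is 0
    return np.sqrt((errs ** 2).mean())

r100, r10000 = rmse_of_mean(100), rmse_of_mean(10000)
print(r100 / r10000)  # ≈ sqrt(10000 / 100) = 10: a 100x larger sample, 10x smaller error
```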
Phenomena
Double Descent
From Nakkiran et al 2020 - Optimal Regularization Can Mitigate Double Descent:
Recent works have demonstrated a ubiquitous “double descent” phenomenon present in a range of machine learning models, including decision trees, random features, linear regression, and deep neural networks (Opper, 1995, 2001; Advani & Saxe, 2017; Spigler et al., 2018; Belkin et al., 2018; Geiger et al., 2019b; Nakkiran et al., 2020; Belkin et al., 2019; Hastie et al., 2019; Bartlett et al., 2019; Muthukumar et al., 2019; Bibas et al., 2019; Mitra, 2019; Mei & Montanari, 2019; Liang & Rakhlin, 2018; Liang et al., 2019; Xu & Hsu, 2019; Dereziński et al., 2019; Lampinen & Ganguli, 2018; Deng et al., 2019; Nakkiran, 2019). The phenomenon is that models exhibit a peak of high test risk when they are just barely able to fit the train set, that is, to interpolate. For example, as we increase the size of models, test risk first decreases, then increases to a peak around when effective model size is close to the training data size, and then decreases again in the overparameterized regime. Also surprising is that Nakkiran et al. (2020) observe a double descent as we increase sample size, i.e. for a fixed model, training the model with more data can hurt test performance.
- Belkin et al 2018 - Reconciling modern machine learning practice and the bias-variance trade-off Introduced the concept of double descent. Note: they use squared-error loss, not cross-entropy! (Does this affect the results?)
- Nakkiran et al 2019 - Deep Double Descent: Where Bigger Models and More Data Hurt Has more convincing experiments than the original paper of Belkin 2018.
- Kuzborskij et al 2021 - On the Role of Optimization in Double Descent: A Least Squares Study Shows that for least squares, double descent is related to the condition number of the underlying optimization problem.
- Theory Papers
- Lee & Cherkassky 2022 - VC Theoretical Explanation of Double Descent See page 3, the two settings of controlling VC dimension. Under their setup (one hidden layer), “during second descent, the norm of weights in the output layer can be used to approximate the VC-dimension of a neural network.” Note that this only happens during second descent.
Grokking
- Wu et al 2025 - How Do Transformers Learn Variable Binding in Symbolic Programs?: “We find that the model’s final solution builds upon, rather than replaces, the heuristics learned in earlier phases. This adds nuance to the traditional narrative about “grokking”, where models are thought to discard superficial heuristics in favor of more systematic solutions. Instead, our model maintains its early-line heuristics while developing additional mechanisms to handle cases where these heuristics fail, suggesting cumulative learning where sophisticated capabilities emerge by augmenting simpler strategies.”