ml:infinite_neural_networks
  * Neural Tangent Kernel
    * [[https://
    * [[https://
    * [[https://
  * [[https://
  * **[[https://
  * [[https://
  * [[https://
  * [[https://
  * **[[https://
===== Notes =====
Jeff's thoughts: Although the objective functions used to train finite neural networks are usually non-convex, for networks with an infinite number of hidden units (infinitely wide networks) they are usually convex. This is because the space of infinite neural networks is linear: any infinite NN is just a linear combination of the hidden units indexed by all possible parameter values, so training only has to pick the combination weights, which is a convex problem.
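A minimal numpy sketch of this point (the toy task, widths, and regularization constant are all illustrative choices, not from the page): approximating a very wide network by a large number of randomly sampled, frozen hidden units, and then training only the output layer, reduces learning to ridge least squares, which is convex and has a closed-form solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task (illustrative).
X = np.linspace(-1.0, 1.0, 50)[:, None]
y = np.sin(3.0 * X[:, 0])

# Approximate an "infinitely wide" one-hidden-layer network by sampling
# many hidden units with frozen random parameters (w_k, b_k); the network
# is then a linear combination of these fixed basis functions.
width = 4096
W = rng.normal(size=(1, width))
b = rng.normal(size=width)
Phi = np.maximum(X @ W + b, 0.0) / np.sqrt(width)  # ReLU features

# Fitting the output-layer weights is ridge regression: a convex
# quadratic objective with a closed-form minimizer.
lam = 1e-6
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(width), Phi.T @ y)

pred = Phi @ a
mse = float(np.mean((pred - y) ** 2))
print(mse)
```

Because the basis functions are frozen, the only trainable object is the linear combination over them, mirroring the claim that the infinite-width function space is linear even though each individual unit is a nonlinear function of its parameters.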
ml/infinite_neural_networks.1662142221.txt.gz · Last modified: 2023/06/15 07:36 (external edit)