Welcome
Useful mathematical facts
Technicalities: Kronecker product of matrices and the vectorization operation
1. Review of statistics
2. Statistical Mechanics for the statistician
3. A first encounter with ML and the supervised setting
4. Bayesian Supervised Learning
4.1. Bayesian Linear Regression
4.2. Intermezzo: parametric vs non-parametric Bayesian models
4.3. Gaussian Process Regression
5. Infinite-width limit of Neural Networks
5.1. NTK regime
5.2. NNGP regime
6. Neural Networks in the proportional limit and renormalized theories
7. Learning algorithms and optimization methods
8. Sampling theory and algorithms
Old chapters: deep learning theory
9. Topics in Bayesian Supervised Learning
9.1. Conventions and notation
9.2. Bayesian Linear Regression
9.3. Intermezzo: parametric vs non-parametric Bayesian models
9.4. Gaussian Process Regression
9.5. Infinite-width limit for Bayesian FC NN
9.6. Bayesian Deep Linear NN
10. First version of the calculations
10.1. Neural Networks and Gaussian Processes correspondence
10.2. Gradient Descent dynamics and Neural Tangent Kernel
10.3. Bayesian Neural Networks
10.4. Partition function of 1 HL FC NN (single output)
10.5. Extension for finite-mean activation functions
10.6. Single-variable expression for \(S\) in the case of zero-mean activation functions
5.2. NNGP regime