Large width limits of neural networks

Artificial neural networks are a class of machine learning models inspired by biological neural networks. They are the core component of modern deep learning algorithms.

Theoretical analysis of artificial neural networks sometimes considers the limiting case in which layer width becomes large or infinite. This limit enables simple analytic statements to be made about neural network predictions, training dynamics, generalization, and loss surfaces. The wide layer limit is also of practical interest, since finite width neural networks often perform strictly better as layer width is increased.

Figure: The behavior of a neural network simplifies as it becomes infinitely wide. Left: a Bayesian neural network with two hidden layers, transforming a 3-dimensional input (bottom) into a two-dimensional output (top). Right: the output probability density function induced by the random weights of the network. As the width of the network increases, the output distribution simplifies, ultimately converging to a Neural network Gaussian process in the infinite width limit.
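The convergence to a Gaussian output distribution can be checked empirically. The following is a minimal sketch, assuming a one-hidden-layer fully connected network (rather than the two hidden layers of the figure) with i.i.d. Gaussian weights scaled by 1/sqrt(fan-in), tanh activations, and a scalar output; these architectural choices are illustrative assumptions, not taken from the source. It samples the output of many randomly initialized networks at a fixed input and reports the excess kurtosis, which approaches zero (the Gaussian value) as the width grows.

```python
# Minimal sketch: output distribution of a random one-hidden-layer
# network approaches a Gaussian as the hidden layer widens.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)

def random_network_output(x, width, n_samples):
    """Sample scalar outputs of networks with i.i.d. Gaussian weights.

    Weights are scaled by 1/sqrt(fan_in) so that preactivation variance
    stays O(1) as the width increases.
    """
    d_in = x.shape[0]
    outputs = np.empty(n_samples)
    for i in range(n_samples):
        W1 = rng.normal(size=(width, d_in)) / np.sqrt(d_in)
        W2 = rng.normal(size=(1, width)) / np.sqrt(width)
        outputs[i] = (W2 @ np.tanh(W1 @ x)).item()
    return outputs

x = np.array([1.0, -0.5, 2.0])  # fixed 3-dimensional input, as in the figure
for width in [1, 10, 100, 1000]:
    out = random_network_output(x, width, n_samples=10_000)
    # Excess kurtosis of a Gaussian is 0; deviations shrink with width.
    print(f"width={width:5d}  excess kurtosis={kurtosis(out):+.3f}")
```

The 1/sqrt(fan-in) weight scaling is what makes the limit nondegenerate: the output is a sum of `width` roughly independent terms of size O(1/sqrt(width)), so a central limit theorem argument drives the distribution toward a Gaussian.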