Hidden layer

In artificial neural networks, a hidden layer is a layer of artificial neurons that is neither an input layer nor an output layer.

The simplest examples appear in multilayer perceptrons (MLPs), as illustrated in the diagram.

An MLP without any hidden layer is essentially just a linear model.[1]
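To see why, note that composing affine layers without an activation function between them collapses into a single affine map. The following NumPy sketch checks this numerically; the layer sizes and random weights are arbitrary, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stacked linear ("hidden") layers with no activation function in between.
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

x = rng.standard_normal(3)

# Composing the two layers...
deep_output = W2 @ (W1 @ x + b1) + b2

# ...is exactly equivalent to a single affine map W x + b.
W = W2 @ W1
b = W2 @ b1 + b2
shallow_output = W @ x + b

assert np.allclose(deep_output, shallow_output)
```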

With hidden layers and nonlinear activation functions, however, nonlinearity is introduced into the model, allowing it to represent functions that no linear model can.
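A classic concrete case is the XOR function, which no linear model can compute but which a single hidden layer with a ReLU activation handles. The weights below are hand-chosen illustrative values, not learned ones.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hand-chosen weights for a one-hidden-layer network that computes XOR
# (illustrative values only).
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])
b2 = 0.0

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    h = relu(W1 @ np.array(x, dtype=float) + b1)  # hidden layer with activation
    y = W2 @ h + b2                               # output layer
    print(x, "->", y)                             # outputs: 0.0, 1.0, 1.0, 0.0
```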

In typical machine learning practice, the weights and biases are initialized, then iteratively updated during training via backpropagation.[1]
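The sketch below illustrates such a training loop on a toy regression problem: parameters are randomly initialized, gradients are computed layer by layer with the chain rule (backpropagation), and all weights and biases are updated by gradient descent. The data, hidden-layer size, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical): learn y = x^2 on a few points.
X = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
Y = X ** 2

H = 8                                    # number of hidden units (arbitrary choice)
W1 = rng.standard_normal((H, 1)) * 0.5   # initialize weights and biases
b1 = np.zeros(H)
W2 = rng.standard_normal((1, H)) * 0.5
b2 = np.zeros(1)
lr = 0.1

for step in range(2000):
    # Forward pass through one hidden layer with a tanh activation.
    Z1 = X @ W1.T + b1          # (N, H) pre-activations
    A1 = np.tanh(Z1)            # hidden activations
    Yhat = A1 @ W2.T + b2       # (N, 1) predictions
    loss = np.mean((Yhat - Y) ** 2)

    # Backpropagation: apply the chain rule layer by layer.
    dYhat = 2.0 * (Yhat - Y) / len(X)    # dL/dYhat
    dW2 = dYhat.T @ A1                   # dL/dW2
    db2 = dYhat.sum(axis=0)
    dA1 = dYhat @ W2                     # propagate error to the hidden layer
    dZ1 = dA1 * (1.0 - A1 ** 2)          # tanh'(z) = 1 - tanh(z)^2
    dW1 = dZ1.T @ X
    db1 = dZ1.sum(axis=0)

    # Gradient-descent update of all weights and biases.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
```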

Example of hidden layers in an MLP.