In deep learning, a multilayer perceptron (MLP) is a modern feedforward neural network consisting of fully connected neurons with nonlinear activation functions, organized in layers, notable for being able to distinguish data that is not linearly separable.[1]
Modern neural networks are trained using backpropagation[2][3][4][5][6] and are colloquially referred to as "vanilla" networks.[7]
MLPs grew out of an effort to improve single-layer perceptrons, which could only be applied to linearly separable data.
A perceptron traditionally used a Heaviside step function as its nonlinear activation function.
However, the backpropagation algorithm requires that modern MLPs use continuous activation functions such as sigmoid or ReLU.[8]
Multilayer perceptrons form the basis of deep learning,[9] and are applicable across a vast set of diverse domains.[10]
If a multilayer perceptron has a linear activation function in all neurons, that is, a linear function that maps the weighted inputs to the output of each neuron, then linear algebra shows that any number of layers can be reduced to a two-layer input-output model.
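For example, composing two linear layers yields just another linear map. The following sketch (illustrative NumPy code with arbitrary weight matrices, not taken from the article) verifies this collapse numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)                                   # arbitrary input vector
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)     # first linear layer
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)     # second linear layer

# A two-layer network with linear (identity) activations...
two_layer = W2 @ (W1 @ x + b1) + b2
# ...equals a single linear layer with combined weights and bias.
collapsed = (W2 @ W1) @ x + (W2 @ b1 + b2)

assert np.allclose(two_layer, collapsed)
```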
In MLPs some neurons use a nonlinear activation function that was developed to model the frequency of action potentials, or firing, of biological neurons.
The two historically common activation functions are both sigmoids, and are described by

$$y(v_i) = \tanh(v_i) \quad \text{and} \quad y(v_i) = (1 + e^{-v_i})^{-1}.$$

The first is a hyperbolic tangent that ranges from −1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here $y_i$ is the output of the $i$th node (neuron) and $v_i$ is the weighted sum of the input connections.
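As a minimal illustration (a Python/NumPy sketch, not part of the original article), the two sigmoidal activations can be written directly from the formulas above:

```python
import numpy as np

def tanh(v):
    """Hyperbolic tangent activation: output ranges from -1 to 1."""
    return np.tanh(v)

def logistic(v):
    """Logistic activation: output ranges from 0 to 1."""
    return 1.0 / (1.0 + np.exp(-v))

v = np.linspace(-4.0, 4.0, 5)   # example weighted sums v_i
print(tanh(v))                  # values lie in (-1, 1)
print(logistic(v))              # values lie in (0, 1)
```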
More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models).
In recent developments of deep learning, the rectified linear unit (ReLU) is more frequently used as one way to overcome the numerical problems related to the sigmoids.
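For instance (an illustrative sketch assuming the standard definitions of ReLU and the logistic function), the derivative of ReLU does not saturate for positive inputs, whereas the logistic derivative shrinks toward zero for large |v|, which is the kind of numerical problem referred to above:

```python
import numpy as np

def relu(v):
    """Rectified linear unit: max(0, v)."""
    return np.maximum(0.0, v)

def relu_prime(v):
    """Derivative of ReLU: 1 for positive inputs, 0 otherwise."""
    return (v > 0).astype(float)

def logistic_prime(v):
    """Derivative of the logistic function: s(v) * (1 - s(v))."""
    s = 1.0 / (1.0 + np.exp(-v))
    return s * (1.0 - s)

v = np.array([-10.0, -1.0, 0.5, 10.0])
print(relu_prime(v))       # [0. 0. 1. 1.]  -- no saturation for v > 0
print(logistic_prime(v))   # near-zero at |v| = 10 -- gradients vanish
```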
Learning occurs in the perceptron by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result.
This is an example of supervised learning, and is carried out through backpropagation, a generalization of the least mean squares algorithm in the linear perceptron.
We can represent the degree of error in an output node $j$ in the $n$th data point (training example) by $e_j(n) = d_j(n) - y_j(n)$, where $d_j(n)$ is the desired target value for the $n$th data point at node $j$, and $y_j(n)$ is the value produced by the perceptron at node $j$ when the $n$th data point is given as an input.
The node weights can then be adjusted based on corrections that minimize the error in the entire output for the $n$th data point, given by

$$\mathcal{E}(n) = \frac{1}{2} \sum_{\text{output node } j} e_j^2(n).$$

Using gradient descent, the change in each weight $w_{ij}$ is

$$\Delta w_{ji}(n) = -\eta \frac{\partial \mathcal{E}(n)}{\partial v_j(n)} \, y_i(n)$$

where $y_i(n)$ is the output of the previous neuron $i$, and $\eta$ is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. Here $\frac{\partial \mathcal{E}(n)}{\partial v_j(n)}$ denotes the partial derivative of the error $\mathcal{E}(n)$ with respect to the weighted sum $v_j(n)$ of the input connections of neuron $j$.
The derivative to be calculated depends on the induced local field $v_j$, which itself varies. It is easy to prove that for an output node this derivative can be simplified to

$$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = e_j(n) \, \phi^{\prime}(v_j(n))$$

where $\phi^{\prime}$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

$$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = \phi^{\prime}(v_j(n)) \sum_k -\frac{\partial \mathcal{E}(n)}{\partial v_k(n)} \, w_{kj}(n).$$

This depends on the change in weights of the $k$th nodes, which represent the output layer.
So, to change the hidden layer weights, the output layer weight changes are propagated back according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function.
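The update rules above can be turned into a short program. The following is a minimal sketch (not code from the article) of a one-hidden-layer MLP trained with these rules on the XOR problem, a classic data set that is not linearly separable. The logistic activation, the hidden-layer size, the learning rate, and the epoch count are illustrative choices, and convergence depends on the random initialization:

```python
import numpy as np

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))

def logistic_prime(v):
    s = logistic(v)
    return s * (1.0 - s)

# XOR: inputs X and desired outputs D (not linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)   # input -> hidden weights
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # hidden -> output weights
eta = 0.5                                       # learning rate

for epoch in range(10000):
    for x, d in zip(X, D):
        # Forward pass: induced local fields v and node outputs y.
        v1 = W1.T @ x + b1;  y1 = logistic(v1)    # hidden layer
        v2 = W2.T @ y1 + b2; y2 = logistic(v2)    # output layer

        # Output node: delta_j = e_j(n) * phi'(v_j(n)).
        delta2 = (d - y2) * logistic_prime(v2)
        # Hidden node: delta_j = phi'(v_j(n)) * sum_k delta_k * w_kj(n).
        delta1 = logistic_prime(v1) * (W2 @ delta2)

        # Weight changes: Delta w_ji(n) = eta * delta_j * y_i(n); biases
        # are treated as weights from a constant input of 1.
        W2 += eta * np.outer(y1, delta2); b2 += eta * delta2
        W1 += eta * np.outer(x, delta1);  b1 += eta * delta1

# After training, the network output should approach [0, 1, 1, 0].
print(np.round(logistic(logistic(X @ W1 + b1) @ W2 + b2), 2).ravel())
```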