It is done by updating the weight and bias levels of a network when it is simulated in a specific data environment.[1] A learning rule may accept the existing conditions (weights and biases) of the network, and compares the network's expected result with its actual result to produce new and improved values for those weights and biases.[2] Depending on the complexity of the model being simulated, the learning rule of the network can be as simple as an XOR gate or a mean squared error, or as complex as the result of a system of differential equations.
The learning rule is one of the factors that determine how quickly and how accurately a neural network can be developed. Although these learning rules might appear to be based on similar ideas, they do have subtle differences: each is a generalisation or application of an earlier rule, and hence it makes sense to study them separately, based on their origins and intents.
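The common shape of these rules can be made concrete with a short sketch. This illustrates only the general pattern described above; the function name, the NumPy setup, and the error-times-input correction (which happens to be the delta-rule form discussed later) are assumptions of the sketch rather than any one rule:

```python
import numpy as np

def learning_rule_step(weights, bias, x, target, predict, lr=0.1):
    """One generic learning-rule step: take the existing weights and
    biases, compare the expected result with the actual result, and
    return new and improved values."""
    actual = predict(weights, bias, x)       # actual result of the network
    error = target - actual                  # expected result minus actual result
    new_weights = weights + lr * error * x   # adjust weights to reduce the error
    new_bias = bias + lr * error             # adjust the bias the same way
    return new_weights, new_bias
```

A concrete `predict`, such as `lambda w, b, x: np.dot(w, x) + b`, turns this sketch into the delta rule covered below.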
The Hebbian learning rule was developed by Donald Hebb in 1949 to describe biological neuron firing.
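In its simplest form, the rule strengthens a weight in proportion to the correlated activity of the neurons it connects. A minimal sketch, assuming a single linear neuron and an illustrative learning rate:

```python
import numpy as np

def hebbian_step(w, x, lr=0.01):
    """One Hebbian update: the weight change is proportional to the
    product of presynaptic activity x and postsynaptic activity y."""
    y = np.dot(w, x)       # postsynaptic response of a linear neuron
    return w + lr * y * x  # correlated activity strengthens the connection
```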
The perceptron learning algorithm converges to the correct classification if the training data is linearly separable.[5] It should also be noted that a single-layer perceptron with this learning rule is incapable of handling linearly non-separable inputs, and hence the XOR problem cannot be solved using this rule alone.[6]
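A minimal sketch of the perceptron update, assuming a single hard-threshold unit with labels in {0, 1} (the function name and learning rate are illustrative):

```python
import numpy as np

def perceptron_step(w, b, x, target, lr=0.1):
    """One perceptron update: parameters move only when the thresholded
    prediction disagrees with the 0/1 target label."""
    y = 1 if np.dot(w, x) + b > 0 else 0  # hard-threshold activation
    error = target - y                    # 0 if correct, +1 or -1 if wrong
    return w + lr * error * x, b + lr * error
```

On linearly separable data, repeating this step over the training set eventually stops changing the parameters, which is the convergence property stated above.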
Seppo Linnainmaa is said to have developed the backpropagation algorithm in 1970,[7] but the origins of the algorithm go back to the 1960s, with many contributors. Backpropagation is a generalisation of the least mean squares algorithm in the linear perceptron and of the delta learning rule.
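A minimal backpropagation sketch under illustrative assumptions (a two-layer network, sigmoid hidden units, a linear output, squared error, and biases omitted for brevity):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(W1, W2, x, target, lr=0.5):
    """One backpropagation step: run the network forward, then push the
    output error backward through the chain rule to update both layers."""
    h = sigmoid(W1 @ x)                           # forward: hidden activations
    y = W2 @ h                                    # forward: linear output
    delta_out = y - target                        # gradient of squared error at the output
    delta_hid = (W2.T @ delta_out) * h * (1 - h)  # error propagated through the sigmoid
    W2 = W2 - lr * np.outer(delta_out, h)         # LMS-style update, output layer
    W1 = W1 - lr * np.outer(delta_hid, x)         # LMS-style update, hidden layer
    return W1, W2
```

With a single linear layer and no hidden units, the same computation reduces to the delta rule described below.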
The delta rule was developed for use in the ADALINE network, which differs from the perceptron mainly in terms of training. It is considered to be a special case of the back-propagation algorithm, and it also closely resembles the Rescorla-Wagner model under which Pavlovian conditioning occurs.
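A minimal sketch of the Widrow-Hoff update (names illustrative); the training difference from the perceptron noted above is that the error is computed on the raw linear output rather than on a thresholded prediction:

```python
import numpy as np

def delta_step(w, b, x, target, lr=0.05):
    """One delta-rule (least mean squares) update: the error signal is
    continuous because no threshold is applied before comparing."""
    y = np.dot(w, x) + b  # ADALINE-style linear output
    error = target - y    # continuous error, unlike the perceptron's 0 or +/-1
    return w + lr * error * x, b + lr * error
```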
Competitive learning works by increasing the specialization of each node in the network.
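A minimal winner-take-all sketch of this idea, assuming each node is represented by a row of a prototype matrix and Euclidean distance is the match criterion (both are assumptions of the sketch, as competitive learning comes in several variants):

```python
import numpy as np

def competitive_step(W, x, lr=0.1):
    """One winner-take-all update: the node whose prototype (row of W) is
    closest to the input wins, and only that node's weights move toward
    the input, so each node specializes on one region of the input space."""
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching node
    W = W.copy()                                       # leave the caller's array intact
    W[winner] += lr * (x - W[winner])                  # only the winner learns
    return W
```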