For example, humans can perform an image-recognition task at a rate requiring no more than 10 ms of processing time per neuron through the successive layers (going from the retina to the temporal lobe).
Although these networks have achieved breakthroughs in many fields, they are biologically inaccurate and do not mimic the operating mechanism of neurons in the brains of living organisms.[15] Temporal coding suggests that a single spiking neuron can replace hundreds of hidden units in a sigmoidal neural network.
The idea is that neurons do not test for activation at every propagation cycle (as in a typical multilayer perceptron network), but only when their membrane potential reaches a certain threshold value.
In a spiking neural network, a neuron's current state is defined as its membrane potential (possibly modeled by a differential equation).[20]
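For illustration, a minimal sketch of a leaky integrate-and-fire (LIF) neuron in Python; the time constant, threshold, and reset values are arbitrary illustrative choices, and the function name is hypothetical:

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
               v_threshold=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    The membrane potential v follows the differential equation
        tau * dv/dt = -(v - v_rest) + I(t),
    integrated here with the forward Euler method. A spike is emitted
    whenever v crosses v_threshold, after which v is reset to v_reset.
    """
    v = v_rest
    spikes = []
    for i_t in input_current:
        v += dt / tau * (-(v - v_rest) + i_t)  # Euler step of the membrane equation
        if v >= v_threshold:                   # threshold crossing -> spike
            spikes.append(1)
            v = v_reset                        # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

# Example: constant suprathreshold input produces a regular spike train.
spike_train = lif_neuron(np.full(100, 1.5))
print(spike_train.sum(), "spikes in 100 ms")
```

The neuron stays silent between threshold crossings, which is exactly the event-driven behavior described above.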
SNNs are theoretically more powerful than so-called "second-generation networks", defined in [20] as "[ANNs] based on computational units that apply an activation function with a continuous set of possible output values to a weighted sum (or polynomial) of the inputs"; however, SNN training issues and hardware requirements limit their use.[20] The spike-based activation of SNNs is not differentiable, making it hard to develop gradient-descent-based training methods that perform error backpropagation.
The expressions for both the forward- and backward-learning passes contain the derivative of the neural activation function, which is non-differentiable because the neuron's output is 1 when it spikes and 0 otherwise.
This all-or-nothing behavior of the binary spiking nonlinearity stops gradients from “flowing” and makes leaky integrate-and-fire (LIF) neurons unsuitable for gradient-based optimization.[24]
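Written out, with $H$ the Heaviside step function and $V_{\mathrm{th}}$ the firing threshold, the spike output and its derivative are (a standard formulation, not tied to any particular cited model):

$$S(t) = H\big(V(t) - V_{\mathrm{th}}\big), \qquad \frac{\partial S}{\partial V} = \delta\big(V - V_{\mathrm{th}}\big),$$

which is zero everywhere except at the threshold itself, where it is unbounded, so backpropagated error signals are either blocked or ill-defined.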
Originating from biological insights, spike-frequency adaptation (SFA) offers significant computational benefits by reducing power usage through efficient coding,[25] especially in the case of repetitive or intense stimuli. This adaptation improves signal clarity against background noise and introduces an elementary short-term memory at the level of the single neuron, which in turn refines the accuracy and efficiency of information processing.
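A common way to model SFA is to add an adaptation current that builds up with each spike and decays between spikes; the sketch below makes that concrete (the additive-adaptation form, parameter values, and function name are illustrative assumptions, not the specific mechanism of the cited work):

```python
import numpy as np

def adaptive_lif(input_current, dt=1e-3, tau_v=20e-3, tau_w=200e-3,
                 v_threshold=1.0, v_reset=0.0, w_jump=0.3):
    """LIF neuron with spike-frequency adaptation.

    An adaptation variable w is incremented by w_jump at every spike and
    decays with time constant tau_w; it is subtracted from the input, so
    sustained stimulation yields progressively sparser spikes.
    """
    v, w = 0.0, 0.0
    spikes = []
    for i_t in input_current:
        v += dt / tau_v * (-v + i_t - w)  # membrane update with adaptation current
        w += dt / tau_w * (-w)            # adaptation decays between spikes
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset
            w += w_jump                   # each spike strengthens adaptation
        else:
            spikes.append(0)
    return np.array(spikes)

# Under a constant stimulus the inter-spike intervals lengthen as w accumulates.
print(np.flatnonzero(adaptive_lif(np.full(500, 1.5))))
```

The growing adaptation variable is the elementary short-term memory mentioned above: it carries a trace of recent activity and suppresses responses to repetitive input.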
This efficiency not only streamlines the computational workflow but also conserves space and energy, offering a pragmatic step forward in the practical application of SNNs to complex computing tasks. SNNs can in principle be applied to the same applications as traditional ANNs.[31]
In addition, SNNs can model the central nervous system of biological organisms, such as an insect seeking food without prior knowledge of the environment.
Several candidate device technologies have been proposed.[36] Future neuromorphic architectures[39] will comprise billions of such nanosynapses, which will require a clear understanding of the physical mechanisms responsible for plasticity.
Experimental systems based on ferroelectric tunnel junctions have been used to show that spike-timing-dependent plasticity (STDP) can be harnessed from heterogeneous polarization switching.
Through combined scanning probe imaging, electrical transport and atomic-scale molecular dynamics, conductance variations can be modelled by nucleation-dominated reversal of domains.
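For reference, pair-based STDP is often summarized by exponential timing windows; the sketch below uses illustrative amplitudes and time constants, not values measured in these ferroelectric devices:

```python
import math

def stdp_weight_change(delta_t, a_plus=0.01, a_minus=0.012,
                       tau_plus=20e-3, tau_minus=20e-3):
    """Pair-based STDP window.

    delta_t = t_post - t_pre. Pre-before-post (delta_t > 0) potentiates
    the synapse; post-before-pre (delta_t < 0) depresses it, with
    exponentially decaying magnitude in both directions.
    """
    if delta_t >= 0:
        return a_plus * math.exp(-delta_t / tau_plus)   # potentiation
    return -a_minus * math.exp(delta_t / tau_minus)     # depression

# A presynaptic spike 5 ms before the postsynaptic one strengthens the
# synapse; 5 ms after, it weakens it.
print(stdp_weight_change(5e-3), stdp_weight_change(-5e-3))
```

In the devices described above, the analogous weight variable is the junction conductance, modified by partial polarization switching rather than by an explicit software rule.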