Tempotron

The Tempotron is a supervised synaptic learning algorithm applied when information is encoded in spatiotemporal spiking patterns.

It is an advancement of the perceptron, which does not incorporate a spike-timing framework.

It is the general consensus that spike-timing-dependent plasticity (STDP) plays a crucial role in the development of synaptic efficacy for many different kinds of neurons [1]. Therefore, a large variety of STDP rules has been developed, one of which is the tempotron.

Assuming a leaky integrate-and-fire model, the potential {\displaystyle V(t)} of the synapse can be described by

{\displaystyle V(t)=\sum _{i}\omega _{i}\sum _{t_{i}}K(t-t_{i})+V_{rest},}

where {\displaystyle t_{i}} denotes the spike times of the i-th afferent synapse with synaptic efficacy {\displaystyle \omega _{i}} and {\displaystyle V_{rest}} the resting potential. {\displaystyle K(t-t_{i})} describes the postsynaptic potential (PSP) elicited by each incoming spike:

{\displaystyle K(t-t_{i})=V_{0}\left(\exp \left(-(t-t_{i})/\tau \right)-\exp \left(-(t-t_{i})/\tau _{s}\right)\right)}

for {\displaystyle t\geq t_{i}} (and zero otherwise), with parameters {\displaystyle \tau } and {\displaystyle \tau _{s}} denoting the decay time constants of the membrane integration and of the synaptic currents, respectively. The factor {\displaystyle V_{0}} is used for the normalization of the PSP kernels.
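The normalization can be made concrete: since the kernel is a difference of two exponentials, its peak time follows from setting the derivative to zero, and {\displaystyle V_{0}} is the reciprocal of the kernel's unnormalized value at that peak, so the maximum of {\displaystyle K} equals 1. A minimal sketch in Python; the concrete time constants (and their 4:1 ratio) are illustrative assumptions, not prescribed by the text:

```python
import math

# Decay time constants (illustrative values, in ms): membrane integration
# and synaptic current. The 4:1 ratio is an assumption for this sketch.
TAU = 15.0
TAU_S = TAU / 4.0

# Peak time of the kernel, obtained by setting dK/dt = 0.
T_PEAK = (TAU * TAU_S / (TAU - TAU_S)) * math.log(TAU / TAU_S)

# Normalization factor V_0, chosen so the kernel's maximum equals 1;
# each efficacy w_i is then the peak amplitude of its PSP.
V_0 = 1.0 / (math.exp(-T_PEAK / TAU) - math.exp(-T_PEAK / TAU_S))

def psp_kernel(dt: float) -> float:
    """Normalized PSP elicited dt ms after a presynaptic spike."""
    if dt < 0.0:
        return 0.0  # causal: no effect before the spike arrives
    return V_0 * (math.exp(-dt / TAU) - math.exp(-dt / TAU_S))
```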

When the potential crosses the firing threshold {\displaystyle V_{thr}}, the potential is reset to its resting value by shunting all incoming spikes.
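Putting the pieces together, the subthreshold dynamics and the shunting reset can be sketched as below. The function name `voltage_trace`, the constants, and the grid-based evaluation are illustrative assumptions; the kernel restates the PSP formula above, and shunting is modeled by ignoring every input spike that arrives after the output spike, so the potential simply decays back to rest:

```python
import math

# Illustrative constants (assumed): decay time constants in ms,
# resting potential, and firing threshold.
TAU, TAU_S = 15.0, 15.0 / 4.0
T_PEAK = (TAU * TAU_S / (TAU - TAU_S)) * math.log(TAU / TAU_S)
V_0 = 1.0 / (math.exp(-T_PEAK / TAU) - math.exp(-T_PEAK / TAU_S))
V_REST, V_THR = 0.0, 1.0

def kernel(dt: float) -> float:
    """Normalized PSP kernel K(t - t_i)."""
    return V_0 * (math.exp(-dt / TAU) - math.exp(-dt / TAU_S)) if dt >= 0 else 0.0

def voltage_trace(spikes, weights, t_grid):
    """Evaluate V(t) on t_grid; after the first threshold crossing, all
    later incoming spikes are shunted, so V decays back toward rest.
    spikes[i] is the list of spike times of afferent i with efficacy weights[i]."""
    t_spike = None  # time of the (single) output spike, if any
    trace = []
    for t in t_grid:
        # Only input spikes arriving before the output spike contribute.
        v = V_REST + sum(
            w * kernel(t - ti)
            for w, times in zip(weights, spikes)
            for ti in times
            if t_spike is None or ti <= t_spike
        )
        if t_spike is None and v >= V_THR:
            t_spike = t
        trace.append(v)
    return trace, t_spike
```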

Next, a binary classification of the input patterns is needed: a {\displaystyle P^{+}} pattern should elicit at least one postsynaptic action potential, whereas a {\displaystyle P^{-}} pattern should accordingly elicit no response.

In the beginning, the neuron does not know which pattern belongs to which classification and has to learn it iteratively, similar to the perceptron.

The tempotron learns its task by adapting the synaptic efficacies {\displaystyle \omega _{i}}. If a {\displaystyle P^{+}} pattern is presented and the postsynaptic neuron did not spike, all synaptic efficacies are increased by {\displaystyle \Delta \omega _{i}}, whereas a {\displaystyle P^{-}} pattern followed by a postsynaptic response leads to a decrease of the synaptic efficacies by {\displaystyle \Delta \omega _{i}}, with

{\displaystyle \Delta \omega _{i}=\lambda \sum _{t_{i}<t_{max}}K(t_{max}-t_{i}).}

Here, {\displaystyle \lambda } denotes the learning rate and {\displaystyle t_{max}} the time at which the postsynaptic potential {\displaystyle V(t)} reaches its maximal value.
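The error-driven update can be sketched as below. A hypothetical `train_step` applies the rule above: it locates {\displaystyle t_{max}} on a time grid, compares the neuron's response with the pattern's label, and shifts the efficacies only on an error. The constants and the learning-rate value are illustrative assumptions, and for simplicity this sketch evaluates the unshunted potential:

```python
import math

# Illustrative constants (assumed): kernel parameters, threshold, learning rate.
TAU, TAU_S = 15.0, 15.0 / 4.0
T_PEAK = (TAU * TAU_S / (TAU - TAU_S)) * math.log(TAU / TAU_S)
V_0 = 1.0 / (math.exp(-T_PEAK / TAU) - math.exp(-T_PEAK / TAU_S))
V_REST, V_THR = 0.0, 1.0
LAMBDA = 0.01  # learning rate (assumed value)

def kernel(dt: float) -> float:
    """Normalized PSP kernel K(t - t_i)."""
    return V_0 * (math.exp(-dt / TAU) - math.exp(-dt / TAU_S)) if dt >= 0 else 0.0

def potential(t, spikes, weights):
    """Unshunted potential V(t) = sum_i w_i sum_{t_i} K(t - t_i) + V_rest."""
    return V_REST + sum(w * kernel(t - ti)
                        for w, ts in zip(weights, spikes) for ti in ts)

def train_step(spikes, weights, is_plus, t_grid):
    """One tempotron update. On an error, each w_i is shifted by
    +/- LAMBDA * sum_{t_i < t_max} K(t_max - t_i)."""
    t_max = max(t_grid, key=lambda t: potential(t, spikes, weights))
    fired = potential(t_max, spikes, weights) >= V_THR
    if fired == is_plus:
        return weights  # correct response: no change
    sign = 1.0 if is_plus else -1.0  # increase on a miss, decrease on a false alarm
    return [
        w + sign * LAMBDA * sum(kernel(t_max - ti) for ti in ts if ti < t_max)
        for w, ts in zip(weights, spikes)
    ]
```

Because every afferent that spiked before {\displaystyle t_{max}} contributed to the near-miss (or false alarm), the update credits each synapse in proportion to its contribution to the potential at that moment.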

It should be mentioned that the Tempotron is a special case of an earlier model, described in an older paper, which dealt with continuous inputs.