[1] While Monte Carlo methods only adjust their estimates once the final outcome is known, TD methods adjust predictions to match later, more accurate predictions about the future before the final outcome is known.
As an illustration, suppose you wish to predict the weather for Saturday, and you have a model that predicts Saturday's weather given the weather of each day of the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, say, Friday, you should already have a good idea of what the weather will be on Saturday, and can thus adjust Saturday's model before Saturday arrives.
The simplest TD method, TD(0), is a special case of more general stochastic approximation methods.
It estimates the state value function of a finite-state Markov decision process (MDP) under a policy $\pi$. Let $V^\pi$ denote the state value function of the MDP with states $(S_t)_{t \in \mathbb{N}}$, rewards $(R_t)_{t \in \mathbb{N}}$ and discount rate $\gamma$ under the policy $\pi$:

$$V^\pi(s) = \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t R_{t+1} \,\Big|\, S_0 = s\right].$$

The value function satisfies the Bellman equation

$$V^\pi(s) = \mathbb{E}_\pi\left[R_1 + \gamma V^\pi(S_1) \,\Big|\, S_0 = s\right],$$

so $R_1 + \gamma V^\pi(S_1)$ is an unbiased estimate of $V^\pi(s)$. This observation motivates the following algorithm for estimating $V^\pi$.

The algorithm starts by initializing a table $V(s)$ arbitrarily, with one value for each state of the MDP, and choosing a positive learning rate $\alpha$. The policy $\pi$ is then evaluated repeatedly, and after each transition the value function for the current state is updated using the rule:[10]

$$V(S_t) \leftarrow V(S_t) + \alpha\left(R_{t+1} + \gamma V(S_{t+1}) - V(S_t)\right),$$

where $S_t$ and $S_{t+1}$ are the current and next states, $R_{t+1}$ is the reward observed on the transition, and $\alpha$ and $\gamma$ are the learning rate and discount rate. The quantity $R_{t+1} + \gamma V(S_{t+1})$ is known as the TD target, and the difference $R_{t+1} + \gamma V(S_{t+1}) - V(S_t)$ is the TD error.
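To make the update rule concrete, here is a minimal sketch of tabular TD(0) prediction in Python. The environment interface (`env.reset()` returning a state, `env.step(action)` returning a next state, reward, and done flag) and the `policy(state)` callable are assumptions introduced for illustration, not part of the original description.

```python
# A minimal sketch of tabular TD(0) prediction, assuming a hypothetical
# environment interface: env.reset() -> state,
# env.step(action) -> (next_state, reward, done), and policy(state) -> action.

def td0_prediction(env, policy, num_episodes=1000, alpha=0.1, gamma=0.99):
    """Estimate V^pi for a finite-state MDP using the TD(0) update rule."""
    V = {}  # value table; unseen states default to 0

    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)

            # TD target: R_{t+1} + gamma * V(S_{t+1}); terminal states have no future value.
            target = reward + (0.0 if done else gamma * V.get(next_state, 0.0))

            # TD(0) update: V(S_t) <- V(S_t) + alpha * (target - V(S_t))
            V[state] = V.get(state, 0.0) + alpha * (target - V.get(state, 0.0))

            state = next_state
    return V
```

Because each update uses the current estimate of the next state's value rather than the final return, the table can be improved after every single transition.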
TD-Lambda is a learning algorithm invented by Richard S. Sutton based on earlier work on temporal difference learning by Arthur Samuel.
[11] This algorithm was famously applied by Gerald Tesauro to create TD-Gammon, a program that learned to play the game of backgammon at the level of expert human players.
The lambda ($\lambda$) parameter refers to the trace decay parameter, with $0 \le \lambda \le 1$. Higher settings lead to longer-lasting traces; that is, a larger proportion of credit from a reward can be given to more distant states and actions when $\lambda$ is higher, with $\lambda = 1$ producing learning parallel to Monte Carlo RL algorithms.
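The following sketch shows one common way to implement tabular TD($\lambda$) with accumulating eligibility traces, reusing the same hypothetical environment and policy interface as the TD(0) sketch above; it is an illustrative approximation under those assumptions rather than a definitive implementation.

```python
# A minimal sketch of tabular TD(lambda) with accumulating eligibility traces,
# under the same hypothetical env/policy interface as the TD(0) sketch above.
# lam=0 reduces to TD(0); lam=1 behaves like an every-visit Monte Carlo method.

def td_lambda_prediction(env, policy, num_episodes=1000,
                         alpha=0.1, gamma=0.99, lam=0.9):
    V = {}  # value table; unseen states default to 0

    for _ in range(num_episodes):
        traces = {}  # eligibility trace e(s) for each state visited this episode
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)

            target = reward + (0.0 if done else gamma * V.get(next_state, 0.0))
            td_error = target - V.get(state, 0.0)

            # Bump the trace of the current state (accumulating traces).
            traces[state] = traces.get(state, 0.0) + 1.0

            # Distribute the TD error to all recently visited states,
            # decaying each trace by gamma * lambda.
            for s in traces:
                V[s] = V.get(s, 0.0) + alpha * td_error * traces[s]
                traces[s] *= gamma * lam

            state = next_state
    return V
```

With a higher `lam`, the decaying traces let credit from a single TD error reach states visited many steps earlier, which is the "longer-lasting traces" behavior described above.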
[13] The TD algorithm has also received attention in the field of neuroscience.
Researchers discovered that the firing rate of dopamine neurons in the ventral tegmental area (VTA) and substantia nigra pars compacta (SNc) appears to mimic the error function in the algorithm.
[3][4][5][6][7] The error function reports the difference between the estimated reward at any given state or time step and the reward actually received.
When this is paired with a stimulus that accurately reflects a future reward, the error can be used to associate the stimulus with the future reward.
Dopamine cells appear to behave in a similar manner.
In one experiment, measurements of dopamine cells were made while a monkey was trained to associate a stimulus with a juice reward.
[14] Initially, the dopamine cells increased their firing rates when the monkey received juice, indicating a difference between expected and actual rewards.
Over time, this increase in firing propagated back to the earliest reliable stimulus for the reward.
Once the monkey was fully trained, there was no increase in firing rate upon presentation of the predicted reward.
Subsequently, the firing rate for the dopamine cells decreased below normal activation when the expected reward was not produced.
This mimics closely how the error function in TD is used for reinforcement learning.
The relationship between the model and potential neurological function has prompted research attempting to use TD to explain many aspects of behavior.
[15][16] It has also been used to study conditions such as schizophrenia and the consequences of pharmacological manipulations of dopamine on learning.