Long short-term memory

Long short-term memory (LSTM)[1] is a type of recurrent neural network (RNN) aimed at mitigating the vanishing gradient problem[2] commonly encountered by traditional RNNs.

Its relative insensitivity to gap length is its advantage over other RNNs, hidden Markov models, and other sequence learning methods.

LSTM has wide applications in classification,[5][6] data processing, time series analysis tasks,[7] speech recognition,[8][9] machine translation,[10][11] speech activity detection,[12] robot control,[13][14] video games,[15][16] and healthcare.

[17] In theory, classic RNNs can keep track of arbitrarily long-term dependencies in the input sequences.

The problem with classic RNNs is computational (or practical) in nature: when training a classic RNN using back-propagation, the long-term gradients which are back-propagated can "vanish", meaning they can tend to zero due to very small numbers creeping into the computations, causing the model to effectively stop learning.
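A small numerical sketch (not from the cited sources) can make this concrete: in a vanilla RNN, the gradient reaching an early time step is a product of per-step Jacobians, and when those Jacobians have norms below one the product shrinks exponentially. The dimensions and weight scale below are arbitrary choices for illustration.

```python
import numpy as np

# Illustrative only: repeated Jacobian products in a vanilla RNN h_t = tanh(W h_{t-1}).
rng = np.random.default_rng(0)
n = 16
W = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)   # small recurrent weights (arbitrary scale)
h = rng.standard_normal(n)
grad = np.eye(n)                                      # dh_0 / dh_0

for t in range(1, 101):
    h = np.tanh(W @ h)
    J = (1.0 - h**2)[:, None] * W    # Jacobian dh_t / dh_{t-1} of tanh(W h_{t-1})
    grad = J @ grad                  # chain rule: now grad = dh_t / dh_0
    if t % 25 == 0:
        print(t, np.linalg.norm(grad))   # norm shrinks toward zero as t grows
```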

[18] The intuition behind the LSTM architecture is to create an additional module in a neural network that learns when to remember and when to forget pertinent information.

For instance, in the context of natural language processing, the network can learn grammatical dependencies.

[19] An LSTM might process the sentence "Dave, as a result of his controversial claims, is now a pariah" by remembering the (statistically likely) grammatical gender and number of the subject Dave, noting that this information is pertinent for the pronoun his, and noting that this information is no longer important after the verb is.

The compact forms of the equations for the forward pass of an LSTM cell with a forget gate are:[1][4]

$$
\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \sigma_c(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \sigma_h(c_t)
\end{aligned}
$$

where the initial values are $c_0 = 0$ and $h_0 = 0$, the operator $\odot$ denotes the Hadamard (element-wise) product, and the subscript $t$ indexes the time step. Here $x_t$ is the input vector, $h_t$ the hidden state (output) vector, $c_t$ the cell state vector, $W$, $U$ and $b$ the weight matrices and bias vectors, $\sigma_g$ the sigmoid function, and $\sigma_c$, $\sigma_h$ the hyperbolic tangent.
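As a reading aid, the following NumPy sketch implements one forward step of these equations; the dimensions, random weights, and dictionary layout are illustrative assumptions, not part of the cited formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One forward step of an LSTM cell with a forget gate.

    W, U, b are dicts keyed by 'f', 'i', 'o', 'c' holding the input weights,
    recurrent weights and biases for each gate / candidate.
    """
    f = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])        # forget gate
    i = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])        # input gate
    o = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])        # output gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # candidate cell state
    c = f * c_prev + i * c_tilde    # new cell state
    h = o * np.tanh(c)              # new hidden state (output)
    return h, c

# Toy usage with random parameters; initial values are h_0 = 0 and c_0 = 0.
d_in, d_h = 4, 8
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((d_h, d_in)) for k in 'fioc'}
U = {k: rng.standard_normal((d_h, d_h)) for k in 'fioc'}
b = {k: np.zeros(d_h) for k in 'fioc'}
h, c = np.zeros(d_h), np.zeros(d_h)
for x_t in rng.standard_normal((5, d_in)):   # a length-5 input sequence
    h, c = lstm_step(x_t, h, c, W, U, b)
```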

[21][22] Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state.

Here $i_t$, $o_t$ and $f_t$ represent the activations of, respectively, the input, output and forget gates at time step $t$.

In the accompanying figure, the single left-to-right arrow exiting the memory cell is not a peephole connection and denotes $c_t$.
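Below is a hedged sketch of one common peephole formulation, in which the gates read the previous cell state $c_{t-1}$ (the CEC activation) in place of the previous output; other peephole variants exist, and the weight-dictionary layout simply mirrors the earlier sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_lstm_step(x_t, c_prev, W, U, b):
    """One step of a peephole LSTM variant where the gates peek at c_{t-1}."""
    f = sigmoid(W['f'] @ x_t + U['f'] @ c_prev + b['f'])   # forget gate reads the CEC
    i = sigmoid(W['i'] @ x_t + U['i'] @ c_prev + b['i'])   # input gate reads the CEC
    o = sigmoid(W['o'] @ x_t + U['o'] @ c_prev + b['o'])   # output gate reads the CEC
    c = f * c_prev + i * np.tanh(W['c'] @ x_t + b['c'])    # cell state (the CEC)
    h = o * np.tanh(c)                                     # unit output
    return h, c
```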

An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm such as gradient descent combined with backpropagation through time to compute the gradients needed during optimization; each weight of the LSTM network is then changed in proportion to the derivative of the error (at the output layer of the network) with respect to that weight.
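As an illustration only (PyTorch and the toy next-step prediction task below are assumptions of this sketch, not something prescribed by the article), the loop trains a one-layer LSTM with gradient descent; autograd's backward pass over the unrolled sequence performs backpropagation through time.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Hypothetical task: predict the next sample of a sine wave.
seq = torch.sin(torch.linspace(0, 20, 200)).unsqueeze(-1)   # (T, 1)
x = seq[:-1].unsqueeze(1)                                   # (T-1, batch=1, 1)
y = seq[1:].unsqueeze(1)

lstm = nn.LSTM(input_size=1, hidden_size=32)                # one LSTM layer
head = nn.Linear(32, 1)                                     # readout to a scalar
params = list(lstm.parameters()) + list(head.parameters())
opt = torch.optim.SGD(params, lr=0.05)
loss_fn = nn.MSELoss()

for step in range(200):
    opt.zero_grad()
    out, _ = lstm(x)              # forward pass over the whole sequence
    loss = loss_fn(head(out), y)  # error at the output layer
    loss.backward()               # backpropagation through time via autograd
    opt.step()                    # each weight moves in proportion to its gradient
```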

Many applications use stacks of LSTM RNNs[25] and train them by connectionist temporal classification (CTC)[5] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences.
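A minimal sketch of that setup, assuming PyTorch and made-up dimensions (feature size, vocabulary, and sequence lengths are illustrative): a stacked bidirectional LSTM produces per-frame class scores that are trained with the CTC loss against unaligned label sequences.

```python
import torch
import torch.nn as nn

T, N, C, F = 50, 4, 20, 13           # frames, batch, classes (incl. blank), features
lstm = nn.LSTM(input_size=F, hidden_size=64, num_layers=2, bidirectional=True)
proj = nn.Linear(2 * 64, C)          # map LSTM features to per-frame class scores
ctc = nn.CTCLoss(blank=0)            # class 0 is reserved for the CTC blank

x = torch.randn(T, N, F)             # a batch of input sequences
out, _ = lstm(x)                     # (T, N, 2 * 64)
log_probs = proj(out).log_softmax(dim=-1)    # (T, N, C), as CTCLoss expects

targets = torch.randint(1, C, (N, 10))               # label sequences (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # maximizing label-sequence probability = minimizing CTC loss
```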

[10][54][55] Apple announced at its Worldwide Developers Conference that it would start using the LSTM for QuickType[56][57][58] in the iPhone and for Siri.

[59][60] Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for the text-to-speech technology.

[61] 2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks.

[11] Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words.

[62] 2018: OpenAI used LSTM trained by policy gradients to beat humans in the complex video game of Dota 2,[15] and to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.

[14][63] 2019: DeepMind used LSTM trained by policy gradients to excel at the complex video game of StarCraft II.

[1] Sepp Hochreiter's 1991 German diploma thesis analyzed the vanishing gradient problem and developed principles of the method.

[1] By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem: the CEC's self-recurrent connection has a fixed weight of 1.0, so the error signal carried by the cell state is neither repeatedly scaled down nor blown up as it flows back through time.

The initial version of the LSTM block included cells, input gates, and output gates.

[20] (Graves, Fernandez, Gomez, and Schmidhuber, 2006)[5] introduced a new error function for LSTM: Connectionist Temporal Classification (CTC), for simultaneous alignment and recognition of sequences.

[72] A modern upgrade of LSTM, called xLSTM, was published by a team led by Sepp Hochreiter (Beck et al., 2024).

[76] 2005: Daan Wierstra, Faustino Gomez, and Schmidhuber trained LSTM by neuroevolution without a teacher.

[13] 2007: Wierstra, Foerster, Peters, and Schmidhuber trained LSTM by policy gradients for reinforcement learning without a teacher.

[77] Hochreiter, Heusel, and Obermayer applied LSTM to protein homology detection in the field of biology.

[78][63] 2009: An LSTM trained by CTC won the ICDAR connected handwriting recognition competition.

[63] 2013: Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton used LSTM networks as a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset.

Figure: The Long Short-Term Memory (LSTM) cell can process data sequentially and keep its hidden state through time.
Figure: A peephole LSTM unit with input ($i_t$), output ($o_t$), and forget ($f_t$) gates.