Multiple trace theory

Multiple trace theory posits that each time some information is presented to a person, it is neurally encoded in a unique memory trace composed of a combination of its attributes.[3]

In memory research, a mathematical formulation of this theory can successfully explain empirical phenomena observed in recognition and recall tasks.

There may be a kind of semantic categorization at play, whereby an individual trace is incorporated into overarching concepts of an object.[5]

These attributes may range from aspects of an individual's mood to other semantic associations that the presentation of the word evokes.

The mathematical formulation of traces allows for a model of memory as an ever-growing matrix that continuously receives and incorporates information in the form of vectors of attributes.

By assigning numerical values to all possible attributes, it is convenient to construct a column vector representation of each encoded item.

This vector representation can also be fed into computational models of the brain like neural networks, which take as inputs vectorial "memories" and simulate their biological encoding through neurons.

As explained in the subsequent section, the hallmark of multiple trace theory is its ability to compare a probe item to the pre-existing matrix of encoded memories.

In the mathematical framework of this theory, recognition of an individual probe item p can be modeled by summed similarity with a criterion. The test item is translated into an attribute vector, just as the encoded memories were, and compared to every trace ever encountered.

For example, given m1 as a probe item, we will get a near-zero distance (not exactly zero, because of context) for i = 1, which will add nearly the maximal possible boost to the summed similarity SS.

The criterion can be varied based on the nature of the task and the desire to prevent false alarms.
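A minimal sketch of this summed-similarity comparison, where the particular similarity function (exponential decay with distance), the stored traces, and the criterion value are all illustrative assumptions rather than the theory's canonical choices:

```python
# Summed-similarity recognition sketch: a probe vector is compared to
# every stored trace, the similarities are summed, and the item is
# judged "old" if the sum SS exceeds a criterion.
import math

def similarity(p, m):
    """Similarity decays exponentially with Euclidean distance."""
    dist = math.sqrt(sum((pi - mi) ** 2 for pi, mi in zip(p, m)))
    return math.exp(-dist)

def summed_similarity(probe, traces):
    return sum(similarity(probe, m) for m in traces)

def recognize(probe, traces, criterion=1.0):
    return summed_similarity(probe, traces) > criterion

traces = [
    [0.2, 0.9, 1.0],   # m1 (last entry is a context attribute)
    [0.8, 0.1, 2.0],   # m2
]

probe = [0.2, 0.9, 1.1]   # nearly m1, shifted slightly by context
print(recognize(probe, traces))            # True: m1 contributes near-maximal similarity
print(recognize([5.0, 5.0, 5.0], traces))  # False: far from every trace
```

Raising the criterion makes the model more conservative, trading misses for fewer false alarms, which mirrors how the criterion is varied by task in the text above.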

In the "ab" framework described above, we can take all attributes present in a cue and consider these the a item of an encoded association as we try to recall the b portion of the mab memory.

In this example, attributes like "first", "President", and "United States" will be combined to form the a vector, which will have already been formulated into the mab memory whose b values encode "George Washington".

The mab memory gives the best chance of recall, since its a elements will have high similarity to the cue's a elements.
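The cued-recall step above can be sketched as follows; the specific vectors, the split point between a and b attributes, and the dot-product match score are all invented for the example:

```python
# Cued-recall sketch in the "ab" framework: each stored memory m_ab
# concatenates cue attributes (a) with target attributes (b). Recall
# matches the cue against the a-portions and returns the b-portion of
# the best-matching trace.

A_LEN = 3  # first A_LEN entries of each trace are the a (cue) attributes

def match(cue_a, trace):
    """Dot-product similarity between the cue and a trace's a-portion."""
    return sum(c * t for c, t in zip(cue_a, trace[:A_LEN]))

def recall_b(cue_a, traces):
    """Return the b-portion of the trace whose a-portion best fits the cue."""
    best = max(traces, key=lambda m: match(cue_a, m))
    return best[A_LEN:]

# a encodes ("first", "President", "United States");
# b (hypothetically) encodes "George Washington"
m_washington = [1.0, 1.0, 1.0,   0.9, 0.4]
m_adams      = [0.0, 1.0, 1.0,   0.1, 0.8]  # matches two of the three cue attributes

traces = [m_washington, m_adams]
cue = [1.0, 1.0, 1.0]  # attributes assembled from the cue phrase

print(recall_b(cue, traces))   # b values stored with Washington: [0.9, 0.4]
```

Picking the single best-matching trace is only one possible recall rule; the probabilistic alternative described next replaces the `max` with weighted sampling.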

We can also use a probabilistic choice rule, under which the probability of recalling an item is proportional to its similarity to the cue.

This is akin to throwing a dart at a dartboard on which larger areas correspond to traces with greater similarity to the cue item.
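The dartboard analogy can be made concrete with a few lines of code; the similarity values here are made-up numbers for illustration:

```python
# Probabilistic choice rule sketch: trace i is recalled with probability
# equal to its similarity divided by the total similarity, like a dart
# landing on a board whose regions are sized by similarity.
import random

def choice_probabilities(similarities):
    total = sum(similarities)
    return [s / total for s in similarities]

def sample_trace(similarities, rng=random.random):
    """Throw the dart: pick an index with probability proportional to similarity."""
    r = rng() * sum(similarities)
    running = 0.0
    for i, s in enumerate(similarities):
        running += s
        if r < running:
            return i
    return len(similarities) - 1

sims = [3.0, 1.0, 1.0]             # trace 0 is most similar to the cue
print(choice_probabilities(sims))  # [0.6, 0.2, 0.2]
print(sample_trace(sims, rng=lambda: 0.5))  # dart at 2.5 lands in trace 0's region
```

Unlike the best-match rule, this rule sometimes recalls a weaker trace, which gives the model a natural way to produce occasional recall errors.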

Phenomena in memory associated with repetition, word frequency, recency, forgetting, and contiguity, among others, can be easily explained in the realm of multiple trace theory.

Rarer words, for example, are typically encountered less often throughout life, and so their presence in the memory matrix is limited.[7][8]

Finally, empirical data have shown a contiguity effect, whereby items that are presented close together in time, even though they may not be encoded as a single memory as in the "ab" paradigm described above, are more likely to be remembered together.

One of the biggest shortcomings of multiple trace theory is the requirement of some probe item against which to compare the memory matrix when determining whether an item was successfully encoded.

It is hard to imagine that the brain has unlimited capacity to keep track of such a large matrix of memories and continue expanding it with every item with which it has ever been presented.