Victim cache

A victim cache is a small, typically fully associative cache that stores blocks recently evicted from another cache level. It was originally proposed by Norman Jouppi in 1990.

As hardware architecture and technology advanced, processor performance and frequency increased at a much faster rate than memory cycle times, resulting in a significant performance gap.

Direct-mapped caches offer fast access times but suffer from the cache-conflict problem, which arises from their limited associativity: multiple memory blocks that map to the same cache line repeatedly evict one another, even when the rest of the cache is underused.

Consider a direct-mapped L1 cache with two blocks, A and B, that map to the same set. It is linked to a 2-entry fully associative victim cache currently holding blocks C and D.

The access trace to be followed is A, B, A, B, ... On a victim cache (VC) hit, the requested block and the block it displaces in L1 are swapped, so A and B exchange places between the L1 cache and the victim cache instead of causing repeated misses to the next memory level.

Hence, the victim cache gives the direct-mapped L1 cache an illusion of associativity, in turn reducing its conflict misses.
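The swap-on-hit behavior described above can be sketched with a toy simulator. This is an illustrative model only (class name, sizes, and FIFO victim replacement are assumptions, not a description of any real hardware):

```python
from collections import deque

class VictimCachedL1:
    """Toy model: a direct-mapped L1 cache backed by a small fully
    associative victim cache with FIFO replacement (illustrative only)."""

    def __init__(self, num_sets, victim_entries):
        self.num_sets = num_sets
        self.l1 = [None] * num_sets                  # one block per set
        self.victim = deque(maxlen=victim_entries)   # recently evicted blocks
        self.stats = {"l1_hit": 0, "vc_hit": 0, "miss": 0}

    def access(self, addr):
        idx = addr % self.num_sets
        if self.l1[idx] == addr:
            self.stats["l1_hit"] += 1
        elif addr in self.victim:
            # Victim-cache hit: swap the requested block with the
            # block it displaces in L1.
            self.stats["vc_hit"] += 1
            self.victim.remove(addr)
            if self.l1[idx] is not None:
                self.victim.append(self.l1[idx])
            self.l1[idx] = addr
        else:
            # Miss in both: fetch from the next level; the evicted
            # L1 block is saved in the victim cache.
            self.stats["miss"] += 1
            if self.l1[idx] is not None:
                self.victim.append(self.l1[idx])
            self.l1[idx] = addr

cache = VictimCachedL1(num_sets=4, victim_entries=2)
A, B = 0, 4          # both map to set 0: a conflict in a direct-mapped cache
for addr in [A, B, A, B, A, B]:
    cache.access(addr)
print(cache.stats)   # → {'l1_hit': 0, 'vc_hit': 4, 'miss': 2}
```

Without the victim cache, every access in this trace after the first would be a conflict miss; with it, only the first two accesses miss and the rest are satisfied by swapping A and B between L1 and the victim cache.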

Experimental results were obtained by taking 32 KB direct-mapped, 2-way set-associative, and fully associative caches, augmenting each with a 256-block (8 KB) victim cache, and running 8 randomly selected SPEC95 benchmarks on them.

For a 64 KB cache, the miss-rate reductions are found to be significantly lower, showing that victim caching does not scale indefinitely.[4]

Implementation Example