Speedup

In computer architecture, speedup is a number that measures the relative performance of two systems processing the same problem.
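As a minimal sketch, speedup can be computed as the ratio of the two systems' execution times on the same problem; the timings below are hypothetical, for illustration only.

```python
def speedup(time_old, time_new):
    """Relative performance of an improved system vs. the original:
    the ratio of the original execution time to the improved one."""
    return time_old / time_new

# Hypothetical: the original system takes 60 s, the improved one 20 s.
print(speedup(60.0, 20.0))  # 3.0, i.e. the improved system is 3x faster
```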

The notion of speedup was established by Amdahl's law, which was particularly focused on parallel processing.

Speedup can be defined for two different types of quantities: latency and throughput.

Speedup is dimensionless and defined differently for each type of quantity so that it is a consistent metric.

Speedup in latency is defined as S_latency = L1 / L2, where L1 is the latency of the original system and L2 is the latency of the improved system. Speedup in throughput is defined by the formula:[3] S_throughput = Q2 / Q1, where Q1 and Q2 are the throughputs of the original and the improved system, respectively.

As an example, suppose we are testing the effectiveness of a branch predictor on the execution of a program.
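Note that the ratio is inverted between the two quantities: latency improves by getting smaller, throughput by getting larger. A minimal sketch of both, with hypothetical figures:

```python
def speedup_in_latency(l_old, l_new):
    """Speedup in latency: L1 / L2 (old latency over new latency)."""
    return l_old / l_new

def speedup_in_throughput(q_old, q_new):
    """Speedup in throughput: Q2 / Q1 (new throughput over old throughput)."""
    return q_new / q_old

# Hypothetical figures: latency drops from 50 ms to 20 ms,
# throughput rises from 40 to 100 tasks per second.
print(speedup_in_latency(50.0, 20.0))      # 2.5
print(speedup_in_throughput(40.0, 100.0))  # 2.5
```

Both ratios are dimensionless and exceed 1 whenever the system actually improved, which is what makes speedup a consistent metric across the two quantity types.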

Efficiency is a metric of the utilization of the resources of the improved system, defined as eta = S / s, where S is the speedup and s is the number of processors (the amount of resources) used to obtain it. Its value is typically between 0 and 1.
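A minimal sketch of the efficiency computation, using hypothetical measurements:

```python
def efficiency(speedup, num_processors):
    """Efficiency: the speedup divided by the number of processors
    (resources) used to obtain it; 1.0 means perfect linear scaling."""
    return speedup / num_processors

# Hypothetical: 4 processors yield a 3.2x speedup over 1 processor.
print(efficiency(3.2, 4))  # 0.8, i.e. 80% resource utilization
```

An efficiency close to 1 indicates that adding processors pays off almost linearly, while a low value suggests overheads such as communication or load imbalance.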

Occasionally a speedup greater than the number of processors, called super-linear speedup, is observed in parallel computing. One common cause is the cache effect: with the larger accumulated cache size across the processors, more or even all of the working set can fit into caches, and the memory access time drops dramatically, which causes extra speedup in addition to that from the actual computation.