Instruction-level parallelism

Instruction-level parallelism (ILP) is the parallel or simultaneous execution of a sequence of instructions in a computer program. More specifically, ILP refers to the average number of instructions run per step of this parallel execution.
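A minimal sketch of how ILP is counted, using a classic three-instruction example (the variable names and values here are illustrative assumptions; only the dependency structure matters):

```python
# Hypothetical straight-line program fragment.
a, b, c, d = 1, 2, 3, 4

# Step 1: these two instructions share no data dependence, so a
# machine with enough functional units can issue them in parallel.
e = a + b
f = c + d

# Step 2: this instruction reads both e and f, so it must wait for
# step 1 to finish before it can execute.
m = e * f

# Three instructions complete in two parallel steps,
# so the ILP of this fragment is 3/2 = 1.5.
ilp = 3 / 2
print(m, ilp)  # → 21 1.5
```

A purely sequential machine would need three steps for the same fragment; the gap between the two counts is the parallelism a compiler or processor can try to exploit.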

A goal of compiler and processor designers is to identify and take advantage of as much ILP as possible.

In recent years, ILP techniques have been used to provide performance improvements in spite of the growing disparity between processor operating frequencies and memory access times (early ILP designs such as the IBM System/360 Model 91 used ILP techniques to overcome the limitations imposed by a relatively small register file).

On modern processors, a cache miss penalty to main memory costs several hundred CPU cycles.

While in principle it is possible to use ILP to tolerate even such memory latencies, the associated resource and power dissipation costs are disproportionate.

Hence, the aforementioned techniques prove inadequate to keep the CPU from stalling for the off-chip data.

Atanasoff–Berry computer, the first computer with parallel processing[1]