In computer architecture, Gustafson's law (or Gustafson–Barsis's law[1]) gives the speedup in the execution time of a task that theoretically gains from parallel computing, using a hypothetical run of the task on a single-core machine as the baseline.
To put it another way, it is the theoretical "slowdown" of an already parallelized task if running on a serial machine.
It is named after computer scientist John L. Gustafson and his colleague Edwin H. Barsis, and was presented in the article Reevaluating Amdahl's Law in 1988.
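The law is commonly stated as follows (the standard formulation, using the conventional symbols rather than ones quoted from the article):

\[ S = s + pN = s + (1 - s)N = N + (1 - N)s, \qquad s + p = 1, \]

where S is the theoretical speedup of the task, N is the number of processors, and s and p are the fractions of time spent executing the serial part and the parallelizable part of the task, as measured on the parallel system.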
Gustafson's law addresses the shortcomings of Amdahl's law, which is based on the assumption of a fixed problem size, that is, of an execution workload that does not change as the resources improve.
Gustafson's law instead proposes that programmers tend to increase the size of problems to fully exploit the computing power that becomes available as the resources improve.[2]
Gustafson and his colleagues further observed from their workloads that the time for the serial part typically does not grow as the problem and the system scale,[2] that is, the serial fraction s stays roughly constant while the parallel workload grows with the number of processors N.
With these observations, Gustafson "expect[ed] to extend [their] success [on parallel computing] to a broader range of applications and even larger values for N".[2]
The impact of Gustafson's law was to shift[citation needed] research goals to select or reformulate problems so that solving a larger problem in the same amount of time would be possible.
In a way the law redefines efficiency, due to the possibility that limitations imposed by the sequential part of a program may be countered by increasing the total amount of computation.
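To make the claim about efficiency concrete (a standard reading of the law, not a passage from the source): dividing the scaled speedup S = s + (1 − s)N by the number of processors gives

\[ \frac{S}{N} = \frac{s + (1 - s)N}{N} = (1 - s) + \frac{s}{N} \longrightarrow 1 - s \quad (N \to \infty), \]

so as long as the serial time fraction s does not grow with the system, efficiency approaches a constant instead of falling toward zero as it does under the fixed-size assumption of Amdahl's law.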
The execution time of a program running on a parallel system can be split into two parts: a part that does not benefit from the improvement of the resources (the serial part), and a part that benefits from it (the parallelizable part). Example: a computer program that processes files from disk.
A part of that program may scan the directory of the disk and create a list of files internally in memory.
After that, another part of the program passes each file to a separate thread for processing. The part that scans the directory and creates the file list cannot be sped up on a parallel computer, but the part that processes the files can.
Without loss of generality, let the total execution time on the parallel system be 1, and let s and p = 1 − s be the fractions of that time spent in the serial part and in the parallelizable part, respectively. If the same task were run on a serial system, the serial part would still take time s, while the parallelizable part, no longer spread across N processors, would take time pN. The serial execution time is therefore s + pN, and comparing it with the parallel execution time of 1 gives the scaled speedup S = s + pN = s + (1 − s)N.
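A minimal sketch of this fixed-time bookkeeping (illustrative Python; the function name and the sample values are assumptions for the example, not taken from the article):

def gustafson_speedup(s: float, n: int) -> float:
    """Scaled speedup S = s + (1 - s) * n.

    s: fraction of the parallel-system run spent in the serial part
    n: number of processors
    """
    p = 1.0 - s          # fraction spent in the parallelizable part
    return s + p * n     # time the same work would need on one processor

# Example: the directory scan takes 10% of the parallel run,
# the per-file processing takes the remaining 90%.
for n in (1, 8, 64, 1024):
    print(f"N = {n:5d}  ->  scaled speedup = {gustafson_speedup(0.10, n):7.1f}")

With 1024 threads, the same fixed-length run represents roughly 921.7 times as much work as a single processor could complete in that time.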
Amdahl's law presupposes that the computing requirements will stay the same, given increased processing power.
In other words, an analysis of the same data will take less time given more computing power.
Gustafson, on the other hand, argues that more computing power will cause the data to be more carefully and fully analyzed: pixel by pixel or unit by unit, rather than on a larger scale.
Where it would not have been possible or practical to simulate the impact of nuclear detonation on every building, car, and their contents (including furniture, structure strength, etc.)
because such a calculation would have taken more time than was available to provide an answer, the increase in computing power will prompt researchers to add more data and to simulate more variables more fully, giving a more accurate result.
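For an illustrative sense of the contrast (the numbers here are assumed, not from the article), take a workload with serial fraction s = 0.1 on N = 100 processors:

\[ S_{\text{Amdahl}} = \frac{1}{s + (1 - s)/N} = \frac{1}{0.1 + 0.9/100} \approx 9.2, \qquad S_{\text{Gustafson}} = s + (1 - s)N = 0.1 + 0.9 \times 100 = 90.1 . \]

The two laws define the serial fraction differently (relative to a fixed-size serial run versus relative to the time of the parallel run), so this is a qualitative contrast between fixed-size and fixed-time scaling rather than two answers to the same question.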
Amdahl's Law reveals a limitation in, for example, the ability of multiple cores to reduce the time it takes for a computer to boot to its operating system and be ready for use.
If the one-minute load time is acceptable to most users, then that is a starting point from which to increase the features and functions of the system.
Some problems do not have fundamentally larger datasets; for example, a dataset with one data point per world citizen grows by only a few percent per year.
The principal point of Gustafson's law is that such problems are not likely to be the most fruitful applications of parallelism.
Algorithms with nonlinear runtimes may find it hard to take advantage of parallelism "exposed" by Gustafson's law.
Snyder[3] points out that for an O(N³) algorithm, doubling the concurrency gives only about a 26% increase in problem size.
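The figure of roughly 26% follows from fixed-time scaling of a cubic-cost algorithm: if solving a problem of size N takes time proportional to N³, then doubling the compute available within the same wall-clock time only permits

\[ N' = 2^{1/3} N \approx 1.26\,N , \]

that is, a problem only about 26% larger.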
Hill and Marty[4] also emphasize that methods of speeding up sequential execution are still needed, even for multicore machines.
They point out that locally inefficient methods can be globally efficient when they reduce the sequential phase.
Furthermore, Woo and Lee[5] studied the implications of energy and power for future many-core processors based on Amdahl's law, showing that an asymmetric many-core processor can achieve the best possible energy efficiency by activating an optimal number of cores, provided the amount of parallelism is known prior to execution.
These modelling methods aim to predict system power efficiency and performance ranges, and they facilitate research and development at the hardware and system-software levels.