Gustafson's law
In computer architecture, Gustafson's law (or Gustafson–Barsis's law[1]) gives the theoretical speedup in latency of the execution of a task at fixed execution time that can be expected of a system whose resources are improved. It is named after computer scientist John L. Gustafson and his colleague Edwin H. Barsis, and was presented in the article Reevaluating Amdahl's Law in 1988.[2]
Definition
Gustafson's law can be formulated the following way:

$$S_\text{latency}(s) = 1 - p + sp$$

where
- $S_\text{latency}$ is the theoretical speedup in latency of the execution of the whole task;
- $s$ is the speedup in latency of the execution of the part of the task that benefits from the improvement of the resources of the system;
- $p$ is the percentage of the execution workload of the whole task concerning the part that benefits from the improvement of the resources of the system before the improvement.
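As a concrete illustration, the formula can be evaluated directly. The following is a minimal Python sketch (the function name and sample values are illustrative, not from the original article):

```python
def gustafson_speedup(p: float, s: float) -> float:
    """Theoretical speedup in latency at fixed execution time.

    p: fraction of the workload that benefits from the improved resources
    s: speedup of that part (e.g., the number of processors)
    """
    return (1 - p) + s * p

# Example: 95% of the workload benefits from a 64-fold resource improvement.
print(gustafson_speedup(p=0.95, s=64))  # ~60.85
```

Note that, unlike Amdahl's law, this speedup grows without bound as $s$ increases.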
Gustafson's law addresses the shortcomings of Amdahl's law, which is based on the assumption of a fixed problem size, that is, of an execution workload that does not change with the improvement of the resources. Gustafson's law instead proposes that programmers tend to set the size of problems to fully exploit the computing power that becomes available as the resources improve. Therefore, if faster equipment is available, larger problems can be solved within the same time.
The impact of Gustafson's law was to shift research goals toward selecting or reformulating problems so that solving a larger problem in the same amount of time would be possible. In a way the law redefines efficiency, due to the possibility that limitations imposed by the sequential part of a program may be countered by increasing the total amount of computation.
Derivation
A task executed by a system whose resources are improved compared to an initial similar system can be split into two parts:
- a part that does not benefit from the improvement of the resources of the system;
- a part that benefits from the improvement of the resources of the system.
Example. — A computer program that processes files from disk. A part of that program may scan the directory of the disk and create a list of files internally in memory. After that, another part of the program passes each file to a separate thread for processing. The part that scans the directory and creates the file list cannot be sped up on a parallel computer, but the part that processes the files can.
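A minimal sketch of such a program, using Python's standard library (the per-file work shown is a placeholder of our own choosing):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def process(path: str) -> int:
    # Stand-in for per-file work; here, just count the bytes.
    with open(path, "rb") as f:
        return len(f.read())

# Serial part: scanning the directory and building the file list.
files = [entry.path for entry in os.scandir(".") if entry.is_file()]

# Parallel part: each file is handed to a separate thread.
with ThreadPoolExecutor() as pool:
    sizes = list(pool.map(process, files))

print(sum(sizes))
```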
The execution workload of the whole task before the improvement of the resources of the system is denoted $W$. It includes the execution workload of the part that does not benefit from the improvement of the resources and the execution workload of the one that benefits from it. The fraction of the execution workload that would benefit from the improvement of the resources is denoted by $p$. The fraction concerning the part that would not benefit from it is therefore $1 - p$. Then

$$W = (1-p)W + pW.$$

It is the execution of the part that benefits from the improvement of the resources that is sped up by the factor $s$ after the improvement of the resources. Consequently, the execution workload of the part that does not benefit from it remains the same. The theoretical execution workload $W(s)$ of the whole task after the improvement of the resources is then

$$W(s) = (1-p)W + spW.$$

Gustafson's law gives the theoretical speedup in latency of the execution of the whole task at fixed time $T$, which yields

$$S_\text{latency}(s) = \frac{TW(s)}{TW} = \frac{W(s)}{W} = 1 - p + sp.$$
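The derivation can be checked numerically; the values below are illustrative, not taken from the article:

```python
import math

# Illustrative check: W = 100 units of work, of which p = 0.8 benefits
# from an s = 4-fold improvement of the resources.
W, p, s = 100.0, 0.8, 4.0

W_scaled = (1 - p) * W + s * p * W    # workload finished in the same fixed time
speedup = W_scaled / W                # S_latency(s) = W(s) / W

assert math.isclose(speedup, (1 - p) + s * p)  # matches 1 - p + sp
print(speedup)  # ~3.4
```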
A driving metaphor
Amdahl's law approximately suggests:

“Suppose a car is traveling between two cities 60 miles apart, and has already spent one hour traveling half the distance at 30 mph. No matter how fast you drive the last half, it is impossible to achieve an average of 90 mph before reaching the second city: the trip has already taken one hour, and the total distance is only 60 miles, so even driving infinitely fast you would achieve an average of only 60 mph.”
Gustafson's law approximately states:

“Suppose a car has already been traveling for some time at less than 90 mph. Given enough time and distance to travel, the car's average speed can always eventually reach 90 mph, no matter how long or how slowly it has already traveled. For example, if the car spent one hour at 30 mph, it could achieve this by driving at 120 mph for two additional hours, or at 150 mph for one additional hour, and so on.”
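The arithmetic behind both metaphors can be verified directly; a small sketch using the figures quoted above:

```python
# Amdahl scenario: 30 miles covered in 1 hour; even an instantaneous second
# half caps the average at total distance / elapsed time.
print(60 / 1.0)  # 60.0 mph, so a 90 mph average is unreachable

# Gustafson scenario: 1 hour at 30 mph, then 2 more hours at 120 mph...
print((30 * 1 + 120 * 2) / (1 + 2))  # 90.0 mph
# ...or 1 more hour at 150 mph.
print((30 * 1 + 150 * 1) / (1 + 1))  # 90.0 mph
```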
Applications
Application in research
Amdahl's law presupposes that the computing requirements will stay the same, given increased processing power. In other words, an analysis of the same data will take less time given more computing power.
Gustafson, on the other hand, argues that more computing power will cause the data to be analyzed more carefully and fully: pixel by pixel or unit by unit, rather than on a larger scale. Where it would not previously have been possible or practical to simulate the impact of a nuclear detonation on every building, car, and their contents (including furniture, structural strength, etc.), because such a calculation would have taken more time than was available to provide an answer, the increase in computing power will prompt researchers to add more data and simulate more variables, giving a more accurate result.
Application in everyday computer systems
Amdahl's law reveals a limitation in, for example, the ability of multiple cores to reduce the time it takes for a computer to boot to its operating system and be ready for use. Assuming the boot process was mostly parallel, quadrupling computing power on a system that took one minute to load might reduce the boot time to just over fifteen seconds. But greater and greater parallelization would eventually fail to make bootup go any faster, if any part of the boot process were inherently sequential.
Gustafson's law argues that a fourfold increase in computing power would instead lead to a similar increase in expectations of what the system will be capable of. If the one-minute load time is acceptable to most users, then that is a starting point from which to increase the features and functions of the system. The time taken to boot to the operating system will be the same, i.e., one minute, but the new system would include more graphical or user-friendly features.
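To make the contrast concrete, here is a rough sketch comparing the two viewpoints for the boot-time scenario; the 99% parallel fraction is an assumption chosen to match the “just over fifteen seconds” figure above, not a number from the article:

```python
def amdahl_speedup(p: float, s: float) -> float:
    # Fixed workload: speedup is capped by the serial fraction 1 - p.
    return 1 / ((1 - p) + p / s)

boot_seconds = 60.0
p = 0.99  # assumed parallel fraction of the boot process

# Amdahl's view: quadrupled computing power shrinks the boot time...
print(boot_seconds / amdahl_speedup(p, s=4))    # ~15.5 s
# ...but no amount of parallelism beats the serial floor.
print(boot_seconds / amdahl_speedup(p, s=1e9))  # ~0.6 s

# Gustafson's view: keep the one-minute boot fixed and grow the workload;
# with 4x the resources, 1 - p + 4p times as much work fits in that minute.
print((1 - p) + 4 * p)  # 3.97
```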
Limitations
Some problems do not have fundamentally larger datasets. For example, a dataset with one data point per world citizen grows at only a few percent per year. The principal point of Gustafson's law is that such problems are not likely to be the most fruitful applications of parallelism.
Algorithms with nonlinear runtimes may find it hard to take advantage of parallelism "exposed" by Gustafson's law. Snyder[3] points out that an O(N³) algorithm means that doubling the concurrency gives only about a 26% increase in problem size. Thus, while it may be possible to occupy vast concurrency, doing so may bring little advantage over the original, less concurrent solution; in practice, however, there have still been considerable improvements.
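Snyder's figure follows from simple algebra: if runtime scales as N³, increasing the compute budget by a factor c at fixed execution time allows the problem size to grow only by a factor of c^(1/3). A brief sketch of our own:

```python
# If runtime grows as N**3, increasing concurrency by a factor c at fixed
# execution time lets the problem size grow only by a factor of c ** (1/3).
for c in (2, 8, 1024):
    print(f"{c}x concurrency -> problem size x {c ** (1 / 3):.2f}")
# 2x    -> x 1.26   (the ~26% increase Snyder notes)
# 8x    -> x 2.00
# 1024x -> x 10.08
```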
Hill and Marty[4] also emphasize that methods of speeding up sequential execution are still needed, even for multicore machines. They point out that locally inefficient methods can be globally efficient when they reduce the sequential phase. Furthermore, Woo and Lee[5] studied the implications of energy and power for future many-core processors based on Amdahl's law, showing that an asymmetric many-core processor can achieve the best possible energy efficiency by activating an optimal number of cores, given that the amount of parallelism is known prior to execution.
References
- McCool, Michael D.; Robison, Arch D.; Reinders, James (2012). "2.5 Performance Theory". Structured Parallel Programming: Patterns for Efficient Computation. Elsevier. pp. 61–62. ISBN 978-0-12-415993-8.
- Gustafson, John L. (May 1988). "Reevaluating Amdahl's Law". Communications of the ACM. 31 (5): 532–533. doi:10.1145/42411.42415.
- Snyder, Lawrence (June 1986). "Type Architectures, Shared Memory, and The Corollary of Modest Potential" (PDF). Annu. Rev. Comput. Sci. 1: 289–317. doi:10.1146/annurev.cs.01.060186.001445.
- Hill, Mark D.; Marty, Michael R. (July 2008). "Amdahl's Law in the Multicore Era". IEEE Computer. 41 (7): 33–38. doi:10.1109/MC.2008.209. UW CS-TR-2007-1593.
- Dong Hyuk Woo; Hsien-Hsin S. Lee (December 2008). "Extending Amdahl's Law for Energy-Efficient Computing in the Many-Core Era". IEEE Computer. 41 (12): 24–31. doi:10.1109/MC.2008.530.