Instruction-level parallelism

Instruction-level parallelism (ILP) is a measure of how many of the operations in a computer program can be performed simultaneously. Consider the following program:

1. e = a + b
2. f = c + d
3. g = e * f

Operation 3 depends on the results of operations 1 and 2, so it cannot be calculated until both of them are complete. However, operations 1 and 2 do not depend on any other operation, so they can be calculated simultaneously. (See below for more on which operations depend on each other, and on the various types of instruction dependencies.) If we assume that each operation can be completed in one unit of time, then these three instructions can be completed in a total of two units of time, giving an ILP of 3/2.
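
To make the arithmetic concrete, here is a minimal Python sketch (the name asap_schedule and the representation of the program are invented for illustration) that derives this schedule automatically: every operation is placed in the earliest time unit in which all of its inputs are available, and the ILP is the number of operations divided by the number of time units used.

# Minimal sketch (hypothetical names): place each operation in the earliest
# cycle in which all of its dependencies have completed, then compute ILP.

def asap_schedule(deps):
    """deps maps each operation to the set of operations it depends on,
    listed in program order. Returns {op: cycle}."""
    cycle = {}
    for op in deps:
        cycle[op] = 1 + max((cycle[p] for p in deps[op]), default=0)
    return cycle

# The three-operation example: ops 1 and 2 are independent, op 3 needs both.
deps = {1: set(), 2: set(), 3: {1, 2}}
sched = asap_schedule(deps)
print(sched)                             # {1: 1, 2: 1, 3: 2}
print(len(deps) / max(sched.values()))   # ILP = 3 ops / 2 cycles = 1.5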

A goal of compiler and processor designers is to identify and take advantage of as much ILP as possible. Ordinary programs are typically written under a sequential execution model where instructions execute one after the other and in the order specified by the programmer. ILP allows the compiler and the processor to overlap the execution of multiple instructions or even to change the order in which instructions are executed.

How much ILP exists in programs is very application specific. In certain fields, such as graphics and scientific computing, the amount can be very large. However, workloads such as cryptography exhibit much less parallelism.

Micro-architectural techniques that are used to exploit ILP include:

  • Instruction pipelining, where the execution of multiple instructions can be partially overlapped.
  • Superscalar execution, in which multiple execution units are used to execute multiple instructions in parallel. In typical superscalar processors, the instructions executing simultaneously are adjacent in the original program order.
  • Out-of-order execution, where instructions execute in any order that does not violate data dependencies. Out-of-order execution is orthogonal to pipelining and superscalar execution in that it can be used with or without these other two techniques.
  • Register renaming, a technique used to avoid the unnecessary serialization of program operations imposed by the reuse of registers by those operations. Register renaming is used with out-of-order execution (see the sketch after this list).
  • Speculative execution, which allows the execution of complete instructions or parts of instructions before it is certain whether this execution should take place. A commonly used form of speculative execution is control-flow speculation, where instructions past a control-flow instruction (e.g., a branch) are executed before the target of the control-flow instruction is determined. Several other forms of speculative execution have been proposed and are in use, including speculative execution driven by value prediction, memory dependence prediction, and cache latency prediction.
  • Branch prediction, which is used to avoid stalling while control dependencies are resolved. Branch prediction is used with speculative execution.
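
As an illustration of register renaming, the following minimal Python sketch makes its own assumptions: a made-up three-operand instruction format (dest, src1, src2) and an invented pN naming scheme for physical registers. Giving every write a fresh physical register removes the serialization caused by register reuse (the write-after-read and write-after-write dependencies described below), while true dependencies are preserved by reading source operands through the current mapping.

# Minimal register-renaming sketch (hypothetical instruction format):
# each instruction is (dest, src1, src2).

from itertools import count

def rename(program):
    fresh = count()    # supply of new physical register numbers
    current = {}       # current architectural -> physical mapping
    renamed = []
    for dest, src1, src2 in program:
        # read sources through the current mapping (preserves true deps)
        s1 = current.get(src1, src1)
        s2 = current.get(src2, src2)
        # allocate a fresh physical register for every destination write
        p = f"p{next(fresh)}"
        current[dest] = p
        renamed.append((p, s1, s2))
    return renamed

# 'r1' is written twice; renaming gives each write its own register, so the
# two computations no longer serialize on the name 'r1'.
prog = [("r1", "r2", "r3"), ("r4", "r1", "r5"), ("r1", "r6", "r7")]
for ins in rename(prog):
    print(ins)   # ('p0','r2','r3'), ('p1','p0','r5'), ('p2','r6','r7')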

Current implementations of out-of-order execution extract ILP from ordinary programs dynamically, i.e., while the program is executing and without any help from the compiler. An alternative is to extract this parallelism at compile time and somehow convey this information to the hardware. Due to the complexity of scaling the out-of-order execution technique, the industry has re-examined instruction sets that explicitly encode multiple independent operations per instruction. These instruction set types include:

VLIW (very long instruction word) is an example of an instruction set architecture that allows the explicit specification of independence across instructions. VLIW and out-of-order execution are orthogonal techniques, since in principle they can be used together.
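
As a sketch of what explicit specification of independence can look like, the following hypothetical Python fragment loosely mimics the compile-time side of a VLIW-style machine: a compiler-like pass greedily packs mutually independent operations into fixed-width bundles, so the hardware can issue a whole bundle at once without checking dependencies itself. The function name, bundle width, and program representation are invented for illustration.

# Minimal sketch: greedily pack independent operations into fixed-width
# bundles (all names hypothetical).

def pack_bundles(ops, deps, width=2):
    """ops: operation ids in program order; deps[o]: ops that o depends on.
    Returns a list of bundles, each a list of mutually independent ops."""
    bundles = []
    done = set()
    remaining = list(ops)
    while remaining:
        bundle = []
        for o in list(remaining):
            # o is ready only if all its deps finished in earlier bundles,
            # so two ops in the same bundle can never depend on each other
            if deps[o] <= done and len(bundle) < width:
                bundle.append(o)
                remaining.remove(o)
        done.update(bundle)
        bundles.append(bundle)
    return bundles

# The introductory example: ops 1 and 2 share a bundle, op 3 must wait.
print(pack_bundles([1, 2, 3], {1: set(), 2: set(), 3: {1, 2}}))
# [[1, 2], [3]]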

Dataflow architectures are another class of architectures in which ILP is explicitly specified.

In recent years, ILP techniques have been used to provide performance improvements in spite of the growing disparity between processor operating frequencies and memory access times (early ILP designs such as the IBM System/360 Model 91 used ILP techniques to overcome the limitations imposed by a relatively small register file). Presently, a cache miss to main memory costs several hundred CPU cycles. While in principle it is possible to use ILP to tolerate even such memory latencies, the associated resource and power dissipation costs are disproportionate. Moreover, the complexity and often the latency of the underlying hardware structures result in a reduced operating frequency, further reducing any benefits. Hence, the aforementioned techniques prove inadequate to keep the CPU from stalling for off-chip data. Instead, the industry is moving towards exploiting higher levels of parallelism through techniques such as multiprocessing and multithreading. [1]

Dependencies

There are two types of dependencies: data and control. Conventional programs are written assuming the sequential execution model. Under this model, instructions execute one after the other, atomically (i.e., at any given point in time only one instruction is executed) and in the order specified by the program.

There are various dependencies that may hinder instruction-level parallelism: parallel execution of multiple instructions may lead to a different instruction execution order, and in the presence of a dependency, a different instruction ordering will change the final effect of a set of instructions. The data dependencies are the following:

True dependency

A true dependency, also known as a data dependency, occurs when an instruction depends on the result of a previous instruction:

1. A = 3
2. B = A
3. C = B

Instruction 3 is truly dependent on instruction 2, as the final value of C depends on the instruction updating B. Instruction 2 is truly dependent on instruction 1, as the final value of B depends on the instruction updating A. Since instruction 3 is truly dependent upon instruction 2 and instruction 2 is truly dependent on instruction 1, instruction 3 is also truly dependent on instruction 1. Instruction-level parallelism is therefore not an option in this example. [2]

Anti-dependency

An anti-dependency occurs when an instruction requires a value that is later updated. In the example below, instruction 3 anti-depends on instruction 2: the ordering of these instructions cannot be changed, nor can they be executed in parallel (which could amount to changing the ordering), as this would affect the final value of A.

1. B = 3
2. A = B + 1
3. B = 7

An anti-dependency is an example of a name dependency. That is, renaming of variables could remove the dependency, as in the below example:

1. B = 3
N. B2 = B
2. A = B2 + 1
3. B = 7

A new variable, B2, has been declared as a copy of B in a new instruction, instruction N. The anti-dependency between instructions 2 and 3 has been removed, meaning that these instructions may now be executed in parallel. However, the modification has introduced a new dependency: instruction 2 is now truly dependent on instruction N, which is truly dependent upon instruction 1. Being true dependencies, these new dependencies cannot be safely removed. [2]

Output dependency

An output dependency occurs when the ordering of instructions affects the final output value of a variable. In the example below, there is an output dependency between instructions 3 and 1; changing the ordering of instructions in this example would change the final value of A, so these instructions cannot be executed in parallel.

1. A = 2 * X
2. B = A / 3
3. A = 9 * Y

As with anti-dependencies, output dependencies are name dependencies. That is, they may be removed through renaming of variables, as in the below modification of the above example:

1. A2 = 2 * X
2. B = A2 / 3
3. A = 9 * Y

A commonly used naming convention for data dependencies is the following: read-after-write or RAW (true dependency), write-after-write or WAW (output dependency), and write-after-read or WAR (anti-dependency).
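
This convention suggests a mechanical test. The following minimal sketch (with an invented representation: each instruction reduced to the sets of variable names it reads and writes) classifies the dependence of a later instruction on an earlier one by intersecting those sets.

# Minimal sketch: classify the dependence of a later instruction on an
# earlier one from their read/write sets (hypothetical representation).

def classify(earlier, later):
    """Each instruction is (reads, writes) as sets of variable names.
    Returns the set of dependence types of 'later' on 'earlier'."""
    kinds = set()
    if earlier[1] & later[0]:
        kinds.add("true (RAW)")     # later reads what earlier wrote
    if earlier[0] & later[1]:
        kinds.add("anti (WAR)")     # later overwrites what earlier read
    if earlier[1] & later[1]:
        kinds.add("output (WAW)")   # both write the same name
    return kinds

# The anti-dependency example: A = B + 1 followed by B = 7.
# Instruction 2 reads B, instruction 3 writes it.
print(classify(({"B"}, {"A"}), (set(), {"B"})))   # {'anti (WAR)'}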

An instruction B is control dependent on a preceding instruction A if the outcome of A determines whether B should execute. In the following example, instruction 2 is control dependent on instruction 1: it executes only if the branch in instruction 1 is not taken. [2]

1.         if a == b goto AFTER
2.         A = 2 * X
3. AFTER:

References

  1. McKee, Sally A. (2004). "Reflections on the Memory Wall".
  2. Hennessy, John L.; Patterson, David A. (2003). Computer Architecture: A Quantitative Approach (3rd ed.). Morgan Kaufmann. ISBN 1-55860-724-2.
