Out-of-order execution

In computer engineering, out-of-order execution (OoOE or OoO) is a paradigm used in most high-performance microprocessors to make use of instruction cycles that would otherwise be wasted while the processor waits for a slow operation, most often data arriving from memory, to complete. Most modern CPU designs include support for out-of-order execution.

History

Out-of-order execution is a restricted form of data flow computation, which was a major research area in computer architecture in the 1970s and early 1980s. Important academic research in this subject was led by Yale Patt and his HPSm simulator. A paper by James E. Smith and A. R. Pleszkun, published in 1985, completed the scheme by describing how the precise behavior of exceptions could be maintained in out-of-order machines.

The first machine to use out-of-order execution was probably the CDC 6600 (1964), which used a scoreboard to resolve conflicts. About three years later, the IBM 360/91 (1966) introduced Tomasulo's algorithm. IBM also introduced the first out-of-order microprocessor, the POWER1 (1990), for its RS/6000. It was the release of the Intel Pentium Pro (1995) that brought the technology to the mainstream. Most other vendors also started to produce out-of-order designs: the IBM/Motorola PowerPC 601 (1992/1993), the Fujitsu/HAL SPARC64 I (1995), the HP PA-8000 (1996), the MIPS R10000 (1996), the AMD K5 (1996), and the DEC Alpha 21264 (1998). Notable exceptions to this trend are Sun's UltraSPARC, HP/Intel's IA-64, and Transmeta's Crusoe.

The logical complexity of out-of-order schemes is the reason that this technique did not reach mainstream machines until the mid-1990s. Many low-end processors meant for cost-sensitive markets still do not use this paradigm because of the large silicon area required to build this class of machine.

Basic concept

Summary: Earlier in-order processors handled only one instruction at a time, waiting if necessary for its operands to arrive (from memory, for instance). Newer out-of-order processors look at several upcoming instructions at once; if the next instruction has to wait for its operands, they can meanwhile execute later instructions whose operands are ready. They do, however, commit the instructions' results in program order.

In-order processors

In earlier processors, the processing of instructions is normally done in these steps (an illustrative sketch follows the list):

  1. Instruction fetch.
  2. If the input operands are available (in registers, for instance), the instruction is dispatched to the appropriate functional unit. If one or more operands are unavailable during the current clock cycle (generally because they are being fetched from memory), the processor stalls until they are available.
  3. The instruction is executed by the appropriate functional unit.
  4. The functional unit writes the results back to the register file.
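
A minimal sketch of this behaviour, written here in Python purely for illustration (the instruction list, register names, and latencies are invented for the example, and only one instruction is modelled in flight at a time): as soon as one instruction has to wait for a memory operand, everything behind it waits too.

    # Toy model of in-order processing (illustrative only).
    # Each instruction is (text, source registers, destination register, latency).
    program = [
        ("load r1, [mem]",  [],           "r1", 10),  # long-latency memory load
        ("add  r2, r1, r1", ["r1"],       "r2", 1),   # depends on the load
        ("mul  r3, r4, r5", ["r4", "r5"], "r3", 1),   # independent of the load
        ("sub  r6, r3, r4", ["r3", "r4"], "r6", 1),
    ]

    ready_at = {"r4": 0, "r5": 0}   # cycle at which each register value is available
    cycle = 0
    for text, srcs, dest, latency in program:
        # Step 2: stall until every input operand is available.
        cycle = max([cycle] + [ready_at[s] for s in srcs])
        # Steps 3-4: execute, then write the result back.
        cycle += latency
        ready_at[dest] = cycle
        print(f"{text:<16} finishes at cycle {cycle}")

In this trace the independent mul and sub cannot finish until cycles 12 and 13, even though their operands were ready at cycle 0, because they sit behind the stalled load and add.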

Out-of-order processors

This new paradigm breaks up the processing of instructions into these steps (a companion sketch follows the list):

  1. Instruction fetch.
  2. Instruction dispatch to an instruction queue (also called instruction buffer or reservation stations).
  3. The instruction waits in the queue until its input operands are available. The instruction is then allowed to leave the queue before earlier, older instructions.
  4. The instruction is issued to the appropriate functional unit and executed by that unit.
  5. The results are queued.
  6. Only after all older instructions have had their results written back to the register file is this result written back to the register file. This is called the graduation or retire stage.
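
To make the contrast concrete, the same invented instruction list from the in-order sketch can be run through a toy out-of-order model (again a sketch, not any particular microarchitecture): each instruction issues as soon as its operands are ready, but results retire strictly in program order.

    # Toy model of out-of-order issue with in-order retirement (illustrative only).
    program = [
        ("load r1, [mem]",  [],           "r1", 10),
        ("add  r2, r1, r1", ["r1"],       "r2", 1),
        ("mul  r3, r4, r5", ["r4", "r5"], "r3", 1),
        ("sub  r6, r3, r4", ["r3", "r4"], "r6", 1),
    ]

    ready_at = {"r4": 0, "r5": 0}
    finish = {}
    # Steps 2-4: each instruction issues as soon as its operands are ready,
    # regardless of program order (no structural limits are modelled).
    for i, (text, srcs, dest, latency) in enumerate(program):
        start = max([0] + [ready_at[s] for s in srcs])
        finish[i] = start + latency
        ready_at[dest] = finish[i]
        print(f"{text:<16} executes in cycles {start}..{finish[i]}")

    # Steps 5-6: results wait in a queue and retire in program order.
    retired = 0
    for i, (text, _, _, _) in enumerate(program):
        retired = max(retired, finish[i])
        print(f"{text:<16} retires at cycle {retired}")

Here the independent mul and sub execute during the load's latency (finishing at cycles 1 and 2) but still retire after the load and add, so the externally visible order of results matches program order.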

The key concept of OoO processing is to allow the processor to avoid a class of stalls that occur when the data needed to perform an operation is not available. In the outline above, the OoO processor avoids the stall that occurs in step (2) of the in-order processor when the instruction is not completely ready to be processed due to missing data.

OoO processors fill these "slots" in time with other instructions that are ready, then re-order the results at the end to make it appear that the instructions were processed as normal. The way the instructions are ordered in the original computer code is known as program order; in the processor they are handled in data order, the order in which the data (operands) become available in the processor's registers. Fairly complex circuitry is needed to convert from one ordering to the other and to maintain a logical ordering of the output; the processor itself runs the instructions in a seemingly arbitrary order.

The benefit of OoO processing grows as the instruction pipeline deepens and the speed difference between main memory (or cache memory) and the processor widens. On modern machines, the processor runs many times faster than the memory, so during the time an in-order processor spends waiting for data to arrive, it could have processed a large number of instructions.
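
For example, if a main-memory access costs on the order of 200 processor cycles and the core can otherwise complete one instruction per cycle, a single in-order stall on such an access forfeits roughly 200 instructions' worth of work, some of which an out-of-order processor can recover by executing independent instructions in the meantime (the figures here are only illustrative).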

Dispatch and issue decoupling allows out-of-order issue

One of the differences introduced by the new paradigm is the use of queues, which allow the dispatch step to be decoupled from the issue step and the graduation stage to be decoupled from the execute stage. An early name for the paradigm was decoupled architecture. In earlier in-order processors, these stages operated in a fairly lock-step, pipelined fashion.

To avoid false operand dependencies, which would reduce how often instructions can be issued out of order, a technique called register renaming is used. In this scheme, there are more physical registers than are defined by the architecture. The physical registers are tagged so that multiple versions of the same architectural register can exist at the same time.
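
The following sketch shows the idea of renaming (the rename table and register numbering are invented for illustration, not taken from any particular design): every write to an architectural register is assigned a fresh physical register, so a later, unrelated reuse of the same architectural name no longer creates a false write-after-write or write-after-read dependency.

    # Toy register renaming (illustrative only).
    from itertools import count

    phys = count()    # supply of fresh physical register numbers
    rename_map = {}   # architectural register name -> current physical register

    def rename(dest, srcs):
        renamed_srcs = [rename_map[s] for s in srcs]   # sources read the latest version
        rename_map[dest] = next(phys)                  # destination gets a fresh register
        return rename_map[dest], renamed_srcs

    # The architectural register r1 is written twice; after renaming, the two
    # writes land in different physical registers, so the second pair of
    # instructions is independent of the first pair.
    for dest, srcs in [("r1", []), ("r2", ["r1"]), ("r1", []), ("r3", ["r1"])]:
        p_dest, p_srcs = rename(dest, srcs)
        print(f"{dest} <- op{srcs}  becomes  p{p_dest} <- op{p_srcs}")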

Execute and writeback decoupling allows program restart

The queue for results is necessary to resolve issues such as branch mispredictions and exceptions/traps. The results queue allows programs to be restarted after an exception, which requires the instructions to be completed in program order. The queue allows results to be discarded due to mispredictions on older branch instructions and exceptions taken on older instructions.

The ability to issue instructions past branches which have yet to resolve is known as speculative execution.
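
As a sketch of how the results queue supports this (the entries below are invented for illustration): completed results sit in the queue until they reach the head, and when an older branch turns out to have been mispredicted, every younger, speculatively executed result is discarded instead of being written to the register file.

    # Toy reorder buffer / results queue (illustrative only).
    from collections import deque

    rob = deque([
        {"instr": "add r2, r1, r1", "done": True,  "mispredicted": False},
        {"instr": "beq r2, r0, L1", "done": True,  "mispredicted": True},   # branch resolves as mispredicted
        {"instr": "mul r3, r4, r5", "done": True,  "mispredicted": False},  # executed speculatively
        {"instr": "sub r6, r3, r4", "done": False, "mispredicted": False},  # still executing
    ])

    while rob:
        head = rob[0]
        if not head["done"]:
            break                          # oldest instruction not finished yet: wait
        rob.popleft()
        print("retire :", head["instr"])   # all older work has retired, safe to write back
        if head["mispredicted"]:
            # Discard every younger, speculative entry and refetch the correct path.
            for entry in rob:
                print("discard:", entry["instr"])
            rob.clear()
            break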

Micro-architectural choices

  • Are the instructions dispatched to a centralized queue or to multiple distributed queues?
IBM PowerPC processors use queues that are distributed among the different functional units, while other out-of-order processors use a centralized queue. IBM uses the term reservation stations for its distributed queues.
  • Is there an actual results queue or are the results written directly into a register file? For the latter, the queueing function is handled by register maps that hold the register renaming information for each instruction in flight.
Early Intel out-of-order processors use a results queue called a re-order buffer, while most later out-of-order processors use register maps.
