SIMD

From Wikipedia, the free encyclopedia

Flynn's Taxonomy

                   Single Instruction    Multiple Instruction
  Single Data      SISD                  MISD
  Multiple Data    SIMD                  MIMD

In computing, SIMD (Single Instruction, Multiple Data) is a technique used to achieve data-level parallelism, as in a vector or array processor. SIMD was first popularized in large-scale supercomputers (in contrast to MIMD parallelization), but smaller-scale SIMD operations have since become widespread in personal computer hardware; today the term is associated almost entirely with these smaller units.

In the past there were a number of dedicated processors for this sort of task, commonly referred to as Digital Signal Processors, or DSPs. The main difference between SIMD and a DSP is that the latter were complete processors with their own (often difficult to use) instruction set, whereas SIMD designs rely on the general-purpose portions of the CPU to handle the program details, with the SIMD instructions handling only the data manipulation. DSPs also tend to include instructions for specific types of data, such as sound or video, whereas SIMD systems are considerably more general-purpose.


Advantages

An application that may take advantage of SIMD is one where the same value is being added to (or subtracted from) a large number of data points, a common operation in many multimedia applications. One example would be changing the brightness of an image. Each pixel of an image consists of three values for the brightness of the red, green and blue portions of the color. To change the brightness, the R, G and B values are read from memory, a value is added to (or subtracted from) each of them, and the resulting values are written back out to memory.

With a SIMD processor there are two improvements to this process. First, the data is understood to be in blocks, and a number of values can be loaded all at once. Instead of a series of instructions saying "get this pixel, now get the next pixel", a SIMD processor will have a single instruction that effectively says "get lots of pixels" ("lots" is a number that varies from design to design). For a variety of reasons, this can take much less time than loading each value one by one as in a traditional CPU design.

Another advantage is that SIMD systems typically include only those instructions that can be applied to all of the data in one operation. In other words, if the SIMD system works by loading up eight data points at once, the add operation being applied to the data will happen to all eight values at the same time. Although the same is true for any superscalar processor design, the level of parallelism in a SIMD system is typically much higher.
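As a concrete sketch of the brightness example above, here is a minimal C version, first as a plain scalar loop and then using Intel's SSE2 intrinsics so that sixteen 8-bit channel values are processed per instruction. The function names are illustrative, and SSE2 is only one possible target; the same idea applies to AltiVec, NEON and the other instruction sets discussed below.

    #include <emmintrin.h>   /* SSE2 intrinsics (assumes an x86 CPU with SSE2) */
    #include <stddef.h>
    #include <stdint.h>

    /* Scalar version: one channel value per iteration. */
    void brighten_scalar(uint8_t *pixels, size_t n, uint8_t delta)
    {
        for (size_t i = 0; i < n; i++) {
            unsigned v = pixels[i] + delta;
            pixels[i] = v > 255 ? 255 : (uint8_t)v;   /* clamp to 255 */
        }
    }

    /* SIMD version: sixteen channel values per iteration. */
    void brighten_sse2(uint8_t *pixels, size_t n, uint8_t delta)
    {
        __m128i d = _mm_set1_epi8((char)delta);       /* broadcast delta into all 16 lanes */
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m128i v = _mm_loadu_si128((__m128i *)(pixels + i));  /* load 16 bytes at once */
            v = _mm_adds_epu8(v, d);                  /* saturating add, all 16 lanes in one instruction */
            _mm_storeu_si128((__m128i *)(pixels + i), v);
        }
        for (; i < n; i++) {                          /* scalar tail for leftover values */
            unsigned v = pixels[i] + delta;
            pixels[i] = v > 255 ? 255 : (uint8_t)v;
        }
    }

The SIMD loop issues one load, one add and one store per sixteen values, which is the "get lots of pixels" behavior described above.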

Disadvantages

  • Many SIMD designers are hampered by design considerations outside their control. One of these is the cost of adding registers to hold the data being processed. Ideally the SIMD units of a CPU would have their own registers, but many designs are forced for practical reasons to re-use existing CPU registers, typically the floating-point registers. These tend to be 64 bits in size, smaller than optimal for SIMD use, and sharing them leads to problems if code attempts to use both SIMD and normal floating-point instructions at the same time, at which point the two sets of operations fight over the registers. Such a system was used in Intel's first attempt at SIMD, MMX, and the performance problems were such that it saw very little use. However, more recent x86 processor designs from Intel and AMD (as of late 2006) have eliminated the problem of shared SIMD and floating-point registers by providing a new, separate bank of SIMD registers. Still, in most cases the programmer doesn't know which processor model the code will be run on.
  • Packing and unpacking data to/from SIMD registers can be time-consuming in some applications, reducing the efficiency gained. If each datum (say, an 8-bit value) needs to be gathered/dispersed separately rather than loading an entire register in one operation, it is advisable to reorganize the data if possible, or consider not using SIMD at all.
  • Though recently there has been a flurry of research activity into techniques for efficient compilation for SIMD, much remains to be done. From a compiler perspective, the state of the art for SIMD is hardly comparable to that for vector processing.
  • Because of the way SIMD works, the data must be well-aligned in memory before it can be loaded into the registers (a brief sketch of this follows the list). Even for simple stream processing like convolution this can be a challenging task.
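As a rough illustration of the alignment point above, again assuming an x86 target with SSE2: the aligned load intrinsic _mm_load_si128 faults if its address is not 16-byte aligned, so buffers intended for SIMD are usually obtained from an alignment-aware allocator. The helper below is a hypothetical example using C11's aligned_alloc; other allocators (posix_memalign, _aligned_malloc) serve the same purpose.

    #include <emmintrin.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Allocate a 16-byte-aligned buffer of 16-bit samples so that aligned
       SIMD loads and stores on it are legal. */
    int16_t *make_simd_buffer(size_t count)
    {
        /* C11 aligned_alloc requires the size to be a multiple of the alignment. */
        size_t bytes = (count * sizeof(int16_t) + 15) & ~(size_t)15;
        return (int16_t *)aligned_alloc(16, bytes);
    }

    /* _mm_load_si128 requires a 16-byte-aligned address; the unaligned form
       _mm_loadu_si128 works on any address but may be slower, especially on
       older processors. */
    __m128i load_eight_samples(const int16_t *buf)
    {
        return _mm_load_si128((const __m128i *)buf);   /* buf must be 16-byte aligned */
    }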

History

The first use of SIMD instructions was in vector supercomputers, and the approach was especially popularized by Cray in the 1970s.

Later machines used a much larger number of relatively simple processors; the Thinking Machines CM-1 and CM-2, with their thousands of simple one-bit processing elements, are well-known examples of this type, and there were many others from this era as well.

Hardware

Small-scale (64- or 128-bit) SIMD has become popular on general-purpose CPUs, starting in 1994 with HP's PA-RISC MAX instruction set. SIMD instructions can be found to one degree or another on most CPUs, including IBM's AltiVec and SPE for PowerPC, DEC's MVI for Alpha, Intel's MMX, SSE, SSE2, SSE3 and SSSE3, AMD's 3DNow!, ARC's ARC Video subsystem, SPARC's VIS, Sun's MAJC, HP's MAX for PA-RISC, ARM's NEON technology, and MIPS' MDMX (MaDMaX) and MIPS-3D.

The instruction sets generally include a full set of vector instructions, including multiply, invert and trace. These are particularly useful for processing 3D graphics, although modern graphics cards with embedded SIMD have largely taken over this task from the CPU. Some systems also include permute functions that re-pack elements inside vectors, making them particularly useful for data processing and compression.
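As a small illustrative sketch of such a permute (again assuming SSE2 as the target; AltiVec's vperm and NEON's table-lookup instructions play a similar role), the intrinsic _mm_shuffle_epi32 rearranges the four 32-bit lanes of a register in a single operation:

    #include <emmintrin.h>

    /* Reverse the four 32-bit lanes of a vector: (a, b, c, d) -> (d, c, b, a).
       The immediate operand selects which source lane feeds each destination lane. */
    __m128i reverse_lanes(__m128i v)
    {
        return _mm_shuffle_epi32(v, _MM_SHUFFLE(0, 1, 2, 3));
    }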

Software

Adoption of SIMD systems in personal computer software has been slow, due to a number of problems. One was that many of the early SIMD instruction sets tended to slow the overall performance of the system because of their re-use of existing floating-point registers. Other systems, such as MMX and 3DNow!, offered support for data types that were not interesting to a wide audience. Compilers also often lacked support, requiring programmers to resort to assembly language coding.

SIMD on x86 has had a slow start. The introduction of the various versions of SSE confused matters somewhat, but today the system seems to have settled down and newer compilers should result in more SIMD-enabled software.

Apple Computer had somewhat more success, even though it entered the SIMD market later than the rest. AltiVec offered a rich system and could be programmed using increasingly sophisticated compilers from Motorola, IBM and GNU, so assembly language was rarely needed. Additionally, many of the applications that would benefit from SIMD were supplied by Apple itself, for example iTunes and QuickTime. However, in 2006 Apple moved to Intel x86 processors, and Apple's APIs and development tools (Xcode) were rewritten to use SSE2 and SSE3 instead of AltiVec. Apple was the dominant purchaser of PowerPC chips from IBM and Freescale Semiconductor, and its departure seriously weakens the prospect of further AltiVec development on the platform.

Commercial applications

Though it has generally proven difficult to find sustainable commercial applications for SIMD processors, one that has had some measure of success is the GAPP, which was developed by Lockheed Martin and taken to the commercial sector by their spin-off Teranex. The GAPP's recent incarnations have become a powerful tool in real-time video processing applications such as conversion between various video standards and frame rates (NTSC to/from PAL, NTSC to/from HDTV formats, etc.), deinterlacing, noise reduction, adaptive video compression, and image enhancement.

A more ubiquitous application for SIMD is found in video games; nearly every modern video game console since the Sony PlayStation 2 has incorporated a SIMD processor somewhere in its architecture. 3D graphics applications tend to lend themselves well to SIMD processing as they rely heavily on operations with 4-dimensional vectors. Microsoft's Direct3D 9.0 now chooses at runtime processor-specific implementations of the math operations, including the use of SIMD-capable instructions.
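As a minimal sketch of why 4-component vector math maps so naturally onto SIMD (assuming SSE here, whose 128-bit registers hold exactly four single-precision floats), adding two such vectors is a single instruction:

    #include <xmmintrin.h>   /* SSE intrinsics */

    /* Add two 4-component vectors (x, y, z, w); all four sums are computed at once. */
    __m128 vec4_add(__m128 a, __m128 b)
    {
        return _mm_add_ps(a, b);
    }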

One of the more recent processors to use vector processing is the Cell Processor developed by IBM in cooperation with Toshiba and Sony. It uses a number of SIMD processors (each with independent RAM and controlled by a general purpose CPU) and is geared towards the huge datasets required by 3D and video processing applications.
