MISD
**Flynn's taxonomy**

| | Single instruction stream | Multiple instruction streams | Single program | Multiple programs |
|---|---|---|---|---|
| Single data stream | SISD | MISD | | |
| Multiple data streams | SIMD | MIMD | SPMD | MPMD |
In computing, MISD (multiple instruction, single data) is a type of parallel computing architecture in which many functional units perform different operations on the same data. Pipeline architectures belong to this type, though a purist might say that the data is different after processing by each stage in the pipeline. Fault-tolerant computers that execute the same instructions redundantly in order to detect and mask errors, in a manner known as task replication, may also be considered to belong to this type. Few instances of this architecture exist, as MIMD and SIMD are often more appropriate for common data-parallel techniques; specifically, they allow better scaling and better use of computational resources. However, one prominent example of MISD in computing is the Space Shuttle flight control computer system.
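As a software illustration of task replication, the minimal sketch below runs the same operation on the same data on several simulated functional units and majority-votes the result. The function `redundant_execute` and the example operation are hypothetical; real systems such as the Shuttle's flight computers vote across independent machines in hardware rather than in a loop like this.

```python
from collections import Counter

def redundant_execute(operation, data, replicas=3):
    """Apply the same operation to the same data on several
    (simulated) functional units, then majority-vote the results
    so that any single faulty unit is detected and masked."""
    results = [operation(data) for _ in range(replicas)]
    value, votes = Counter(results).most_common(1)[0]
    if votes <= replicas // 2:
        raise RuntimeError("No majority result: too many faulty units")
    return value

# One data stream, the same instructions executed three times over.
# All replicas are deterministic here, so they agree; in real hardware
# a disagreement would indicate a faulty unit.
sensor_reading = 42.0
print(redundant_execute(lambda x: x * 0.5 + 1.0, sensor_reading))  # 22.0
```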
Systolic arrays (a special case of wavefront processors), first described by H. T. Kung and Charles E. Leiserson, are an example of MISD architecture. In a typical systolic array, parallel input data flows through a network of hard-wired processor nodes which, in a manner loosely resembling the human brain, combine, process, merge, or sort the input data into a derived result.
Systolic arrays are often hard-wired for a specific operation, such as "multiply and accumulate", to perform massively parallel integration, convolution, correlation, matrix multiplication, or data sorting tasks. A systolic array typically consists of a large monolithic network of primitive computing nodes which can be hard-wired or software-configured for a specific application. The nodes are usually fixed and identical, while the interconnect is programmable. The more general wavefront processors, by contrast, employ sophisticated and individually programmable nodes which may or may not be monolithic, depending on the array size and design parameters. Because the wave-like propagation of data through a systolic array resembles the pulse of the human circulatory system, the name systolic was borrowed from medical terminology.
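To make the dataflow concrete, here is a minimal Python simulation of an output-stationary multiply-and-accumulate array computing a matrix product. The function name `systolic_matmul` and the particular skewed-input arrangement are illustrative assumptions drawn from a common textbook design, not a description of any specific hardware.

```python
def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A x B.

    Every cell (i, j) executes the same hard-wired multiply-and-accumulate
    step each cycle. Rows of A stream in from the left edge and columns of
    B from the top edge, each skewed by one cycle per row/column so that
    matching operands meet at the right cell at the right time.
    """
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]          # results stay fixed in cells
    a_reg = [[0] * m for _ in range(n)]      # A operand held by each cell
    b_reg = [[0] * m for _ in range(n)]      # B operand held by each cell
    for t in range(n + m + k - 2):           # cycles until the last wave exits
        # Synchronous shift: each cell takes its neighbour's previous value;
        # iterating in reverse preserves last cycle's values until read.
        for i in reversed(range(n)):
            for j in reversed(range(m)):
                a_reg[i][j] = a_reg[i][j - 1] if j > 0 else \
                    (A[i][t - i] if 0 <= t - i < k else 0)
                b_reg[i][j] = b_reg[i - 1][j] if i > 0 else \
                    (B[t - j][j] if 0 <= t - j < k else 0)
        # Every cell fires the same MAC on whatever operands just arrived.
        for i in range(n):
            for j in range(m):
                C[i][j] += a_reg[i][j] * b_reg[i][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```

Note that every cell performs the identical operation on every cycle; only the data moves. No operand touches memory between its injection at an edge and its exit from the array, which is the property discussed next.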
A major benefit of systolic arrays is that all operand data and partial results are contained within (passing through) the processor array; there is no need to access external buses, main memory, or internal caches during each operation, as is the case with standard sequential machines. The sequential limits on parallel performance dictated by Amdahl's law also do not apply in the same way, because data dependencies are handled implicitly by the programmable node interconnect.
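For reference, Amdahl's law bounds the speedup of a program whose serial fraction is $s$ when run on $N$ processors:

$$\text{speedup} \le \frac{1}{s + (1 - s)/N}$$

In a systolic array, operands and partial results advance in lock-step through the interconnect itself, so the streaming portion of the computation contributes no serial memory-access phase to $s$.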
Systolic arrays are therefore extremely good at artificial intelligence, image processing, pattern recognition, computer vision, and other tasks that animal brains perform particularly well. Wavefront processors in general can also be very good at machine learning, by implementing self-configuring neural networks in hardware.
While systolic arrays are officially classified as MISD, their classification is somewhat problematic. Because the input is typically a vector of independent values, the systolic array is definitely not SISD. Since these input values are merged and combined into the result(s) and do not maintain their independence as they would in a SIMD vector processing unit, the array cannot be classified as SIMD either. Consequently, the array also cannot be classified as MIMD, since MIMD can be viewed as a mere collection of smaller SISD and SIMD machines.
Finally, because the data swarm is transformed as it passes through the array from node to node, the multiple nodes are not operating on the same data, which makes the MISD classification a misnomer. The other reason a systolic array should not qualify as MISD is the same one that disqualifies it from the SISD category: the input data is typically a vector, not a single data value, although one could argue that any given input vector is a single dataset.
All of the above notwithstanding, systolic arrays are often offered as a classic example of MISD architecture in textbooks on parallel computing and in engineering classes. If the array is viewed from the outside as atomic, it should perhaps be classified as SFMuDMeR: single function, multiple data, merged result(s).