Massive parallel processing

Massive parallel processing (MPP) is a term used in computer architecture to refer to a computer system with many independent arithmetic units or entire microprocessors that run in parallel. The term "massive" connotes hundreds if not thousands of such units. Early examples of such systems include the Distributed Array Processor, the Goodyear MPP, the Connection Machine, and the Ultracomputer.

Today's most powerful supercomputers are all MPP systems, such as the Earth Simulator, Blue Gene, ASCI White, ASCI Red, ASCI Purple, and ASCI Thor's Hammer.

In this class of computing, all of the processing elements are connected together to form one very large computer. This is in contrast to distributed computing, where massive numbers of separate computers are used to solve a single problem.
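As a rough illustration of this model (a minimal sketch, not drawn from any particular MPP machine), the following C program uses MPI, a message-passing interface commonly used to program such systems, to split one computation across many processing elements; the array size and per-element work are arbitrary assumptions chosen only for illustration.

/*
 * Minimal sketch: many processing elements (MPI ranks) cooperate on a
 * single problem.  Each rank sums its own slice of an index range, and
 * the partial sums are combined on rank 0.
 */
#include <mpi.h>
#include <stdio.h>

#define N 1000000L  /* total number of work items (illustrative size) */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this processing element's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processing elements */

    /* Each rank handles a contiguous slice of the index range. */
    long begin = N * rank / size;
    long end   = N * (rank + 1) / size;

    double local = 0.0;
    for (long i = begin; i < end; i++)
        local += 1.0 / (double)(i + 1);    /* arbitrary per-item work */

    /* Combine the partial results into one answer on rank 0. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %f\n", size, total);

    MPI_Finalize();
    return 0;
}

The same source runs on every processing element; only the rank differs, which is what determines each element's share of the work.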

The earliest massively parallel processing systems all used serial computers as individual processing elements, in order to achieve the maximum number of independent units for a given size and cost.

Through advances in Moore's Law, single-chip implementations of massively parallel processor arrays are becoming cost-effective, and are finding particular application in high-performance embedded applications such as video compression. Examples include chips from Ambric, picoChip, and Tilera.
