Massive parallelism

From Wikipedia, the free encyclopedia

Massive parallelism (MP) is a term used in computer architecture, reconfigurable computing, application-specific integrated circuit (ASIC) and field-programmable gate array (FPGA) design. It signifies the presence of many independent arithmetic units or entire microprocessors that run in parallel. The term massive connotes hundreds, if not thousands, of such units. Early examples of such systems are the Distributed Array Processor, the Goodyear MPP, and the Connection Machine.

Today's most powerful supercomputers are all MP systems, such as Earth Simulator, Blue Gene, ASCI White, ASCI Red, ASCI Purple, and ASCI Thor's Hammer.

In this class of computing, all of the processing elements are interconnected to act as one very large computer. This is in contrast to distributed computing, where massive numbers of separate computers are used to solve a single problem.

Through advances following Moore's law, system-on-a-chip (SoC) implementations of massively parallel architectures are becoming cost-effective, and are finding particular application in high-definition video processing.
