Von Neumann syndrome
The von Neumann syndrome is the cause of the supercomputing crisis and explains the reconfigurable computing paradox. The term was coined by Prof. C. V. Ramamoorthy after listening to a keynote by Reiner Hartenstein. For most applications, massively parallel computing systems with thousands or even tens of thousands of processors achieve only disappointing performance, and programmer productivity usually drops dramatically as the number of processors grows (the Law of "More"). The problem is not the amount of available processing resources but the overhead-prone, memory-cycle-hungry inefficiency of moving data around and the other communication requirements. The root cause of the von Neumann syndrome is the instruction-stream-driven computing paradigm. Migrating an application to Reconfigurable Computing on FPGAs or on coarse-grained reconfigurable platforms means a shift to the data-stream-driven Anti machine paradigm, which uses data counters instead of program counters, so that no instruction fetch takes place at execution time. Instead of moving data around, the locality of execution is optimized by placement and routing.
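The contrast between the two paradigms can be illustrated with a small sketch in C. The first function below is the familiar instruction-stream-driven formulation, in which a program counter steps through instructions and every iteration moves operands through the memory hierarchy; the second models the data-stream-driven view, in which a fixed datapath is configured once and only data counters advance over the input and output streams. The names used here (saxpy_datapath, saxpy_fire, saxpy_streamed) are purely illustrative assumptions and do not correspond to any real FPGA tool flow or API.

```c
/* Illustrative sketch only: contrasts the instruction-stream-driven
 * (von Neumann) view with a data-stream-driven (Anti machine) view.
 * All names are hypothetical, not a real reconfigurable-computing API. */
#include <stddef.h>

/* von Neumann style: a program counter steps through instructions;
 * every iteration re-fetches instructions and moves data through the
 * memory hierarchy (load x, load y, multiply-add, store y). */
void saxpy_von_neumann(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; i++) {
        y[i] = a * x[i] + y[i];
    }
}

/* Data-stream-driven style: the multiply-add datapath is "configured"
 * once (conceptually placed and routed); at run time there is no
 * instruction fetch, only data counters advancing over the streams. */
typedef struct {
    float a;   /* constant folded into the configured datapath */
} saxpy_datapath;

static inline float saxpy_fire(const saxpy_datapath *dp, float x, float y) {
    return dp->a * x + y;   /* one "firing" of the fixed datapath */
}

void saxpy_streamed(size_t n, saxpy_datapath dp, const float *x, float *y) {
    /* this loop only models auto-incrementing data counters (address
     * generators), not a program counter executing fetched instructions */
    for (size_t i = 0; i < n; i++) {
        y[i] = saxpy_fire(&dp, x[i], y[i]);
    }
}
```

On a real reconfigurable platform the second formulation would be compiled to a placed-and-routed datapath fed by address generators, so the run-time loop disappears entirely; the C loop here merely models the advancing data counters.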
One of the future battlefields affected by the von Neumann syndrome is the programming of many-core microprocessors. A many-core microprocessor combines multiple independent processor cores (between 4 and more than 30 cores, as pre-announced by 2007) on a single integrated circuit chip. In massive parallelism, programmer productivity declines rapidly with the number of CPU cores involved (the Law of "More": by the time the programming is ready, the hardware is obsolete). There are severe doubts whether thread-level parallelism (TLP, often known as chip-level multiprocessing) will solve the programmer productivity problem of massive parallelism. Only a few high performance computing (HPC) specialists or supercomputer programmers are qualified, and only for a narrow application domain. Another solution discussed in the HPC and supercomputer community is Reconfigurable Computing, which promises better ways to cope with the memory wall, at the price of requiring a paradigm shift toward a dual-paradigm approach (von Neumann machine plus Anti machine). The conclusion: programming many-core microprocessors and FPGAs is also a problem of educational deficits, challenging the upgrade of obsolete CS-related curricula.
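The memory-wall argument against relying on thread-level parallelism alone can be made concrete with a minimal sketch (assuming a C compiler with OpenMP support, e.g. gcc -fopenmp). The triad-style kernel below performs only two floating-point operations per twelve bytes of memory traffic, so once the shared memory interface saturates, additional cores yield little further speedup; the array size and the thread sweep are arbitrary illustrative choices, not measurements from any cited system.

```c
/* Minimal memory-wall sketch: a bandwidth-bound triad kernel timed with
 * increasing thread counts. Illustrative only; assumes OpenMP support. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (1L << 25)   /* ~32M elements: far larger than on-chip caches */

int main(void) {
    float *a = malloc(N * sizeof *a);
    float *b = malloc(N * sizeof *b);
    float *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (long i = 0; i < N; i++) { b[i] = 1.0f; c[i] = 2.0f; }

    for (int threads = 1; threads <= omp_get_max_threads(); threads *= 2) {
        double t0 = omp_get_wtime();
        #pragma omp parallel for num_threads(threads)
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0f * c[i];   /* 2 flops per 12 bytes of traffic */
        double t1 = omp_get_wtime();
        printf("%2d threads: %.3f s (%.1f GB/s effective)\n",
               threads, t1 - t0,
               3.0 * N * sizeof(float) / (t1 - t0) / 1e9);
    }
    free(a); free(b); free(c);
    return 0;
}
```

On most commodity multicores the reported effective bandwidth flattens out well before all cores are used, which is the behaviour the memory-wall discussion above refers to.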
References
- Satnam Singh: Reconfigurable Computing Systems as Platforms for Heterogeneous Many-Core Architectures; Proc. DATE-Conference, April 2007, Nice, France
- Walid Najjar: Reconfigurable Supercomputing - reality or pipedreams; Proc. DATE-Conference, April 2007, Nice, France
- Thomas Sterling, Peter Kogge, Ken Kennedy, Steve Scott, Don Becker, William Gropp (panelists): Multi-Core for HPC: Breakthrough or Breakdown? Supercomputing Conference (SC06), November 2006, Tampa, Florida, USA
- Tarek El-Ghazawi, Dave Bennett, Dan Poznanovic, Allan Cantle, Keith Underwood, Rob Pennington, Duncan Buell, Alan George, Volodymyr Kindratenko (panelists): Is High-Performance, Reconfigurable Computing the Next Supercomputing Paradigm? Supercomputing Conference (SC06), November 2006, Tampa, Florida, USA
External links
- The Law of "More"
- Reconfigurable Supercomputing: Hurdles and Chances
- A discussion with Andre LaMothe on multi-core programming - electricalfun.com
- A Berkeley View on the Parallel Computing Landscape: argues for the desperate need to innovate around "manycore".
- Multi-core Computing course by Rice University.
- Is High-Performance, Reconfigurable Computing the Next Supercomputing Paradigm?