Computer performance by orders of magnitude
This list compares various amounts of computing power, in instructions per second (IPS) or floating-point operations per second (FLOPS), organized by order of magnitude.
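To make the grouping concrete, here is a minimal Python sketch (not from the original article; the function name, threshold table, and output format are illustrative choices) that maps a FLOPS figure onto the prefix scales used for the section headings below.

```python
# Illustrative sketch: classify a FLOPS figure by the order-of-magnitude
# prefixes used for the section headings in this list.
PREFIXES = [
    (24, "yotta"), (21, "zetta"), (18, "exa"), (15, "peta"),
    (12, "tera"), (9, "giga"), (6, "mega"), (3, "kilo"), (2, "hecto"),
]

def scale(flops: float) -> str:
    """Return the prefix scale and normalized coefficient for a FLOPS value."""
    for exp, name in PREFIXES:
        if flops >= 10 ** exp:
            return f"{name}-scale: {flops / 10 ** exp:.4g} x 10^{exp} FLOPS"
    return "below hecto-scale"

# Example: Tianhe-2's June 2013 Linpack result from the petascale section.
print(scale(33.86e15))  # peta-scale: 33.86 x 10^15 FLOPS
```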
Hecto-scale computing (10²)
- 2.2×10² Upper end of serialized human throughput. This is roughly expressed by the lower limit of accurate event placement on small scales of time (the swing of a conductor's arm, the reaction time to lights on a drag strip, etc.)[1]
- 2×10² IBM 602, a 1946 computer.
Kilo-scale computing (10³)
- 92×10³ Intel 4004, the first commercially available full-function CPU on a chip, 1971
- 500×10³ Colossus computer, vacuum-tube supercomputer, 1943
Mega-scale computing (10⁶)
- 1×10⁶ Motorola 68000, commercial computing, 1979
- 1.2×10⁶ IBM 7030 "Stretch", transistorized supercomputer, 1961
Giga-scale computing (10⁹)
- 1×10⁹ ILLIAC IV, 1972 supercomputer that performed the first computational fluid dynamics problems
- 1.354×10⁹ Intel Pentium III, commercial computing, 1999
- 147.6×10⁹ Intel Core i7-980X Extreme Edition, commercial computing, 2010[2]
Tera-scale computing (10¹²)
- 1.34×10¹² Intel ASCI Red, 1997 supercomputer
- 1.344×10¹² GeForce GTX 480 from NVIDIA, at its peak performance
- 4.64×10¹² Radeon HD 5970 from ATI, at its peak performance
- 5.152×10¹² S2050/S2070 1U GPU Computing System from NVIDIA
- 80×10¹² IBM Watson[3]
Petascale computing (10¹⁵)
Main article: Petascale computing
- 1.026×10¹⁵ IBM Roadrunner, 2009 supercomputer
- 8.1×10¹⁵ Folding@home distributed computing system, the fastest computing system as of 2012
- 17.17×10¹⁵ IBM Sequoia's Linpack performance, June 2013[4]
- 33.86×10¹⁵ Tianhe-2's Linpack performance, June 2013[4]
- 36.8×10¹⁵ Estimated computational power required to simulate a human brain in real time.[5]
Exascale computing (10¹⁸)
Main article: Exascale computing
- 1×10¹⁸ It is estimated that the need for exascale computing will become pressing around 2018[6]
- 1×10¹⁸ The Bitcoin network's hash rate is expected to reach 1 exahash per second in 2016[7]
Zetta-scale computing (10²¹)
- 1×10²¹ Accurate global weather estimation on the scale of approximately 2 weeks.[8] Assuming Moore's law continues to hold, such systems may be feasible around 2030.
A zettascale computer system could generate more single-precision floating-point data in one second than was stored by any digital means on Earth in the first quarter of 2011.
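As a rough back-of-the-envelope check of that comparison (assuming the standard 4 bytes per single-precision value, a detail the article does not state), a 10²¹ FLOPS system emitting one single-precision result per operation would produce:

$$
10^{21}\ \tfrac{\text{values}}{\text{s}} \times 4\ \tfrac{\text{bytes}}{\text{value}} = 4\times10^{21}\ \tfrac{\text{bytes}}{\text{s}} = 4\ \text{zettabytes per second}
$$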
Yotta-scale computing (10²⁴)
- 257.6×10²⁴ Estimated computational power required to simulate 7 billion brains in real time.
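This figure follows from scaling the single-brain estimate in the petascale section above by a world population of 7 billion:

$$
7\times10^{9}\ \text{brains} \times 36.8\times10^{15}\ \tfrac{\text{FLOPS}}{\text{brain}} = 257.6\times10^{24}\ \text{FLOPS}
$$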
See also
- Futures studies – study of possible, probable, and preferable futures, including making projections of future technological advances
- History of computing hardware (1960s–present)
- List of emerging technologies – new fields of technology, typically on the cutting edge. Examples include genetics, robotics, and nanotechnology (GNR).
- Artificial intelligence – computer mental abilities, especially those that previously belonged only to humans, such as speech recognition, natural language generation, etc.
- History of artificial intelligence (AI)
- Strong AI – hypothetical AI as smart as a human. Such an entity would likely be capable of recursive self-improvement, that is, of improving its own design, which could lead to the rapid development of a superintelligence.
- Quantum computing
- Moore's law – observation (not actually a law) that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper.[9]
- Supercomputer
- Superintelligence
- Timeline of computing
- Technological singularity – hypothetical point in the future when computer capacity rivals that of a human brain, enabling the development of strong AI — artificial intelligence at least as smart as a human.
- The Singularity is Near – book by Raymond Kurzweil dealing with the progression and projections of computing capabilities, including beyond human levels of performance.
- TOP500 – list of the 500 most powerful (non-distributed) computer systems in the world
References
- ↑ http://www.100fps.com/how_many_frames_can_humans_see.htm
- ↑ Overclock3D - Sandra CPU
- ↑ Tony Pearson, IBM Watson - How to build your own "Watson Jr." in your basement, Inside System Storage
- ↑ http://top500.org/list/2013/06/
- ↑ http://hplusmagazine.com/2009/04/07/brain-chip/
- ↑
- ↑ Bitcoin hash rate chart
- ↑ DeBenedictis, Erik P. (2005). "Reversible logic for supercomputing". Proceedings of the 2nd conference on Computing frontiers. pp. 391–402. ISBN 1-59593-019-1.
- ↑ Moore, Gordon E. (1965). "Cramming more components onto integrated circuits" (PDF). Electronics Magazine. p. 4. Retrieved 2006-11-11.