Talk:Supercomputer/Archive 1
SFX, computer animation: Super apps?
Are "special effects" and "computer animation" really applications of supercomputers? I kind of doubt it. I would add molecular modelling and climate research to the list. AxelBoldt, Monday, April 22, 2002
- They used to be, IIRC, but these days it seems that CGI is done on either workstations, or, if that's not enough, rackfuls of commodity compute servers. --Robert Merkel
-
- The history of supercomputers and animation is rather limited and somewhat overblown.
To appreciate this statement, a reader has to remember that early machines had so little memory that graphics came late: the application itself needed the memory and secondary storage (graphics competed for compute time and space, and not just linear address space in memory but the two-dimensional scaling of graphics space). Early crude and high-resolution graphics did exist; it was just very expensive, and expense is not something show "business" is known for, especially expensive computers (consider that movies don't use full-sized real buildings at studios, they use facades; their computers were at first facades of blinking lights).
-
- In short, short output movies were made by supercomputer users (many of them classified, as many supercomputers were), but full-length Hollywood-style animation began at spin-off firms from supercomputer firms and users, like MAGI (Mathematical Applications Group, Inc.), Information International, Inc. (better known as III or I^3), Whitney-Demos, etc. Lucasfilm, for instance, used a Cray at Cray Research, but never bought one despite one of the best offers presented, in large part due to maintenance costs.
-
- This raises the question of whether the existing clusters used by animation firms are supercomputers: they sort of are, if you want to view clusters as supercomputers (and not everyone does, which in part is reflected by what the older end users of such machines use).
-
- --enm
Distributed computing
Would be interesting to put a paragraph about distributed computing (such as SETI@home, Folding@home, etc.). --user:Extremist
- Distributed computing has some discussion of that. Scott McNay 08:37, 2004 Feb 15 (UTC)
New king of the hill from NEC?
Hmm, NEC strikes back: NEC Launches World’s Fastest Vector Supercomputer (press release, 20 October 2004). Perhaps it should be listed in this article's table (if the performance figures are for real)? --Wernher 01:33, 21 Oct 2004 (UTC)
- It hasn't been tested yet for Top500, and the full version of Blue Gene will be up and running shortly. NEC just wanted to get headlines before IBM really sets the bar high. -Joseph (Talk) 01:53, 2004 Oct 21 (UTC)
- It appears that's a moot point now, as the new SGI/NASA system is potentially faster—certainly faster than Blue Gene/L right now, and maybe faster than the SX-8. I think we should hold off making any such changes until the Top500 list is released in a week. Once the full Blue Gene system comes online, hopefully the situation will stabilize (for a little while anyhow.) -Joseph (Talk) 11:18, 2004 Oct 27 (UTC)
-
- Hey, cool that the Top500 list is coming so soon. I certainly look forward to seeing it, considering all the interesting stuff happening in supers these days. --Wernher 18:13, 27 Oct 2004 (UTC)
Preserved section
I preserved this edited section by an anonymous individual. I am reverting the main article because this whole section will be changing in just a couple of days anyway, so it seems pointless to make these edits now. Plus, he made a couple of changes that do not make any sense. -Joseph (Talk) 22:10, 2004 Oct 28 (UTC)
- BEGIN
- == The fastest supercomputers today ==
- The speed of a supercomputer is generally measured in flops (floating point operations per second); this measurement ignores communication overheads and assumes that all processors of the machine are provided with data and are working at full speed. It is therefore less than ideal as a metric, but is widely used nevertheless.
- As of October 26, 2004, the fastest supercomputer is NASA/SGI's Columbia (named in honour of the crew that died aboard the Columbia), with a total of 10,240 Intel processors, which reached 42.7 teraflops. The system, running the Linux operating system, is already in use at the customer's site and is fully functional, unlike other recent supercomputer announcements. It was built in just 120 days by SGI and Intel and consists of 20 machines, although only 16 were used to achieve the 42.7-teraflop record.
- Prior to Columbia, the fastest supercomputer was IBM's Blue Gene/L prototype, with 16,250 PowerPC-based processors, which reached 36.01 teraflops, beating the NEC Earth Simulator, which reached 35.86 teraflops.
- END
-
- Also, I wanted to note that the figures for the Columbia are not yet final. They will likely be releasing new figures at Supercomputer 2004, and at that conference we may see new figures for other systems. Also, Top 500 has not tested the Columbia yet—or at least the figures are not public. (They will be in a few days anyhow.) -Joseph (Talk) 23:34, 2004 Oct 28 (UTC)
Separate categorization
Should we separately categorize vector vs. scalar systems? Or at least have a timeline split at that point? -Joseph (Talk) 03:50, 2004 Nov 6 (UTC)
SETI@Home
I don't really think this belongs. It's a distributed computing project, sure, but not a supercomputer. -Joseph (Talk) 18:39, 2004 Nov 16 (UTC)
- It's not a supercomputer in perhaps the original sense, but then, neither are the modern computing clusters that we call supercomputers. Blue Gene is nothing more than an amalgamation of a bunch of off-the-shelf processors and networking equipment, with some specialized technology. SETI@home acts as a supercomputer in a similar manner with a processing throughput in excess of what is supposed to be the world's fastest supercomputer. It is an achievement that should be noted in this article. --Alexwcovington 22:06, 16 Nov 2004 (UTC)
-
- SETI@home doesn't act as a classic supercomputer; it's a distributed machine. Modern computing clusters do act like a supercomputer. BlueGene runs tightly-coupled problems; SETI@home does not. This doesn't mean SETI@home is bad or anything, it just is a different beast. Can you name anyone involved in supercomputing who thinks SETI@home is a supercomputer? I don't think so. Apple marketing has tried hard enough to debase the term; don't do the same regarding SETI@home. Greg
I separated this out into its own section. We may want to put it somewhere else in the page. -Joseph (Talk) 21:39, 2004 Nov 17 (UTC)
-
- Since distributed computing has its own page, shouldn't this section be moved there, with maybe a small ref here along the lines of, "For projects such as SETI@home, please see distributed computing"? --Mary quite contrary (hai?) 15:53, 26 March 2007 (UTC)
- You do realize the comment you just replied to is 2 1/2 years old, right? Raul654 15:59, 26 March 2007 (UTC)
- Wow, so much for posting in a hurry before a meeting. I guess I assumed the discussion to be more current since someone just updated the "Quasi-supercomputing" section. Thoughts? --Mary quite contrary (hai?) 16:41, 26 March 2007 (UTC)
FLOPS
Hi. As someone who knows jack about supercomputers, I'd like to point out that some parts of the article are just impenetrable for the uninitiated. For instance, when it says that the IBM supercomputer is capable of "70 teraflops", that tells me nothing, and the article on Flops doesn't help any in understanding what it means in terms of how fast the computer works. Maybe you could include a reference in the good old "calculations per second" unit, or maybe something else, but right now I just have no idea of how fast and efficient the IBM supercomputer is, no matter how many flops I know it can throw around. Regards, Redux 05:52, 20 Nov 2004 (UTC)
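For what it's worth, here is a rough way to translate a FLOPS figure into "calculations per second" terms. This is only an illustration: the 70-teraflop figure comes from the comment above, and the ~1 GFLOPS "desktop PC" number is just an assumption made for the sake of the comparison, not a measured value.
 # Rough illustration only: a FLOPS rating is floating-point calculations per second.
 supercomputer_flops = 70e12   # 70 teraflops = 70 trillion calculations per second
 desktop_flops = 1e9           # assumed ~1 GFLOPS for a typical 2004-era desktop PC

 print(f"Supercomputer: {supercomputer_flops:,.0f} calculations per second")
 print(f"Roughly {supercomputer_flops / desktop_flops:,.0f} times the assumed desktop")
So "70 teraflops" simply means about 70 trillion floating-point calculations every second.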
Google?
I know the Googleplex must be pretty impressive, but I'm not sure if it's valid to compare it to traditional supercomputers - surely estimating its speed in FLOPS is misleading at best, as the majority of its workload is probably integer (string-heavy parsing, analysis, crawling, network stuff, etc.)?
Supercomputer timeline - FLOPS, ENIAC, and earlier computers
I think that the supercomputer timeline is very interesting. However,
- (1) the measurement is supposed to be in FLOPS, and some of the early computers listed couldn't do floating-point arithmetic. The ENIAC is one - it used fixed point arithmetic. The Colossus only did logical operations (although it is possible to break FP arithmetic down to logical operations).
- (2) Besides that, giving the ENIAC a 50k FLOPS rating is stretching it. It could do an addition in 1/5,000 second, but a multiplication took 1/385 sec (13 times as long). Now, the ENIAC was parallel in a sense, and it had 20 registers, so you could add 10 of the registers into the other 10 registers in 1/5,000 second, for a rate of 50,000/sec, so that is probably where the 50k figure comes from. But that is an unrealistic problem, because if you are doing more than 13 of them, multiplication would be faster than repeated addition (see the short sketch after this list). (Of course, I know that in recent decades FLOPS ratings have been based on a theoretical maximum, not what can actually be done on a real-world problem.) Besides that, the (now) standard LINPACK doesn't use just additions.
- (3) there were several computers after the ENIAC and before the TX-0 that were actually faster than the ENIAC, for real-world problems. And some of these could do floating point arithmetic as well. (Off the top of my head, probably SWAC, NORC, Whirlwind; maybe IAS, EDVAC, ORDVAC, UNIVAC I, etc.) Bubba73 17:07, July 12, 2005 (UTC)
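A minimal sketch of the arithmetic in item (2), taking the quoted ENIAC timings (1/5,000 s per addition, 1/385 s per multiplication, 10 parallel additions) at face value; nothing here is newly sourced.
 # ENIAC timings as quoted above (assumed, not independently verified).
 add_time = 1 / 5000          # seconds per addition
 mul_time = 1 / 385           # seconds per multiplication
 parallel_adds = 10           # 10 accumulators adding into the other 10

 peak_add_rate = parallel_adds / add_time
 print(f"Peak parallel addition rate: {peak_add_rate:,.0f} ops/s")   # the ~50k figure

 # A multiplication costs roughly 13 additions, so repeated addition only
 # pays off for multipliers smaller than about 13.
 print(f"One multiplication is worth about {mul_time / add_time:.0f} additions")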
-
- Regarding item (1) above, how about the hitherto unheard-of computing performance unit FIXPS? (="FIXed Point operations per Second"). :-) On a more serious note: at least, there should be a footnote or something pointing this out; an encyclopedia shouldn't be misleading its readers. As for your item (3), perhaps some research could be undertaken into this? Honor where it's due! --Wernher 04:41, 14 July 2005 (UTC)
-
-
- I've put the addition and multiplication times of most of the early machines into their articles. Some of these had FP hardware and some didn't; that will have to be looked up if we want to make it strictly FP. And do we want to use addition time only (the ENIAC figure is for 10 additions in parallel), or perhaps the average of the add time and mult time? Bubba73 15:53, July 14, 2005 (UTC)
-
-
-
- Update: I got the following times mostly from the 1955 Ballistics Lab Report, so they all should be between ENIAC and TX-0. In order by name, not date or speed, and this doesn't take into account the possible 10x parallelism of ENIAC. Even with that 10x, LARC (1960), SWAC (1950), Whirlwind (1953), MANIAC II (1957), and NORC (1954) beat it. I didn't note which ones actually had floating point arithmetic. Bubba73 03:59, July 15, 2005 (UTC)
-
 Computer     Add time   Mul time   Year
               (microseconds)
 =====================================
 ENIAC          200       2800      1945
 -------------------------------------
 SEAC            48        242      1950 (April)
 SWAC             6        269      1950 (summer?)
 EDVAC          864       2880 *    1951
 ORDVAC          50        750      1951
 UNIVAC I       120       1800      1951
 IAS             31        620      1952
 ILLIAC I        24        600-750  1952
 MANIAC I        80       1000      1952
 RAYDAC          38        240      1953
 WHIRLWIND        8         25.5    1953
 DYSEAC          48       2100      1954 ***
 NORC            15         31      1954
 MANIAC II       17        280      1957
 ORACLE          11        440      TBD **
 -------------------------------------
 TX-0            10        TBD **   1957
 -------------------------------------
 LARC             4          8      1960

 (*  incl. memory access)
 (** to be determined)
I've rearranged the table to be in approximate order by date. There are several problems in determining if any of these should be placed on the supercomputer timeline:
- We can't go back and run LINPACK on these machines
- Should the theoretical 10x parallelism of ENIAC be considered?
- The memory access time is not included in most of these
- Do they do floating point?
- Should we consider addition time only, or a mix of operations (perhaps the average of add and mult)?
The SWAC was 33 times faster than ENIAC on addition, so I think it should be there. On real-world problems, SEAC would probably be faster than ENIAC. Also, Whirlwind should definitely be on the list since it beats ENIAC by more than a factor of 10 on addition and multiplication, even though it was a little slower than SWAC on addition alone.
It seems to me that there is no clear answer. It depends on how the problems above are addressed. But at least SWAC and/or Whirlwind should go between ENIAC and TX-0, and I may have overlooked some others, perhaps not in the table. Bubba73 16:35, July 17, 2005 (UTC)
PS. And this is giving the ENIAC the advantages of counting addition only (even though repeated addition doesn't make much sense), one factor-of-10 parallelism, and counting its operations as FP. Bubba73 19:27, July 17, 2005 (UTC)
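To illustrate how much the ranking depends on the chosen metric, here is a rough sketch; it uses only the add/multiply times from the table above, ignores memory access, floating point, and any parallelism except the optional 10x for ENIAC, and is not a substitute for an actual benchmark.
 # Times in microseconds, copied from the table above.
 machines = {
     "ENIAC":     (200, 2800),
     "SEAC":      (48, 242),
     "SWAC":      (6, 269),
     "WHIRLWIND": (8, 25.5),
 }

 def add_only_rate(add_us):
     return 1e6 / add_us                      # additions per second

 def mixed_rate(add_us, mul_us):
     return 1e6 / ((add_us + mul_us) / 2)     # average of add and multiply times

 for name, (add_us, mul_us) in machines.items():
     print(f"{name:10}  add-only: {add_only_rate(add_us):>9,.0f} ops/s"
           f"   add/mul mix: {mixed_rate(add_us, mul_us):>9,.0f} ops/s")

 # Granting ENIAC its theoretical 10x parallel additions:
 print(f"ENIAC (10x) add-only: {10 * add_only_rate(200):,.0f} ops/s")
On the add-only metric SWAC comes out ahead of Whirlwind, while on the add/multiply mix Whirlwind wins, which is exactly the ambiguity described above.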
Images
I have a big problem with Image:PPTSuperComputersPRINT.jpg, and Image:PPTExponentialGrowthof Computing.jpg. They have no context associated, no real explanation of the meaning of the measurements involved (where do the figures for number of "operations" per second in "all human" brains come from?)
Please either re-format and explain these graphs or remove them. Thanks -Harmil 04:17, 14 July 2005 (UTC)
- I agree with removing them. I like them except for the biological organisms; if those were removed, I'd like to have them reinstated. Bubba73 04:36, August 8, 2005 (UTC)
More FLOPS doubts
Regarding item (1) of the #Supercomputer timeline - FLOPS, ENIAC, and earlier computers thread above: after some quick research, I haven't found any evidence indicating that TX-0 or the SAGE systems supported hardware floating-point calculations. I thus wonder if we should list their performance numbers in (k)OPS rather than (k)FLOPS? --Wernher 03:47, 8 August 2005 (UTC)
- You're probably right. The BRL reports are often a good source of data. Also, I'm wondering if the special-purpose machines ABC, Colossus, and Heath Robinson should be listed. Bubba73 04:33, August 8, 2005 (UTC)
- Semi Automatic Ground Environment was a fixed point machine with a 32-bit word containing a coordinate composed of two 16-bit fixed point numbers. So each instruction normally did two operations. It had no support for floating point in the hardware[1]. -- RTC 21:32, 20 September 2005 (UTC)
-
- I have now changed the TX-0 and SAGE data from FLOPS to OPS, after having searched high and wide for any evidence to suggest that these machines ever had HW FP. Feel free to double-check, of course. :) --Wernher 02:01, 22 September 2005 (UTC)
I don't see how the Z3 can claim 20 FLOPS when its clock rate was only 5-10 Hz - that would have needed it to do 2-4 floating-point operations in every clock cycle! It only had ONE ALU... The table at the external link says "Average calculation speed: Multiplication 3 seconds, division 3 seconds, addition 0.7 seconds". 0.7 seconds per addition is about 1.4 FLOPS, not 20 FLOPS. -- 205.175.225.5 00:12, 29 September 2005 (UTC)
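A quick check of the arithmetic in the comment above; all figures come from that comment, none are newly sourced.
 # Z3 figures as quoted above (assumed, not independently verified).
 add_seconds = 0.7
 clock_hz_low, clock_hz_high = 5, 10
 claimed_flops = 20

 print(f"Additions per second: {1 / add_seconds:.1f}")            # ~1.4
 print(f"Ops per clock needed for 20 FLOPS: "
       f"{claimed_flops / clock_hz_high:.0f} to {claimed_flops / clock_hz_low:.0f}")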
ENIAC and repeated addition
"Repeated addition" makes perfect sense on ENIAC. The "NI" stood for Numerical Integrator and the machine was originally designed as a faster digital version of the mechanical analog Differential Analyzer (ignoring the Multiplier and Divider/Square Rooter modules). A Digital Differential Analyzer (DDA) operates entirely by repeated addition in each "integrator" (Accumulator on ENIAC). In its mode as a DDA (exactly what you need for ballistics solutions) it could handle problems requiring up to 20 "integrators" and in such a problem would naturally operate at its peak speed. Aberdeen however found it difficult to program this way, which resulted in its conversion in 1948 to ROM based stored program and a severe drop in peak performance. -- 205.175.225.5 20:05, 13 October 2005 (UTC)
- I still think it is misleading to use the addition time for ENIAC, because modern measures of FLOPS are based on a wider range of operations. Bubba73 (talk), 02:49, 25 January 2006 (UTC)
About supercomputer locations
Regarding my reversion of the recently added footnotes to the timeline table: AFAIK, the locations listed are the installation sites of the computers, not the locations of the development/manufacturing companies. --Wernher 22:49, 17 November 2005 (UTC)
ILLIAC IV performance figures
One of the recently reverted footnotes (see above) stated: I believe the ILLIAC IV speed accepted was 15 not 150 GFLOPS [sic]. Double check this before using either number. In the ILLIAC IV article, a figure of 150 MFLOPS is indicated as the peak performance value, which is also the value cited in the timeline. So, it seems everything's OK, then, doesn't it? --Wernher 23:03, 17 November 2005 (UTC)
I did a little more research and now accept the 150 MFLOPS as a reasonable value. I have found a couple of outside sources that state >100 MFLOPS. As a side note to everyone else, the ILLIAC IV has an incredible theoretical performance of 500 MFLOPS, a peak performance of >100 MFLOPS, but a real-world performance of 15-40 MFLOPS. -- 66.41.181.0 14:30, 1 December 2005 (UTC)
Hybrid vs. Super
How do computer speeds compare between
- Supercomputers
- Hybrid computers
- Engineering workstations
Editors in both the Supercomputer and Hybrid computer articles claim their type of computer is the fastest.
I had always thought the advantage of a supercomputer was in its specialized tasks: algorithms normally found in programs can be transferred to hardware to run thousands of times faster than as software. While a supercomputer is incapable of general-purpose computing tasks, supercomputers, being a class of special-purpose computer, can perform tasks impossible for any other kind of computer, like, say, modeling billions of years in the life of millions of star clusters to explore competing theories of astronomy. User:AlMac|(talk) 09:32, 18 January 2006 (UTC)
- There is no one meaning to "fastest", so both are probably right. Supercomputers are usually general purpose machines, in that you can program them to do just about any kind of problem. Special purpose machines usually only solve one problem, because it's wired into their hardware. So Grape would be special purpose, as would most hybrid computers. FWIW, I've never seen anyone equate "supercomputer" with what you describe. Greg 21:09, 1 May 2006 (UTC)
Typical PC performance through history?
I think it would be cool and interesting to add some (highlighted) "typical PC" entries in the table of supercomputer performance. Like, how did/does a Commodore 64, 8086, 80486, Pentium, Pentium III, Pentium IV, Athlon X2, Cell processor, 4-core Mac... compare to recent and past number crunchers? It would help to get a feel for the real performance and evolution of computing power.
Is this meaningful/achievable at all, and any takers? :)
JH-man 10:20, 3 March 2006 (UTC)
- It would be useful, but you have to include the things which really indicate what's going on, like memory bandwidth. Most bogus comparisons involve peak FLOPS without any indication that vector supercomputers can sustain a higher percentage of peak due to their memory subsystems. Greg 21:11, 1 May 2006 (UTC)
Interestingly, the difference between a supercomputer and a PC from prehistoric times through about the mid-1990s was clock speed. When the Cray X-MP came out in the early 1980s, it had a clock that was 200x faster than the widely used VAX (what most people used to do scientific computing). By the mid-1990s there was no difference in clock speed, so supercomputers went massively parallel (clusters). RISC concepts combined with better and better VLSI eventually drove commodity processors to where they are today: several orders of magnitude faster than the supercomputers of 20 years earlier. In very general terms, today's supercomputers are n times faster than today's best PCs, where n is the number of processors in the cluster. Cec 15:06, 5 September 2006 (UTC)
And today, with SMP and NUMA architectures becoming commonplace on the typical user's PC, the line between supercomputers and the normal PC is even hazier. Everyday PCs can 'gang up' on embarrassingly parallel tasks (such as SETI@home and other grid.org projects) and become part of a supercomputer just as if they were built into a supercomputing cluster... the only real difference, besides geographical distribution of nodes, is bandwidth between nodes, and even that line grows hazier each year, with FiOS, for example. Jaqie Fox 05:58, 30 June 2007 (UTC)
Table heading: Period-->Year
I changed the heading of the "Period" column to "Year", since all but one* of the computers were listed with a single year (i.e., the initial year of operation/installation), implicitly indicating that each particular computer held its place as 'King of the Hill' until the computer listed next appeared on the scene. (* I changed the ENIAC entry accordingly.) --Wernher 07:02, 30 March 2006 (UTC)
Computer lifetime
On the National Center for Atmospheric Research's computer division webpage, I believe I saw a statement that the lifetime of a computer is 3 to 5 years. Is this a physical lifetime? PCs, after all, seem to last longer (unless replaced). And if not worn out, what happens to old supercomputers? Simesa 23:56, 17 May 2006 (UTC)
- Many of them - especially groundbreaking ones - become collector's items (I'm serious) Raul654 16:53, 27 June 2006 (UTC)
- One of the problems is that these computers use so much power that after a period of time it's cheaper to buy a new computer than it is to keep running the old ones. That said, I doubt the 3-to-5-years figure is widespread; companies invest a lot in these machines, so there is a degree of inertia in keeping "old" computers going long after it would be cost-effective to replace them. 194.202.174.11 11:38, 12 July 2006 (UTC)
And then there is the tin whisker problem, which is much more severe when lead or silver solder was not used, causing computer equipment (and all electronics) to have a set life: the whiskers bridge between soldered connectors and short them out on circuit boards, especially on the interconnects to ICs. This issue has already (allegedly; no repair or recovery crews have been sent up to prove this, but it is the accepted working theory) caused several to stop functioning entirely and many more to run at reduced functionality/capacity. Jaqie Fox 06:05, 30 June 2007 (UTC)
Fastest computer
The fastest computer is not the Blue Gene/L. It is a petaflop Japanese computer. —The preceding unsigned comment was added by 24.0.194.179 (talk • contribs) 22:19, 26 June 2006 (UTC).
- If you'd actually read the link you provided it says:
The new monster box (well, room) was announced yesterday...
- So until they get it up and running, the BlueGene/L is still the fastest. Please stop vandalising this page with your imaginary computer. Imroy 05:16, 27 June 2006 (UTC)
It's not imaginary; there is a picture of it. It "is." http://www.digitalworldtokyo.com/2006/06/japanese_supercomputer_hits_th.php It has been installed: http://ipcommunications.tmcnet.com/news/2006/06/21/173130.htm It was being talked about as something in the future in 2003. http://www.primidi.com/2003/10/01.html http://www.pinktentacle.com/2006/06/petaflops-level-supercomputer-to-be-unveiled/ http://www.primidi.com/2004/09/01.html http://en.wikipedia.org/wiki/Image:PPTSuperComputersPRINT.jpg http://search-asp.fresheye.com/?ord=s&id=10688&kw=petaflop&Search_Execute.x=45&Search_Execute.y=2 Only the following 10 petaflops one is just planned. http://hardware.slashdot.org/article.pl?sid=05/07/26/0021238 Here's another one: http://www.hpcwire.com/hpc/694425.html —The preceding unsigned comment was added by 24.0.194.179 (talk • contribs) 06:09, 27 June 2006 (UTC).
- Ok, first - don't remove other people's comments on talk pages. It is considered vandalism. Secondly, please learn to use the preview button. You're making lots of little edits that could be avoided if you'd take the time to think about a response and check it with preview before saving.
- Now as to your claims - the HPCwire article is about a Cray machine, not the MDGrape-3. It even says "ORNL is then expected to install a next-generation Cray supercomputer in late 2008". So not a current machine. And none of the links about the MDGrape-3 say it's actually been tested. Until they test it, and get a good sustained speed, then all they can claim is the theoretical peak. Even then, several of the articles you linked to say "direct comparisons are not possible, because the BlueGene is a general-purpose supercomputer and Riken's machine is a special-purpose computer". So even if/when it beats BlueGene/L, the numbers should note this and still list BlueGene/L as the fastest general purpose computer.
- So, once again, please stop vandalising this article with mentions of installed-but-not-tested machines and links to empty news articles (i.e digitalworldtokyo). At least wait until it's been tested, and reported in a proper publication (not a technology blog). If you continue to act in an uncooperative and anti-social fashion, you will be banned. Imroy 07:08, 27 June 2006 (UTC)
The anon is flatly wrong. Notice in the linked article: "is the first super-(duper)-computer to be capable of calculating at the petaflop level". In other words, its peak theoretical performance is more than one petaflop. There's a BIG difference between saying it can theoretically go above a petaflop and actually doing it - theoretical performance is notoriously optimistic. Raul654 16:52, 27 June 2006 (UTC)
Sorry
I messed up the table. Can someone fix it, please? Thanks so much and my apologies. [[User:vaceituno|vaceituno]]
Timeline of supercomputers...
The Timeline of supercomputers doesn't mention the following supercomputer made in India...
"PARAM Padma computer was developed by India's Center for Development of Advanced Computer (C-DAC)"
For ref... http://www.cdac.in/html/parampma.asp
PARAM Padma is C-DAC's next generation high performance scalable computing cluster, currently with a peak computing power of One Teraflop.
The table, I think, must mention this supercomputer and its evolution.
- Please look at this link: http://www.cdac.in/html/ctsf/padma/padma500.asp -- Padma is very impressive, but it ranks 171 on the Top 500 list. The table in our article only lists the computer that is at the top of the Top 500 list at the specified time, at least for entries in 1993 or later. The computers at the top of the list are currently several hundred times as powerful as Padma. -Arch dude 23:10, 26 October 2006 (UTC)
OS usage graph.
The graph has been part of the article for quite some time. I modified the description, because UNIX(R) and Linux(R) are in fact legally and historically quite distinct. Even if you believe that the casual user has no interest in the Linux/UNIX distinction, you should not remove the graph. Instead, you should adjust the description. The graph is important more because it shows that Linux/UNIX overwhelms alternative OSs than because of the distinction between Linux and UNIX. I personally feel that the free-versus-proprietary distinction is extremely interesting, but this is minor by comparison to the UNIX/Linux-versus-alternatives distinction. If we leave it, we get both trends on a single graph. -Arch dude 02:06, 9 December 2006 (UTC)