Talk:Instruction level parallelism
Hi:
The first sentence of the last paragraph of instruction level parallelism says "As of 2004, the computer industry has hit a roadblock in getting further performance gains from ILP". I am wondering what the roadblock refers to. Does it refer to software techniques or hardware techniques? From which papers/reports/experiences/perspectives did the author draw this conclusion? I am a student and curious about it.
Thanks very much!
John
- The roadblock that I was referring to was the difference in operating frequencies between the CPU and main memory. CPUs are now running at multiple gigahertz (1 cycle << 1 nanosecond), while the access times of DRAMs are still in the range of ~50 nanoseconds. The result is that any memory reference that misses in all of the on-chip caches will force the CPU to incur a penalty of hundreds of cycles. None of the techniques that exploit ILP can overcome this very large discrepancy. That's why a PC with a 4GHz CPU is only marginally faster than one with a 3GHz CPU. Dyl 23:45, 28 November 2005 (UTC)
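To put rough numbers on that penalty, here is a minimal back-of-the-envelope sketch in C, assuming only the 4 GHz clock and ~50 ns DRAM latency figures mentioned above (illustrative figures, not measurements of any particular machine):

```c
#include <stdio.h>

/* Back-of-the-envelope memory-wall arithmetic.
 * The clock rate and DRAM latency are the example figures from the
 * discussion above, not measured values. */
int main(void)
{
    double clock_hz     = 4.0e9;   /* 4 GHz CPU          */
    double dram_latency = 50e-9;   /* ~50 ns DRAM access */

    double cycle_time   = 1.0 / clock_hz;            /* seconds per cycle     */
    double miss_penalty = dram_latency / cycle_time; /* cycles lost per miss  */

    printf("Cycle time: %.3f ns\n", cycle_time * 1e9);        /* 0.25 ns   */
    printf("One DRAM access costs roughly %.0f cycles\n",
           miss_penalty);                                      /* ~200 cycles */
    return 0;
}
```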
Dyl, please explain why you reverted my edit. The memory wall can limit performance, but it does not limit ILP. Software is ultimately what determines ILP. The industry isn't shifting to TLP because of the memory wall; it's because ordinary code doesn't parallelize well. the1physicist 23:56, 18 May 2006 (UTC)
- Your diagnosis of limited ILP in "normal" software as the reason TLP/MP is being used instead of more ILP-heavy techniques is not correct. The IPC (instructions per cycle) is first limited by the memory wall. That is, the main reason wider/faster machines are not being built is the memory wall. If memory latency dropped dramatically, the industry would start building wider and faster machines again. I'm not saying that ILP in "normal" software is infinitely high; of course it is not. I am saying that memory latency is currently the main culprit in limiting performance, not the ILP in the code. Until the memory issue is solved, there is no reason to try more esoteric ILP-enhancing techniques, as performance is already throttled by another seemingly unsolvable issue. Also, the renewed popularity of TLP is due to its latency-tolerant properties more than anything else. Dyl 06:00, 19 May 2006 (UTC)
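A minimal sketch of why latency, rather than issue width, can dominate: in the pointer-chasing loop below each load depends on the previous one, so a cache-missing traversal serializes on DRAM latency no matter how much ILP hardware is available, whereas the array sum issues independent loads the hardware can overlap. The structure layout and function names are illustrative assumptions, not a calibrated benchmark; a real demonstration would also link the nodes in random order over a working set larger than the on-chip caches.

```c
#include <stddef.h>

/* One node padded to roughly a cache line. */
struct node { struct node *next; long pad[7]; };

/* Latency-bound: each load depends on the previous one, so a miss to
 * DRAM stalls the whole chain; out-of-order hardware cannot hide it. */
long chase(struct node *p, long steps)
{
    long count = 0;
    while (steps-- && p) { p = p->next; count++; }
    return count;
}

/* Memory-parallel: the loads are independent across iterations, so the
 * hardware can have many of them in flight at once and hide most of
 * the latency. */
long sum(const long *a, long n)
{
    long s = 0;
    for (long i = 0; i < n; i++) s += a[i];
    return s;
}
```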
- Some comments: "I'm not saying that ILP in "normal" software is infinitely high, of course it is not." Well then the article needs to say something about this. Next time, instead of wholesale reverting my edit (as is done with vandalism), change what you think is wrong, and keep the improvements. Reverting good faith edits for a minor error tends to piss people off. "the memory latency is currently the main culprit in limiting performance, not the ILP" Nope, the 'effective' ILP can be limited by the memory wall, but ILP is inherently a software concept. Either way, you need to cite a source defending your position. the1physicist 03:53, 21 May 2006 (UTC)
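For what "theoretical ILP is a property of the code" means in practice, a minimal sketch (function names are purely illustrative): the first loop carries a serial dependence through x, so its theoretical ILP is about 1 regardless of how wide the machine is; in the second, the four operations share no data dependences, so a sufficiently wide machine could in principle issue them all in the same cycle.

```c
/* Low theoretical ILP: every multiply needs the result of the previous
 * one, so the operations can only execute one after another. */
double dependent_chain(double x, int n)
{
    for (int i = 0; i < n; i++)
        x = x * 1.000001;   /* serial dependence through x */
    return x;
}

/* High theoretical ILP: the four statements are mutually independent,
 * so an out-of-order or VLIW machine could execute them concurrently. */
void independent_ops(const double *in, double *out)
{
    out[0] = in[0] + 1.0;
    out[1] = in[1] * 2.0;
    out[2] = in[2] - 3.0;
    out[3] = in[3] / 4.0;
}
```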
- I have put in the changes that you have requested. There aren't many academic papers that explicitly state the memory wall is the main performance limiter beyond the few sources I have cited. One reason for that is that the industry gradually found this out by itself as it tried to build the next generations of products. By the time everyone in the industry acknowledged the problem, there was little need for academia to state the obvious. Another reason is that such issues are first found through in-depth performance modeling done inside CPU/computer design teams. Such findings are very proprietary, as they are obviously meant to give a company a competitive advantage over its rivals. Also, the "Random access memory" Wikipedia article makes the same claim and I have never edited that page. You put the burden on me for citing sources for my positions. That's fine. You should put the same burden on yourself. Please back up your opinions before making your edits. I reverted that edit as most of it was inaccurate. The statement "ILP is inherently a software concept" is somewhat meaningless. The only reason people study ILP is to see what performance they can get on hardware, either in existing machines or future machines. Also, ILP is directly related to such hardware issues as how many registers are available. Dyl 14:18, 21 May 2006 (UTC)
- More comments: "You should put the same burden on yourself." Bwahaha, I was waiting for you to catch that. Your changes don't exactly reflect my arguments. The article definitely needs to differentiate between theoretical ILP in software and effective ILP in hardware. I like your example of graphics and scientific computing, but the article should include all types of code and their theoretical ILPs. When possible, ILP should be given numerically, not in vague statements like "very large" or "much more modest". Also, a link to Thread_level_parallelism would be in order. the1physicist 22:45, 21 May 2006 (UTC)
- Yes, I understood your original point about theoretical vs. effective ILP. When I was doing my edits, it was from the point of view of a CPU designer, mainly concerned with effective ILP. Finding numeric values for the ILP in different code bases needs a lot of legwork, perhaps for some enthusiastic grad student. I'm many years past that much enthusiasm. Dyl 04:47, 22 May 2006 (UTC)