Talk:Dual-channel architecture
From Wikipedia, the free encyclopedia
[edit] Headline text
I started the article, as it was requested. I'm not extremely familiar with the intricacies of the architecture, so more detail would be good. DoomBringer 00:44, 12 Jun 2005 (UTC)
- Hello. Updated to the best of my ability; added the graphic. Will add a fuller explanation of Intel vs. AMD later. --Bobcat 20:54, 11 July 2005 (UTC)
- Good work. The graphic could use some work, maybe to show dual vs single: basically, a wider pipe to memory in the dual channel scenario. I like it though. DoomBringer 03:10, 12 July 2005 (UTC)
- No offense, but I think the graphic is misleading. It depicts the peripherals (AGP, IDE, and USB) as being as fast as the CPU, which is very wrong. In fact, they are even slower than the memory controller. It would be more correct to represent this as an inverted pyramid. I question the applicability of the term "bottleneck", actually, as a bottleneck is a slow point between two fast points. -- Bilbo1507 02:09, 20 January 2007 (UTC)
[edit] RAID 0 for memory?
Would it be wrong to think of dual-channel memory as an analogous setup to two hard drives configured as RAID 0? You double the speed by splitting the bandwidth cost across two different media?
- Seems so, although I am not sure. -Yyy 12:13, 14 December 2005 (UTC)
- Wouldn't dual channel architecture effectively only have any benefit if the two channels are used separately? Meaning that a single application could never profit from more than 1 channel? Or that if two applications intensively use memory that is allocated on one bank, they effectively do not have any profit from dual channel architecture? Dabljuh 19:06, 8 February 2006 (UTC)
- All applications benefit from dual-channel. Imagine heavy traffic on two highways, one with 64 lanes and one with 128 lanes (dual-channel): the traffic on the highway with 128 lanes has fewer problems getting through than on the smaller highway. --Denniss 19:48, 8 February 2006 (UTC)
- What's a highway? To use the Raid0 analogy: In RAID0, the data is written to the two disks in parallel, so disk one contains blocks 1,3,5 while disk2 contains 0,2,4 for example. That way, Raid0 can double the throughput because no file above a certain size can be located on one disc alone. Is dual channel working like this, with regard to the fragmentation of data? Dabljuh 08:28, 9 February 2006 (UTC)
- The highway analogy is not complete, because the toll booth (memory controller) would be only 32 lanes, so all those 128 lanes of cars would still need to merge into only 32 lanes. Imagine the backup. Remember the CPU can only accept 32 bits of data at once. Imagine 64 car lanes going 50mph before the booth and 32 car lanes going 200mph after the booth, versus 128 lanes going 50mph before the booth and 32 lanes going 400mph after the booth (dual channel). And yes, the data is striped as in RAID 0. --NYC 1:06p, 20 Dec 2006 (EST)
- As I understand it, the interleaving is done on a much smaller scale. For example, the first 64 bits are stored on one chip, the next 64 bits on the other chip, the third on the first chip, etc. Memory is usually transferred in bigger bursts (are they page-sized bursts?) to the CPU's cache, so one transfer of memory utilizes both chips. This has nothing to do with how many applications are running (or highways :P). So ya, it's kind of like RAID 0 with hard disks, except the stripe size is 64 bits instead of several KB. Hypertransport, on the other hand, allows each CPU to handle 1/n of the memory independently. For this, the memory is divided up into n large chunks. (n = # of CPUs) For this, how memory is allocated to applications does have a big impact. Hypertransport with dual channels (2 channels per CPU) has the potential to be four times faster than a single-CPU system with the single-channel memory architecture. By the way, do not copy this to the article without checking it, because I'm not sure that the details are correct. In fact there's probably at least one mistake. -- Bilbo1507 02:02, 20 January 2007 (UTC)
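The word-level striping described in the comments above can be sketched as a toy model. This is only an illustration of the RAID 0 analogy, not documentation of any real memory controller; the 8-byte (64-bit) stripe size is the figure from the discussion:

```python
# Toy model of dual-channel interleaving as "RAID 0 with a 64-bit stripe":
# consecutive 8-byte words alternate between channel A and channel B.
# Illustrative only -- not a description of a specific memory controller.

STRIPE_BYTES = 8  # one 64-bit memory word per "stripe"

def channel_for_address(addr):
    """Channel ('A' or 'B') holding the byte at physical address addr."""
    return "A" if (addr // STRIPE_BYTES) % 2 == 0 else "B"

def channels_touched(start, length):
    """Set of channels a burst of `length` bytes starting at `start` uses."""
    last = start + length - 1
    words = range(start // STRIPE_BYTES, last // STRIPE_BYTES + 1)
    return {"A" if w % 2 == 0 else "B" for w in words}

# In this model a 64-byte cache-line fill touches both channels, so even a
# single application's sequential reads would use the combined bandwidth.
```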
[edit] Practical data
Some real life data (benchmarks etc.) would be interesting to see. -- 213.253.102.145 16:57, 12 April 2006 (UTC)
- Oh we're not supposed to just make stuff up and use the power of positive thinking to make it so?? :P Ya, someone should look into this. -- Bilbo1507 02:28, 20 January 2007 (UTC)
That Intel whitepaper is hogwash. Comparing 1X256MB single-channel with 2X256MB dual-channel is dumb: of course the system with more memory will perform better, because there will be fewer page faults. The comparison should have been between a 1X512MB system and a 2X256MB system. Maybe Tom's Hardware or SiSoft have benchmark results.
[edit] Matching pairs of memory
As per the entry, "Each memory module in each slot should be identical to the one in its matching slot." Why is that? What if they aren't identical? --mriker 03:49, 11 May 2006 (UTC)
- Internal structure, speed rating and capacity should match; there is no need to have identical pairs, although there are usually fewer problems with two identical sticks. --Denniss 16:11, 11 May 2006 (UTC)
- You should identify that the reason they need to match is because they will be run in sync, and most BIOSes will run them both at the speed of DIMM 0, rather than the fastest compatible speed --222.155.100.80 23:36, 23 June 2006 (UTC)
[edit] Criticism
I think it's worthwhile to mention that technologies like XDR and FB-DIMM were created with the idea that the high pin count of DDR was a bad thing; those technologies instead seek to have wide internal buses which serialize data onto thin external buses by having more on-chip circuitry. Possibly there should be mention of the 480 pins that dual channel DDR-II requires? Unfortunately my wordcraft skills aren't particularly high today. --222.155.100.80 23:36, 23 June 2006 (UTC)
[edit] What's new?
That Kingston whitepaper is very overhyped. The idea of using more than one bank of memory has been around for at least a decade. Memory has been too slow for processors for even longer than that. My Indigo2, which was built in 1996, had 4 memory banks (up to 3 SIMMs per bank). My HP J210 also had 4 banks, and would do 16-way interleaving if it had 16 identical SIMMs. Dual Channel, as they now seem to call it, is not something that was invented because DDR was too slow. Remember how annoying it was when your Pentium required SIMMs in pairs?
- My 486 required 30-pin SIMMs in quads. :) -- Bilbo1507 02:11, 20 January 2007 (UTC)
The claim that this helps "any CPU with a bus speed that is greater than the memory speed" is inaccurate. Even a CPU bus clocked at the same speed as the RAM would end up waiting, because SDRAM takes multiple cycles to read (because of RAS and CAS latency). --Aij 05:59, 24 October 2006 (UTC)
- I don't know if there is RAS latency in computer memory. Memory speed is given according to RAS. That is, a 10ns memory has a RAS of 10ns, because RAS is the clock strobe. If the CPU and RAM ran at the same speed, then waiting would only occur on CAS. There would be no waiting for RAS in this case.
- SIMMs in pairs has nothing to do with interleaving. It has to do with data width. For a 32-bit CPU, data is accessed from memory 32 bits at a time. But since SIMMs were only 8 bits wide, you always needed to add SIMMs in 4s. The 386sx was an exception: it was a 32-bit CPU with 16-bit data lines. Since it needed 16 bits, SIMMs had to be added in 2s. This was the days of 32-pin SIMMs. SIMMs of 72 pins and greater offer at least 32-bit-wide data, so today we no longer need to add SIMMs in pairs.
- Dual Channel could be implemented as interleaved memory, but it probably isn't. I've never seen any non-consumer paper on this, so I can't say how it works. But one way dual channel could work is akin to the 386sx, where more data is read at once than can be transmitted (16 bytes is read in 8 ns, and transmitted 4 bytes at a time at 2 ns cycles, a bus speed of 500 MHz). Memory is not interleaved in this case. Interleaving was useful in the days of DRAM, which required refreshing. It was interleaved because one bank would be accessed while the other refreshed. SDRAM does not need refreshing, and this kind of interleaving would offer no gain for SDRAM.
- Note that only sequential access is sped up by interleaving or dual channel. Random access gains little or no improvement. -NYC Dec 20, 2006 EST
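The arithmetic in the 386sx-style example above (16 bytes read in 8 ns, drained 4 bytes per 2 ns bus cycle) can be sanity-checked; a quick sketch using only the figures from that comment:

```python
# Checking the figures in the comment above: 16 bytes fetched per 8 ns
# access, transmitted 4 bytes at a time with a 2 ns bus cycle.

read_bytes = 16
read_time_ns = 8
bus_width_bytes = 4
cycle_time_ns = 2

bus_speed_mhz = 1000 / cycle_time_ns  # 2 ns per cycle -> 500.0 MHz
drain_time_ns = (read_bytes // bus_width_bytes) * cycle_time_ns  # 4 cycles

# drain_time_ns equals read_time_ns (8 ns), so the wide fetch keeps the
# narrow, fast bus continuously fed without any interleaving.
```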
- Actually, Pentiums did need 72-pin SIMMs in pairs to match their bus, and RAM in the same class as PC-100 provided a 64-bit bus (which would have required 4 matching 72-pin SIMMs if they hadn't changed things around again). This is what 8Mx64 means on a 64MB PC-100 DIMM. (And wasn't it 30 pins not 32 pins on the older SIMMs?) It does have something to do with interleaving, because on a 32-bit bus, bits 0-7 are stored on the first chip, 8-15 on the next chip, 16-23 on the next, and 24-31 on the next, and 32-39 are back on the first. So the memory is interleaved across 4 chips, with a tiny stripe size. I think this may be how a dual-channel memory controller works too. Regardless, both the old and new way (if they're different) see the same speed increase because each bus cycle transfers more data. I think what we'll see is another eventual consolidation of two chips into one if dual-channel out-paces cranking up the clock rate. So no, I don't think this is anything new, but just because it's an old trick doesn't mean it's ineffective. I wish I could match 8 or 16 chips to get 8x or 16x the bus speed, like Alphas mostly did. -- Bilbo1507 02:24, 20 January 2007 (UTC)
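The byte-lane striping described in the comment above (bits 0-7 on the first chip, bits 8-15 on the next, and so on across a 32-bit bus) amounts to a simple modulo mapping; a sketch of that mapping, as an illustration only:

```python
# Byte-lane striping on a 32-bit bus built from four 8-bit-wide chips,
# as described above: byte 0 on chip 0, byte 1 on chip 1, ..., byte 4
# back on chip 0 -- "interleaving with a tiny stripe size".

BUS_BYTES = 4  # four 8-bit chips make one 32-bit bus

def chip_for_byte(byte_addr):
    """Index (0-3) of the chip holding this byte address."""
    return byte_addr % BUS_BYTES
```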
[edit] Question
Do dual channel motherboards accept memory which doesn't support dual channel? i.e. is it backward compatible? Would be handy to know...
- There is no memory supporting only single-channel or only dual-channel. All memory sticks may work in dual-channel if you have a pair of two. Identical modules are best, or at least two similar ones (speed grade, memory density). --Denniss 09:55, 1 February 2007 (UTC)
Thank you for the response; I now understand the concept of dual-channel operation. However, it should be added to the main article for other people's benefit.
- Some retail boxed RAM, especially from Kingston, sometimes has a "Not Dual-Channel Compatible" warning label on it. I remember occasionally seeing this when I worked for <popular electronics store>. Keep in mind that these usually still work, but, as mentioned in this Wiki article, some motherboards may have issues with them. My P4 board with an SIS chipset had no problem dual channel-ing them. 66.177.213.82 21:32, 20 August 2007 (UTC)
[edit] I came here for clock speed information and couldn't find it
So, perhaps it should be added by someone who knows the answer. I'd like to know if RAM, in a dual-channel configuration, continues to run at the same clock speed or not. For example, PC-3200 RAM runs at 200 MHz. Would it continue to run at 200 MHz in a dual-channel configuration? It seems like I've read before that the speed drops in half (100 MHz in this example). Modul8r 22:50, 29 May 2007 (UTC)
- Why should the speed be cut in half? In dual-channel mode both PC-3200 sticks are still operating at 200 MHz, as long as they are compatible with each other and the memory controller likes them, too. --Denniss 11:38, 30 May 2007 (UTC)
- Your memory would only down-clock if you mixed it with a slower chip. For example, if you put a PC-2100 chip in there, all your RAM would run at 133 MHz. 66.177.213.82 21:34, 20 August 2007 (UTC)
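Putting numbers on the answers above, using the standard PC-3200 figures (200 MHz base clock, two transfers per clock, 64-bit channel): the clock stays at 200 MHz in dual-channel mode, and only the peak transfer rate doubles.

```python
# PC-3200 (DDR-400) peak bandwidth, single vs. dual channel.
# The clock stays at 200 MHz in both configurations; only the number
# of channels changes.

clock_mhz = 200          # DDR base clock for PC-3200
transfers_per_clock = 2  # "double data rate"
channel_width_bytes = 8  # one 64-bit channel

single_channel_mb_s = clock_mhz * transfers_per_clock * channel_width_bytes
dual_channel_mb_s = 2 * single_channel_mb_s  # peak doubles, clock does not
```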
[edit] MB or MiB?
This article is littered with misabbreviations. I once corrected them, however they were reverted, and the guilty party insisted that "GiB" was the correct abbreviation for gigabytes. Was there some inverse revolution where letters were ADDED to abbreviations, or does this other user not know what he's talking about? Seedsoflight 20:16, 7 September 2007 (UTC)
- See Binary prefix --Denniss 13:47, 9 September 2007 (UTC)
Yup, the boy don't know what's the difference between MB's and GB's 203.81.161.154 15:36, 14 September 2007 (UTC)
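For what it's worth, the distinction behind the Binary prefix link above, in numbers: MB/GB are decimal (SI) prefixes, while MiB/GiB are binary (IEC) ones, and memory module capacities are binary quantities.

```python
# Decimal (SI) vs. binary (IEC) prefixes -- the distinction behind the
# MB/MiB dispute above.

MB, GB = 10**6, 10**9    # megabyte, gigabyte (decimal)
MiB, GiB = 2**20, 2**30  # mebibyte, gibibyte (binary)

# A "512 MB" memory module actually holds 512 MiB:
module_bytes = 512 * MiB  # 536,870,912 bytes, ~4.9% more than 512 * MB
```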
[edit] Dual Channel with one stick
If you only use one stick of DDR2, is the speed effectively halved because it can't use dual channel? 91.84.211.193 10:24, 17 September 2007 (UTC)
- The maximum memory transfer rate is halved. Usually system performance does not decrease at the same rate, but how much it decreases depends on the board/CPU (a P4 is severely affected, but a Core/Athlon64 not as much). --Denniss —Preceding signed but undated comment was added at 15:22, 17 September 2007 (UTC)
[edit] Inadequate information and sourcing
The sole source for this article is currently a whitepaper from two technology companies that stand to benefit from promoting new memory technology. The whitepaper provides an elementary explanation of how dual-channel architecture works, but fails to discuss the question of what kinds of application environments succeed or fail to take advantage of this feature. The wide disparity of benchmark results (as little as about 3% improvement to as much as about 80% improvement, based on the approximate but not clearly stated raw numbers of directly comparable components, and ignoring the sometimes misleading use of chart numbering) makes clear that the identification of these specific environments is vital to understanding whether dual-channel is of any use to a consumer for their particular needs. (That's why I came to this article in the first place.) Can't we find some more objective and more thorough sources for this 4-year-old technology? ~ Jeff Q (talk) 03:45, 20 September 2007 (UTC)
[edit] Modules should be in non-matching banks
In order to achieve this, two or more DDR/DDR2 SDRAM memory modules must be installed into matching banks
I don't know if this is universal or not, but the sentence above is misleading. Colored banks can (always?) belong to the same channel -- to get dual-channel working correctly, paired modules should be installed to opposite banks, not matching banks. Thus, in a 3-bank configuration where 2 banks are blue and one is black, dual channel setup would require 1 module in a blue slot and 1 module in the black. Ham Pastrami 06:11, 26 October 2007 (UTC)
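The hypothetical 3-slot board described above (two blue slots sharing one channel, one black slot on the other) can be written down as a mapping. The slot names and channel assignments here are illustrative only; real boards vary, and the motherboard manual is the authority:

```python
# Hypothetical slot-to-channel map for the 3-slot board described above.
# Slot names and channel assignments are illustrative, not from any
# real board; check the motherboard manual for the actual mapping.

slot_channel = {"blue_0": "A", "blue_1": "A", "black_0": "B"}

def runs_dual_channel(populated_slots):
    """True if the populated slots cover both channels."""
    return {slot_channel[s] for s in populated_slots} == {"A", "B"}

# Two modules in the two blue slots share channel A -> single channel;
# one blue slot plus the black slot covers both -> dual channel.
```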
[edit] Misuse of "bank"
If the motherboard has two pairs of differently coloured DIMM sockets (the colours indicate which bank they belong to, bank 0 or bank 1), then one can place a matched pair of memory modules in bank 0, but a different-capacity pair of modules in bank 1, as long as they are of the same speed. Using this scheme, a pair of 1 GiB memory modules in bank 0 and a pair of matched 512 MiB modules in bank 1 would be acceptable for dual-channel operation.[1]
This is highly misleading. First of all, the word "bank" is being used ambiguously to mean a pair of corresponding DIMMs from each channel, which is not technically correct since the term "bank" refers to a single side of a module (paired up for DIMMs). The whitepaper itself refers to these as "DIMM 0" and "DIMM 1" of each channel (A and B). In practice, these would normally be labeled as bank 0/1, 2/3, 4/5, and 6/7 or simply as DIMM-0 through DIMM-3. The rest of the paragraph is also needlessly ambiguous -- what it's trying to say is that you can mix modules of different size in a channel, as long as the configuration is mirrored in the other channel. Which is somewhat redundant with stating the need for matching pairs of DIMMs. I'll try to clean it up later if no one gets around to it. Ham Pastrami 18:15, 26 October 2007 (UTC)
[edit] Actual results
I've removed this line from actual results:
But still there where numerous reports of users who've felt a performance boost from dual-channel. Some users reported a boost circa 70% in comparison to single-channel.
Aside from the terrible grammar and misspelled words, I felt that the lack of evidence and weasel words necessitated the removal until someone can write it up better and provide actual evidence Kakomu (talk) 17:45, 28 April 2008 (UTC)