In computing, DDR4 SDRAM, an abbreviation for double data rate type four synchronous dynamic random-access memory, is a type of dynamic random-access memory (DRAM) with a high-bandwidth interface, currently under development and expected to be released to market in 2012. As a "next generation" successor to DDR3 SDRAM, it is one of several variants of DRAM, a class of memory in use since the early 1970s.[1] It is not directly compatible with any earlier type of random-access memory (RAM) because of differences in signaling voltages, timings, the physical interface, and other factors.
DDR4 itself is a DRAM interface specification. Its primary benefits compared to DDR3 include a higher range of clock frequencies and data transfer rates (2133–4266 MT/s, compared to DDR3's 800–2133 MT/s[2][3]) and significantly lower operating voltage (1.2–1.05 V for DDR4,[3] compared to 1.5–1.2 V for DDR3). DDR4 also anticipates a change in topology: it discards dual- and triple-channel approaches in favor of a point-to-point topology in which each channel in the memory controller is connected to a single module.[3][4] Switched memory banks are also an anticipated option for servers.[3]
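The quoted transfer rates translate directly into peak module bandwidth. As a rough illustration only, the sketch below assumes the conventional 64-bit (8-byte) data bus per memory channel, which is an assumption for illustration rather than something stated above:

```python
# Peak theoretical bandwidth for a memory module at a given transfer rate.
# The 64-bit (8-byte) channel width is an assumption for illustration.

def peak_bandwidth_gb_s(rate_mt_s: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s for a transfer rate given in MT/s."""
    return rate_mt_s * bus_bytes / 1000.0

for rate in (2133, 3200, 4266):  # DDR4 data rates mentioned above
    print(f"DDR4-{rate}: ~{peak_bandwidth_gb_s(rate):.1f} GB/s per channel")
# Prints roughly 17.1, 25.6 and 34.1 GB/s respectively.
```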
Standards body JEDEC began working on a successor to DDR3 around 2005,[5] about two years before the launch of DDR3 in 2007.[6][7] The high-level architecture of DDR4 was planned for completion in 2008 and, as of 2007, was said by the chairman of JEDEC's Future DRAM task group to be "on time".[8] The final specification is expected in the second half of 2011,[9] shortly before DDR4's commercial launch.[9] Some advance information was published in 2007,[10] and a guest speaker from Qimonda provided further public details in a presentation at the August 2008 Intel Developer Forum (IDF) in San Francisco.[10][11][12][13] DDR4 was described as using a 30 nm process at 1.2 volts, with data rates of 2133 MT/s at "regular" speed and 3200 MT/s at "enthusiast" speed, reaching market in 2012 before transitioning to 1 volt in 2013.[11][13]
Further details were revealed at MemCon 2010 in Tokyo (a computer memory industry event), where a presentation by a JEDEC director titled "Time to rethink DDR4",[14] containing a slide titled "New roadmap: More realistic roadmap is 2015", led some websites to report that the introduction of DDR4 was probably[15] or definitely[16][17] delayed until 2015. However, DDR4 test samples were announced in line with the original schedule in early 2011, at which time manufacturers began to advise that large-scale commercial production and release to market were scheduled for 2012.[9]
DDR4 is expected to represent 5% of the DRAM market in 2013[9] and to reach mass-market adoption and 50% market penetration around 2015;[9] the latter is comparable with the roughly five years DDR3 took to achieve the mass-market transition from DDR2.[3] This is in part because adopting DDR4 requires changes to many other parts of computer systems, which would need to be updated to work with it.[2]
In February 2009, Samsung validated 40 nm DRAM chips, considered a "significant step" towards DDR4 development[18] since in 2009, DRAM chips were only beginning to migrate to a 50 nm process.[19] In January 2011, Samsung announced the completion and release for testing of a 2 GB DDR4 DRAM module based on a process between 30 and 39 nm.[20] It has a maximum data transfer rate of 2133 Mb/s at 1.2 V, uses pseudo open drain technology (adapted from graphics DDR memory[21]) and draws 40% less power than an equivalent DDR3 module.[22][23][20]
Three months later, in April 2011, Hynix announced the production of 2 GB DDR4 modules running at 2400 MT/s and 1.2 V on a process between 30 and 39 nm (the exact process was unspecified),[9] adding that it anticipated commencing high-volume production in the second half of 2012.[9] Semiconductor processes for DDR4 are expected to transition to sub-30 nm at some point between late 2012 and 2014.[3][24]
The new chips are expected to run at 1.2 V or less,[25][26] compared with the 1.5 V of DDR3 chips, and to deliver more than 2 billion data transfers per second. They are expected to be introduced at 2133 MT/s, estimated to rise to a potential 4266 MT/s[2] and a lowered voltage of 1.05 V[27] by 2013. DDR4 is likely to be commercialized initially on 32–36 nm processes[2] and, according to a roadmap by PC Watch (Japan) and comments by Samsung, as 4 Gbit chips.[20][24] Increased memory density is also anticipated, possibly using through-silicon via (TSV) or other 3D stacking processes.[2][3][4][28] The DDR4 specification will include standardized 3D stacking "from the start", according to JEDEC.[28] X-bit Labs commented that "as a result DDR4 memory chips with very high density will become relatively inexpensive".[2] DDR4 uses an 8n prefetch architecture with bank groups, including the use of two or four selectable bank groups.[29]
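As a rough sketch of what an 8n prefetch implies for a single burst, assuming a conventional 64-bit-wide module (an assumption for illustration, not a figure given above):

```python
# Illustrative arithmetic for an 8n prefetch: each column access fetches
# 8 bits per data pin internally, so an assumed 64-bit module delivers
# 64 bytes per burst.

prefetch = 8            # bits fetched per data pin per column access (8n)
module_width_bits = 64  # assumed module data-bus width

burst_bytes = prefetch * module_width_bits // 8
print(f"One burst delivers {burst_bytes} bytes")  # 64 bytes
```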
DDR4 also anticipates a change in topology. It discards dual- and triple-channel approaches (used since the original first-generation DDR[30]) in favor of a point-to-point topology in which each channel in the memory controller is connected to a single module.[3][4] This mirrors the trend seen in the earlier transition from PCI to PCI Express, in which parallelism was moved from the interface to the controller,[4] and is likely to simplify timing in modern high-speed data buses.[4] Switched memory banks are also an anticipated option for servers.[3][4]
The minimum speed of 2133 MT/s was said to be due to progress made in DDR3 speeds: with DDR3 likely to reach 2133 Mb/s, there was little commercial benefit to specifying DDR4 below that speed.[2][3] Techgage interpreted Samsung's January 2011 engineering sample as having a CAS latency of 13 clock cycles, an increase described as comparable to that seen in the move from DDR2 to DDR3.[21]
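To put a CAS latency expressed in clock cycles into absolute terms, the conversion below assumes the usual DDR convention that the clock runs at half the MT/s data rate; the figures are illustrative, not taken from the cited sources:

```python
# Convert a CAS latency in clock cycles to nanoseconds, assuming the DDR
# convention of two data transfers per clock (clock = data rate / 2).

def cas_latency_ns(cl_cycles: int, data_rate_mt_s: int) -> float:
    clock_mhz = data_rate_mt_s / 2         # e.g. 2133 MT/s -> ~1066.5 MHz
    return cl_cycles * 1000.0 / clock_mhz  # cycle time in ns, times CL

print(f"CL13 at 2133 MT/s is about {cas_latency_ns(13, 2133):.1f} ns")  # ~12.2 ns
```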
In 2008, the book Wafer Level 3-D ICs Process Technology raised concerns that non-scaling analog elements such as charge pumps and voltage regulators, along with additional circuitry, "have allowed significant increases in bandwidth but they consume much more die area". Examples include CRC error detection, on-die termination, burst hardware, programmable pipelines, low impedance, and an increasing need for sense amplifiers (attributed to a decline in bits per bitline due to low voltage). The authors noted that, as a result, the share of the die devoted to the memory array itself has declined over time, from 70–80% with SDRAM and DDR1 to 38% for DDR3, and potentially to less than 30% for DDR4.[31]