Dynamic random-access memory

Dynamic random-access memory (DRAM) is a type of random-access memory that stores each bit of data in a separate capacitor within an integrated circuit. The capacitor can be either charged or discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1. Since capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory.

The main memory (the "RAM") in personal computers is dynamic RAM (DRAM). It is the RAM in laptop, notebook and workstation computers, as well as some of the RAM of home game consoles (PlayStation 3, Xbox 360 and Wii).

The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to six transistors in SRAM. This allows DRAM to reach very high densities. Unlike flash memory, DRAM is volatile memory (cf. non-volatile memory), since it loses its data quickly when power is removed. The transistors and capacitors used are extremely small; billions can fit on a single memory chip.

History

The cryptanalytic machine code-named "Aquarius" used at Bletchley Park during World War II incorporated a hard-wired dynamic memory. Paper tape was read and the characters on it "were remembered in a dynamic store. ... The store used a large bank of capacitors, which were either charged or not, a charged capacitor representing cross (1) and an uncharged capacitor dot (0). Since the charge gradually leaked away, a periodic pulse was applied to top up those still charged (hence the term 'dynamic')".[1]

In 1964 Arnold Farber and Eugene Schlig, working for IBM, created a hard-wired memory cell, using a transistor gate and tunnel diode latch. They replaced the latch with two transistors and two resistors, a configuration which became known as the Farber-Schlig cell. In 1965 Benjamin Agusta and his team at IBM created a 16-bit silicon memory chip based on the Farber-Schlig cell, with 80 transistors, 64 resistors, and four diodes. In 1966 DRAM was invented by Dr. Robert Dennard at the IBM Thomas J. Watson Research Center. He was granted U.S. patent number 3,387,286 in 1968. Capacitors had been used for earlier memory schemes such as the drum of the Atanasoff–Berry Computer, the Williams tube and the Selectron tube.

The Toshiba "Toscal" BC-1411 electronic calculator, which was introduced in November 1966,[2] used a form of dynamic RAM built from discrete components.[3]

In 1969 Honeywell asked Intel to make a DRAM using a 3-transistor cell that they had developed. This became the Intel 1102 (1024x1) in early 1970. However, the 1102 had many problems, prompting Intel to begin work on their own improved design, in secrecy to avoid conflict with Honeywell. This became the first commercially-available DRAM memory, the Intel 1103 (1024x1), in October 1970, despite initial problems with low yield until the fifth revision of the masks. The 1103 was designed by Joel Karp and laid out by Barbara Maness.

The first DRAM with multiplexed row and column address lines was the Mostek MK4096 (4096x1) designed by Robert Proebsting and introduced in 1973. This addressing scheme, a radical advance, enabled it to fit into packages with fewer pins, a cost advantage that grew with every jump in memory size. The MK4096 proved to be a very robust design for customer applications. At the 16K density, the cost advantage increased; the Mostek MK4116 16K DRAM, introduced in 1976, achieved greater than 75% worldwide DRAM market share. However, as density increased to 64K in the early 1980s, Mostek was overtaken by Japanese DRAM manufacturers selling higher-quality DRAMs using the same multiplexing scheme at below-cost prices.

Operation principle

DRAM is usually arranged in a square array of one capacitor and transistor per data bit storage cell. The illustrations to the right show a simple example with 4 by 4 cells (modern DRAM matrices are many thousands of cells in height and width).

The long horizontal lines connecting each row are known as word lines. Each column of cells is actually composed of two bit lines, each one connected to every other storage cell in the column (the illustration to the right does not include this important detail). They are generally known as the + and − bit lines. A sense amplifier is essentially a pair of cross-connected inverters between the bit lines, with the first inverter connected from the + bit line to the − bit line, and the second from the − bit line to the + bit line. This is an example of positive feedback, and the arrangement is stable only once one bit line is fully high and the other is fully low.

To read a bit from a column, the following operations take place:

  1. The sense amplifier is disconnected, then the bit lines are precharged to exactly equal voltages that are between the high and low logic levels. The bit lines are physically symmetrical so that their capacitances, and therefore their voltages, are matched as closely as possible.
  2. The precharge circuit is switched off. Because the bit lines are relatively long, they have enough capacitance to maintain the pre-charged voltage for a brief time. This is an example of dynamic logic.
  3. The desired row's word line is then driven high to connect a cell's storage capacitor to its bit line. This causes the transistor to conduct, transferring charge between the storage cell and the connected bit line. If the storage cell capacitor is discharged, it pulls the bit-line voltage down slightly; if it is charged, it pulls the bit-line voltage up slightly. Because this signal is proportional to the ratio of the cell capacitance to the total bit-line capacitance, every effort is made to keep the capacitance of the storage cells high and the capacitance of the bit lines low.
  4. The sense amplifier is switched on. The positive feedback takes over and amplifies the small voltage difference between bit-lines until one bit line is fully at the lowest voltage and the other is at the maximum high voltage. Once this has happened, the row is "open" (the desired cell data is available).
  5. All the columns in the open row are sensed simultaneously, and the results are sampled into the data latches. The supplied column address then selects which latch bit to connect to the external circuit. Many reads can be performed in quick succession without further sensing delay, because for the open row all of the data has already been sensed and latched.
  6. While the reading of all columns proceeds (the normal and desirable method, because it most quickly provides data), current flows back up the bit lines from the sense amplifiers to the storage cells. This reinforces (i.e. "refreshes") the charge in the storage cell, by increasing the voltage in the storage capacitor if it was charged to begin with, or by keeping it discharged if it was empty. Because the long bit lines create a fairly long propagation delay for the charge transfer, this continues well beyond the end of sense amplification and thus overlaps with one or more column reads.
  7. When reading of all the columns in the current row is complete, the word line is switched off to disconnect the cell storage capacitors (the row is "closed"), the sense amplifier is switched off, and the bit lines are precharged again.

To write to memory, the row is opened and a given column's sense amplifier is temporarily forced to the desired high or low voltage state, thus it drives the bit line to charge or discharge the cell storage capacitor to the desired value. Due to positive feedback, the amplifier will then hold it stable even after the forcing is removed. During a write to a particular cell, all the columns in a row are sensed simultaneously just as in reading, a single column's cell storage capacitor charge is changed, and then the entire row is written back in, as illustrated in the figure to the right.
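
The read and write sequences above can be condensed into a small conceptual model. The C sketch below is illustrative only: the array, row buffer and function names (activate, read_bit, write_bit, precharge) are invented for this example, and a real device performs these steps with analog charge sharing and sense amplifiers rather than software.

    #include <stdbool.h>
    #include <stdio.h>

    #define ROWS 4
    #define COLS 4

    static bool cell[ROWS][COLS];     /* storage capacitors: charged = 1   */
    static bool sense_latch[COLS];    /* sense amplifiers / row data latch */
    static int  open_row = -1;        /* -1 means every row is closed      */

    /* Activate a row: the word line is driven high, every column is sensed
     * at once, and sensing writes the values back into the cells (the
     * refresh side effect described in step 6). */
    static void activate(int row) {
        for (int c = 0; c < COLS; c++) {
            sense_latch[c] = cell[row][c];   /* charge sharing + amplification */
            cell[row][c]   = sense_latch[c]; /* sensing restores the cell      */
        }
        open_row = row;
    }

    static bool read_bit(int row, int col) {
        if (open_row != row) activate(row);  /* row miss: open the row first  */
        return sense_latch[col];             /* row hit: data already latched */
    }

    static void write_bit(int row, int col, bool v) {
        if (open_row != row) activate(row);
        sense_latch[col] = v;                /* force the sense amplifier ...  */
        cell[row][col]   = v;                /* ... which drives the capacitor */
    }

    static void precharge(void) { open_row = -1; }   /* close the row */

    int main(void) {
        write_bit(2, 1, true);
        precharge();
        printf("%d\n", read_bit(2, 1));      /* prints 1 */
        return 0;
    }

Note how a second access to the same row (a "row hit") skips the activate step entirely; this is the basis of the page-mode and burst interfaces described later.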

Typically, manufacturers specify that each row must have its storage cell capacitors refreshed every 64 ms or less, as defined by the JEDEC (Joint Electron Device Engineering Council) standard. Refresh logic is provided in the DRAM controller, which automates the periodic refresh so that no software or other hardware has to perform it. This makes the controller's logic circuit more complicated, but this drawback is outweighed by the fact that DRAM is much cheaper per storage cell, and because each storage cell is very simple, DRAM has much greater storage density (capacity per unit of chip area) than SRAM.

Some systems refresh every row in a burst of activity involving all rows every 64 ms. Other systems refresh one row at a time staggered throughout the 64 ms interval. For example, a system with 2¹³ = 8192 rows would require a staggered refresh rate of one row every 7.8 µs (64 ms divided by 8192 rows). A few real-time systems refresh a portion of memory at a time determined by an external timer function that governs the operation of the rest of a system, such as the vertical blanking interval that occurs every 10–20 ms in video equipment. All methods require some sort of counter to keep track of which row is the next to be refreshed. Most DRAM chips include that counter; older types require external refresh logic to hold the counter. (Under some conditions, most of the data in DRAM can be recovered even if the DRAM has not been refreshed for several minutes.)
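
A minimal sketch of the staggered-refresh bookkeeping, assuming the 64 ms interval and 8192 rows of the example above; the wrap-around row counter mirrors the counter that the DRAM controller or the chip itself maintains.

    #include <stdio.h>

    #define REFRESH_PERIOD_US 64000.0   /* 64 ms retention requirement */
    #define NUM_ROWS          8192      /* 2^13 rows in this example   */

    int main(void) {
        /* One row must be refreshed every 64 ms / 8192 rows = 7.8125 us. */
        printf("refresh one row every %.4f us\n", REFRESH_PERIOD_US / NUM_ROWS);

        /* The controller keeps a row counter and issues one refresh per tick. */
        unsigned refresh_row = 0;
        for (int tick = 0; tick < 3; tick++) {
            printf("tick %d: refresh row %u\n", tick, refresh_row);
            refresh_row = (refresh_row + 1) % NUM_ROWS;  /* wrap after the last row */
        }
        return 0;
    }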

Memory timing

Many parameters are required to fully describe the timing of DRAM operation. Here are some examples for two timing grades of asynchronous DRAM, from a data sheet published in 1998:[4]

"50 ns" "60 ns" Description
tRC 84 ns 104 ns Random read or write cycle time (from one full /RAS cycle to another)
tRAC 50 ns 60 ns Access time: /RAS low to valid data out
tRCD 11 ns 14 ns /RAS low to /CAS low time
tRAS 50 ns 60 ns /RAS pulse width (minimum /RAS low time)
tRP 30 ns 40 ns /RAS precharge time (minimum /RAS high time)
tPC 20 ns 25 ns Page-mode read or write cycle time (/CAS to /CAS)
tAA 25 ns 30 ns Access time: Column address valid to valid data out (includes address setup time before /CAS low)
tCAC 13 ns 15 ns Access time: /CAS low to valid data out
tCAS 8 ns 10 ns /CAS low pulse width minimum

Thus, the generally quoted number is the /RAS access time. This is the time to read a random bit from a precharged DRAM array. The time to read additional bits from an open page is much less.

When such a RAM is accessed by clocked logic, the times are generally rounded up to the nearest clock cycle. For example, when accessed by a 100 MHz state machine (i.e. a 10 ns clock), the 50 ns DRAM can perform the first read in five clock cycles, and additional reads within the same page every two clock cycles. This was generally described as "5‐2‐2‐2" timing, as bursts of four reads within a page were common.
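
The rounding described above can be written out directly. This sketch assumes the "50 ns" column of the table (tRAC = 50 ns for the first access, tPC = 20 ns for subsequent page-mode accesses) and a 10 ns controller clock.

    #include <math.h>
    #include <stdio.h>

    /* Round a nanosecond timing up to whole cycles of a given clock period. */
    static int to_cycles(double t_ns, double clk_ns) {
        return (int)ceil(t_ns / clk_ns);
    }

    int main(void) {
        const double clk = 10.0;             /* 100 MHz state machine      */
        int first = to_cycles(50.0, clk);    /* tRAC = 50 ns -> 5 cycles   */
        int page  = to_cycles(20.0, clk);    /* tPC  = 20 ns -> 2 cycles   */
        printf("%d-%d-%d-%d\n", first, page, page, page);  /* prints 5-2-2-2 */
        return 0;
    }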

When describing synchronous memory, timing is described by clock cycle counts separated by hyphens. These numbers represent tCL-tRCD-tRP-tRAS in multiples of the DRAM clock cycle time. Note that the clock rate is half the data transfer rate when double data rate signaling is used. JEDEC standard PC3200 timing is 3-4-4-8[5] with a 200 MHz clock, while a premium-priced high performance PC3200 DDR DRAM DIMM might be operated at 2-2-2-5 timing.[6]

Values are given as clock cycles, with the corresponding time in parentheses.

           PC-3200 (DDR-400)       PC2-6400 (DDR2-800)      PC3-12800 (DDR3-1600)
           Typical     Fast        Typical      Fast        Typical       Fast        Description
tCL        3 (15 ns)   2 (10 ns)   5 (12.5 ns)  4 (10 ns)   9 (11.25 ns)  8 (10 ns)   /CAS low to valid data out (equivalent to tCAC)
tRCD       4 (20 ns)   2 (10 ns)   5 (12.5 ns)  4 (10 ns)   9 (11.25 ns)  8 (10 ns)   /RAS low to /CAS low time
tRP        4 (20 ns)   2 (10 ns)   5 (12.5 ns)  4 (10 ns)   9 (11.25 ns)  8 (10 ns)   /RAS precharge time (minimum precharge to active time)
tRAS       8 (40 ns)   5 (25 ns)   16 (40 ns)   12 (30 ns)  27 (33.75 ns) 24 (30 ns)  Row active time (minimum active to precharge time)

The improvement over 11 years is not that significant. Minimum random access time has improved from tRAC = 50 ns to tRCD + tCL = 22.5 ns, and even the premium 20 ns variety is only 2.5 times better. CAS latency has improved even less, from tCAC = 13 ns to 10 ns. However, the DDR3 memory does achieve 32 times higher bandwidth; due to internal pipelining and wide data paths, it can output two words every 1.25 ns (1600 Mword/s), while the EDO DRAM can output one word per tPC = 20 ns (50 Mword/s).
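
The figures quoted in this comparison can be rechecked from the table entries. The sketch below assumes the DDR3-1600 typical timings (tRCD = tCL = 9 cycles of 1.25 ns) and the 1998 EDO values tRAC = 50 ns and tPC = 20 ns.

    #include <stdio.h>

    int main(void) {
        const double ddr3_cycle_ns = 1.25;   /* DDR3-1600: 800 MHz clock       */

        /* Minimum random access time: tRCD + tCL (typical timings).           */
        double ddr3_random_ns = (9 + 9) * ddr3_cycle_ns;    /* = 22.5 ns        */
        double edo_random_ns  = 50.0;                       /* tRAC             */

        /* Peak word rate: DDR transfers two words per clock, EDO one per tPC. */
        double ddr3_mword_s = 2.0 / ddr3_cycle_ns * 1000.0; /* = 1600 Mword/s   */
        double edo_mword_s  = 1000.0 / 20.0;                /* = 50 Mword/s     */

        printf("latency:   %.1f ns -> %.1f ns\n", edo_random_ns, ddr3_random_ns);
        printf("bandwidth: %.0f -> %.0f Mword/s (%.0fx)\n",
               edo_mword_s, ddr3_mword_s, ddr3_mword_s / edo_mword_s);
        return 0;
    }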

Timing abbreviations

  • tCL – CAS latency
  • tCR – Command rate
  • tPTP – precharge to precharge delay
  • tRAS – RAS active time
  • tRCD – RAS to CAS delay
  • tREF – Refresh period
  • tRFC – Row refresh cycle time
  • tRP – RAS precharge
  • tRRD – RAS to RAS delay
  • tRTP – Read to precharge delay
  • tRTR – Read to read delay
  • tRTW – Read to write delay
  • tWR – Write recovery time
  • tWTP – Write to precharge delay
  • tWTR – Write to read delay
  • tWTW – Write to write delay

Error detection and correction

Electrical or magnetic interference inside a computer system can cause a single bit of DRAM to spontaneously flip to the opposite state. The majority of one-off ("soft") errors in DRAM chips occur as a result of background radiation, chiefly neutrons from cosmic ray secondaries, which may change the contents of one or more memory cells or interfere with the circuitry used to read or write them. Recent studies[7] show that single event upsets due to cosmic radiation have been dropping dramatically as process geometries shrink, and that previous concerns over increasing bit cell error rates are unfounded.

This problem can be mitigated by using redundant memory bits and memory controllers that exploit these bits, usually implemented within DRAM modules. These extra bits are used to record parity and to enable missing data to be reconstructed by error-correcting code (ECC). Parity allows the detection of all single-bit errors (actually, any odd number of wrong bits). The most common error-correcting code, a SECDED Hamming code, allows a single-bit error to be corrected and, in the usual configuration, with an extra parity bit, double-bit errors to be detected.

An ECC-capable memory controller as used in many modern PCs can typically detect and correct errors of a single bit per 64-bit "word" (the unit of bus transfer), and detect (but not correct) errors of two bits per 64-bit word. Some systems also 'scrub' the errors, by writing the corrected version back to memory. The BIOS in some computers, and operating systems such as Linux, allow counting of detected and corrected memory errors; this allows identification and replacement of failing memory modules.
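
As an illustration of the SECDED principle (not of any particular module's actual code), the sketch below extends a Hamming(7,4) code with an overall parity bit; ECC memory applies the same idea to 64 data bits with 8 check bits.

    #include <stdint.h>
    #include <stdio.h>

    static int parity(uint8_t v) {            /* XOR of all bits of v */
        int p = 0;
        while (v) { p ^= v & 1; v >>= 1; }
        return p;
    }

    /* Encode 4 data bits into an 8-bit SECDED codeword.  Bit 0 is the overall
     * parity bit; bits 1..7 are Hamming positions 1..7 (check bits at
     * positions 1, 2 and 4, data bits at positions 3, 5, 6 and 7). */
    static uint8_t secded_encode(uint8_t data) {
        int d0 = (data >> 0) & 1, d1 = (data >> 1) & 1;
        int d2 = (data >> 2) & 1, d3 = (data >> 3) & 1;
        int p1 = d0 ^ d1 ^ d3;                /* covers positions 3, 5, 7 */
        int p2 = d0 ^ d2 ^ d3;                /* covers positions 3, 6, 7 */
        int p4 = d1 ^ d2 ^ d3;                /* covers positions 5, 6, 7 */
        uint8_t cw = (uint8_t)((p1 << 1) | (p2 << 2) | (d0 << 3) |
                               (p4 << 4) | (d1 << 5) | (d2 << 6) | (d3 << 7));
        return (uint8_t)(cw | parity(cw));    /* bit 0: overall parity    */
    }

    /* Returns 0 = no error, 1 = single error corrected, 2 = double error
     * detected (uncorrectable).  Corrected data is written to *data. */
    static int secded_decode(uint8_t cw, uint8_t *data) {
        int s1 = parity(cw & 0xAA);           /* checks positions 1,3,5,7 */
        int s2 = parity(cw & 0xCC);           /* checks positions 2,3,6,7 */
        int s4 = parity(cw & 0xF0);           /* checks positions 4,5,6,7 */
        int syndrome = s1 | (s2 << 1) | (s4 << 2);
        int overall  = parity(cw);            /* 0 if total parity is consistent */
        int status = 0;
        if (syndrome != 0 && overall != 0) {
            cw ^= (uint8_t)(1u << syndrome);  /* single-bit error: flip it back  */
            status = 1;
        } else if (syndrome != 0 && overall == 0) {
            status = 2;                       /* double-bit error: detect only   */
        } else if (syndrome == 0 && overall != 0) {
            cw ^= 1u;                         /* error in the overall parity bit */
            status = 1;
        }
        *data = (uint8_t)(((cw >> 3) & 1) | (((cw >> 5) & 1) << 1) |
                          (((cw >> 6) & 1) << 2) | (((cw >> 7) & 1) << 3));
        return status;
    }

    int main(void) {
        uint8_t cw = secded_encode(0xB);      /* data bits 1011               */
        cw ^= 1u << 5;                        /* inject one "soft error"      */
        uint8_t data;
        int status = secded_decode(cw, &data);
        printf("status=%d data=0x%X\n", status, data);  /* status=1 data=0xB */
        return 0;
    }

Flipping any single codeword bit is corrected, while flipping two bits is reported as an uncorrectable double error, which is the behaviour the memory controller relies on when it scrubs or reports errors.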

Recent tests give widely varying error rates with over 7 orders of magnitude difference, ranging from 10⁻¹⁰ to 10⁻¹⁷ error/bit·h, i.e. roughly one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory.[7][8][9]

Packaging

For economic reasons, the large (main) memories found in personal computers, workstations, and non-handheld game-consoles (such as PlayStation and Xbox) normally consist of dynamic RAM (DRAM). Other parts of the computer, such as cache memories and data buffers in hard disks, normally use static RAM (SRAM).

General DRAM formats

Dynamic random access memory is produced as integrated circuits (ICs) bonded and mounted into plastic packages with metal pins for connection to control signals and buses. In early use individual DRAM ICs were usually installed directly on the motherboard or on ISA expansion cards; later they were assembled into multi-chip plug-in modules (DIMMs, SIMMs, etc.). Some standard module types are:

Common DRAM modules

Common DRAM packages as illustrated to the right, from top to bottom:

  1. DIP 16-pin (DRAM chip, usually pre-fast page mode DRAM (FPRAM))
  2. SIPP 30-pin (usually FPRAM)
  3. SIMM 30-pin (usually FPRAM)
  4. SIMM 72-pin (often extended data out DRAM (EDO DRAM) but FPRAM is not uncommon)
  5. DIMM 168-pin (SDRAM)
  6. DIMM 184-pin (DDR SDRAM)
  7. RIMM 184-pin (RDRAM) not pictured
  8. DIMM 240-pin (DDR2 SDRAM and DDR3 SDRAM) not pictured

Versions

While the fundamental DRAM cell and array have maintained the same basic structure (and performance) for many years, there have been many different interfaces for communicating with DRAM chips. When one speaks of "DRAM types", one is generally referring to the interface that is used.

Asynchronous DRAM

This is the basic form from which all others derive. An asynchronous DRAM chip has power connections, some number of address inputs (typically 12), and a few (typically one or four) bidirectional data lines. There are four active-low control signals:

  • /RAS, the row address strobe
  • /CAS, the column address strobe
  • /WE, write enable
  • /OE, output enable

This interface provides direct control of internal timing. When /RAS is driven low, a /CAS cycle must not be attempted until the sense amplifiers have sensed the memory state, and /RAS must not be returned high until the storage cells have been refreshed. When /RAS is driven high, it must be held high long enough for precharging to complete.

Although the RAM is asynchronous, the signals are typically generated by a clocked memory controller, which limits their timing to multiples of the controller's clock cycle.

RAS Only Refresh (ROR)

Classic asynchronous DRAM is refreshed by opening each row in turn.

The refresh cycles are distributed across the entire refresh interval in such a way that all rows are refreshed within the required interval. To refresh one row of the memory array using /RAS Only Refresh, the following steps must occur:

  1. The row address of the row to be refreshed must be applied at the address input pins.
  2. /RAS must switch from high to low. /CAS must remain high.
  3. At the end of the required amount of time, /RAS must return high.

This can be done by supplying a row address and pulsing /RAS low; it is not necessary to perform any /CAS cycles. An external counter is needed to iterate over the row addresses in turn.[10]
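
Expressed as a controller routine, one /RAS-only refresh cycle looks roughly as follows. The helpers (set_address, set_ras, delay_ns) and the constants are hypothetical stand-ins for the controller's pin-level state machine; here they merely trace the sequence.

    #include <stdio.h>

    #define NUM_ROWS  8192
    #define T_RAS_MIN 50               /* ns; minimum /RAS low time */

    /* Hypothetical pin-level helpers; a real controller drives actual pins. */
    static void set_address(unsigned row) { printf("address <= %u\n", row); }
    static void set_ras(int level)        { printf("/RAS <= %d\n", level); }
    static void delay_ns(int ns)          { printf("wait %d ns\n", ns); }

    static unsigned refresh_counter = 0;  /* the external row counter */

    static void ras_only_refresh_one_row(void) {
        set_address(refresh_counter);     /* 1. apply the row address          */
        set_ras(0);                       /* 2. /RAS low; /CAS remains high    */
        delay_ns(T_RAS_MIN);              /*    hold for the minimum /RAS time */
        set_ras(1);                       /* 3. /RAS back high (precharge)     */
        refresh_counter = (refresh_counter + 1) % NUM_ROWS;
    }

    int main(void) {
        ras_only_refresh_one_row();       /* refresh row 0 */
        ras_only_refresh_one_row();       /* refresh row 1 */
        return 0;
    }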

CAS before RAS refresh (CBR)

For convenience, the counter was quickly incorporated into RAM chips themselves. If the /CAS line is driven low before /RAS (normally an illegal operation), then the DRAM ignores the address inputs and uses an internal counter to select the row to open. This is known as /CAS-before-/RAS (CBR) refresh.

This became the standard form of refresh for asynchronous DRAM, and is the only form generally used with SDRAM.

Hidden refresh

Given support of CAS-before-RAS refresh, it is possible to deassert /RAS while holding /CAS low to maintain data output. If /RAS is then asserted again, this performs a CBR refresh cycle while the DRAM outputs remain valid. Because data output is not interrupted, this is known as "hidden refresh".[11]

Video DRAM (VRAM)

VRAM is a dual-ported variant of DRAM that was once commonly used to store the frame-buffer in some graphics adaptors.

Window DRAM (WRAM)

WRAM is a variant of VRAM that was once used in graphics adaptors such as the Matrox Millennium and ATI 3D Rage Pro. WRAM was designed to perform better and cost less than VRAM. WRAM offered up to 25% greater bandwidth than VRAM and accelerated commonly used graphical operations such as text drawing and block fills.[12]

Fast page mode DRAM (FPM DRAM)

Fast page mode DRAM is also called FPM DRAM, FPRAM, Page mode DRAM, Fast page mode memory, or Page mode memory.

In page mode, a row of the DRAM can be kept "open" by holding /RAS low while performing multiple reads or writes with separate pulses of /CAS so that successive reads or writes within the row do not suffer the delay of precharge and accessing the row. This increases the performance of the system when reading or writing bursts of data.

Static column is a variant of page mode in which the column address does not need to be strobed in, but rather, the address inputs may be changed with /CAS held low, and the data output will be updated accordingly a few nanoseconds later.

Nibble mode is another variant in which four sequential locations within the row can be accessed with four consecutive pulses of /CAS. The difference from normal page mode is that the address inputs are not used for the second through fourth /CAS edges; they are generated internally starting with the address supplied for the first /CAS edge.

Extended data out DRAM (EDO DRAM)

EDO DRAM, sometimes referred to as Hyper Page Mode enabled DRAM, is similar to Fast Page Mode DRAM with the additional feature that a new access cycle can be started while keeping the data output of the previous cycle active. This allows a certain amount of overlap in operation (pipelining), allowing somewhat improved performance. It was 5% faster than FPM DRAM, which it began to replace in 1995, when Intel introduced the 430FX chipset that supported EDO DRAM.

To be precise, EDO DRAM begins data output on the falling edge of /CAS, but does not stop the output when /CAS rises again. It holds the output valid (thus extending the data output time) until either /RAS is deasserted, or a new /CAS falling edge selects a different column address.

Single-cycle EDO has the ability to carry out a complete memory transaction in one clock cycle. Otherwise, each sequential RAM access within the same page takes two clock cycles instead of three, once the page has been selected. EDO's performance and capabilities allowed it to somewhat replace the then-slow L2 caches of PCs. It created an opportunity to reduce the immense performance loss associated with a lack of L2 cache, while making systems cheaper to build. This was also beneficial for notebooks, given their limited form factor and battery life constraints. An EDO system with L2 cache was tangibly faster than the older FPM/L2 combination.

Single-cycle EDO DRAM became very popular on video cards towards the end of the 1990s. It was very low cost, yet nearly as efficient for performance as the far more costly VRAM.

Much equipment taking 72-pin SIMMs could use either FPM or EDO. Problems were possible, particularly when mixing FPM and EDO. Early Hewlett-Packard printers had FPM RAM built in; some, but not all, models worked if additional EDO SIMMs were added.[13]

Burst EDO DRAM (BEDO DRAM)

An evolution of EDO DRAM, Burst EDO DRAM, could process four memory addresses in one burst, for a maximum of 5-1-1-1, saving an additional three clocks over optimally designed EDO memory. This was done by adding an address counter on the chip to keep track of the next address. BEDO also added a pipeline stage, allowing the page-access cycle to be divided into two components. During a memory-read operation, the first component accessed the data from the memory array to the output stage (second latch). The second component drove the data bus from this latch at the appropriate logic level. Since the data is already in the output buffer, quicker access time is achieved (up to 50% for large blocks of data) than with traditional EDO.

Although BEDO DRAM showed additional optimization over EDO, by the time it was available the market had made a significant investment towards synchronous DRAM (SDRAM). Even though BEDO RAM was superior to SDRAM in some ways, the latter technology quickly displaced BEDO.

Multibank DRAM (MDRAM)

Multibank DRAM applies the interleaving technique for main memory to second-level cache memory to provide a cheaper and faster alternative to SRAM. The chip splits its memory capacity into small blocks of 256 kB and allows operations to two different banks in a single clock cycle.

This memory was primarily used in graphics cards with Tseng Labs ET6x00 chipsets, and was made by MoSys. Boards based upon this chipset often used the unusual RAM size of 2.25 MB, owing to MDRAM's ability to be implemented more easily in various sizes. This size of 2.25 MB allowed 24-bit color at a resolution of 1024×768, a very popular display setting at the time.
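
The 2.25 MB figure follows from the frame-buffer arithmetic; the short check below assumes 3 bytes per pixel for 24-bit color.

    #include <stdio.h>

    int main(void) {
        long pixels = 1024L * 768;        /* display resolution           */
        long bytes  = pixels * 3;         /* 24-bit color = 3 bytes/pixel */
        printf("%.2f MB\n", bytes / (1024.0 * 1024.0));   /* prints 2.25  */
        return 0;
    }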

Synchronous graphics RAM (SGRAM)

SGRAM is a specialized form of SDRAM for graphics adaptors. It adds functions such as bit masking (writing to a specified bit plane without affecting the others) and block write (filling a block of memory with a single colour). Unlike VRAM and WRAM, SGRAM is single-ported. However, it can open two memory pages at once, which simulates the dual-port nature of other VRAM technologies.
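
Functionally, the two operations amount to the following software restatement (the function names are invented for this sketch; the chip performs the equivalent internally, without occupying the data bus for every word).

    #include <stdint.h>
    #include <stdio.h>

    /* Bit-masked write: only the planes selected by mask are updated. */
    static uint32_t masked_write(uint32_t old, uint32_t data, uint32_t mask) {
        return (old & ~mask) | (data & mask);
    }

    /* Block write: fill a run of words with a single colour value. */
    static void block_write(uint32_t *buf, int n, uint32_t colour) {
        for (int i = 0; i < n; i++)
            buf[i] = colour;
    }

    int main(void) {
        uint32_t pixel = 0x00FF00FF;
        pixel = masked_write(pixel, 0x12345678, 0x0000FFFF); /* low 16 bits only */
        printf("%08X\n", (unsigned)pixel);                   /* 00FF5678 */

        uint32_t fill[8];
        block_write(fill, 8, 0x00FFFFFF);                    /* fill with white */
        printf("%08X\n", (unsigned)fill[7]);                 /* 00FFFFFF */
        return 0;
    }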

Synchronous dynamic RAM (SDRAM)

SDRAM significantly revises the asynchronous memory interface, adding a clock (and a clock enable) line. All other signals are received on the rising edge of the clock.

The /RAS and /CAS inputs no longer act as strobes, but are instead, along with /WE, part of a 3-bit command:

SDRAM Command summary
/CS /RAS /CAS /WE Address Command
H x x x x Command inhibit (No operation)
L H H H x No operation
L H H L x Burst Terminate: stop a read or write burst in progress
L H L H column Read from currently active row
L H L L column Write to currently active row
L L H H row Activate a row for read and write
L L H L x Precharge (deactivate) the current row
L L L H x Auto refresh: Refresh one row of each bank, using an internal counter
L L L L mode Load mode register: Address bus specifies DRAM operation mode.
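
The table can be read as a small decoder. In the sketch below the enum names are invented; the signal encoding follows the table, with 0 for a low (asserted) input and 1 for high.

    #include <stdio.h>

    typedef enum {
        CMD_INHIBIT, CMD_NOP, CMD_BURST_TERMINATE, CMD_READ, CMD_WRITE,
        CMD_ACTIVATE, CMD_PRECHARGE, CMD_AUTO_REFRESH, CMD_LOAD_MODE
    } sdram_cmd;

    static sdram_cmd decode(int cs, int ras, int cas, int we) {
        if (cs) return CMD_INHIBIT;               /* /CS high: chip not selected */
        int code = (ras << 2) | (cas << 1) | we;  /* the 3-bit command field     */
        switch (code) {
            case 7:  return CMD_NOP;              /* H H H */
            case 6:  return CMD_BURST_TERMINATE;  /* H H L */
            case 5:  return CMD_READ;             /* H L H */
            case 4:  return CMD_WRITE;            /* H L L */
            case 3:  return CMD_ACTIVATE;         /* L H H */
            case 2:  return CMD_PRECHARGE;        /* L H L */
            case 1:  return CMD_AUTO_REFRESH;     /* L L H */
            default: return CMD_LOAD_MODE;        /* L L L */
        }
    }

    int main(void) {
        /* /CS low, /RAS low, /CAS high, /WE high -> activate a row. */
        printf("%d\n", decode(0, 0, 1, 1) == CMD_ACTIVATE);   /* prints 1 */
        return 0;
    }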

The /OE line's function is extended to a per-byte "DQM" signal, which controls data input (writes) in addition to data output (reads). This allows DRAM chips to be wider than 8 bits while still supporting byte-granularity writes.

Although many timing parameters remain under the control of the DRAM controller (e.g. a certain minimum amount of time must elapse between a row being activated and a subsequent read or write command), one important parameter must be programmed into the SDRAM chip itself, namely the CAS latency. This is the number of clock cycles allowed for internal operations between a read command and the first data word appearing on the data bus. The "Load mode register" command is used to transfer this value to the SDRAM chip. Other configurable parameters include the length of read and write bursts, i.e. the number of words transferred per read or write command.

The most significant change, and the primary reason that SDRAM has supplanted asynchronous RAM, is the support for multiple internal banks inside the DRAM chip. Using a few bits of "bank address" which accompany each command, a second bank can be activated and begin reading data while a read from the first bank is in progress. By alternating banks, an SDRAM device can keep the data bus continuously busy, in a way that asynchronous DRAM cannot.

Single data rate (SDR)

Single data rate SDRAM (sometimes known as SDR) is a synchronous form of DRAM.

Double data rate (DDR)

Double data rate SDRAM (DDR) was a later development of SDRAM, used in PC memory beginning in 2000. Subsequent versions are numbered sequentially (DDR2, DDR3, etc.). DDR SDRAM internally performs double-width accesses at the clock rate, and uses a double data rate interface to transfer one half on each clock edge. DDR2 and DDR3 increased this factor to 4× and 8×, respectively, delivering 4-word and 8-word bursts over 2 and 4 clock cycles, respectively. The internal access rate is mostly unchanged (200 million per second for DDR-400, DDR2-800 and DDR3-1600 memory), but each access transfers more data.
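
The relationship between the internal access rate and the external data rate is a simple multiplication by the prefetch width, as this small check illustrates (assuming the 200 million internal accesses per second noted above).

    #include <stdio.h>

    int main(void) {
        const double internal_macc_s = 200.0;         /* million internal accesses/s */
        const int    prefetch[]      = { 2, 4, 8 };   /* DDR, DDR2, DDR3             */
        const char  *name[]          = { "DDR-400", "DDR2-800", "DDR3-1600" };

        for (int i = 0; i < 3; i++)                   /* prints 400, 800, 1600       */
            printf("%-10s %4.0f Mword/s\n", name[i], internal_macc_s * prefetch[i]);
        return 0;
    }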

Direct Rambus DRAM (DRDRAM)

Direct Rambus DRAM (DRDRAM) was developed by Rambus.

Pseudostatic RAM (PSRAM)

PSRAM or PSDRAM is dynamic RAM with built-in refresh and address-control circuitry to make it behave similarly to static RAM (SRAM). It combines the high density of DRAM with the ease of use of true SRAM. PSRAM (made by Numonyx) is used in the Apple iPhone and other embedded systems.[14]

Some DRAM components have a "self-refresh mode". While this involves much of the same logic that is needed for pseudo-static operation, this mode is often equivalent to a standby mode. It is provided primarily to allow a system to suspend operation of its DRAM controller to save power without losing data stored in DRAM, not to allow operation without a separate DRAM controller as is the case with PSRAM.

An embedded variant of PSRAM is sold by MoSys under the name 1T-SRAM. It is technically DRAM, but behaves much like SRAM. It is used in the Nintendo GameCube and Wii consoles.

Reduced Latency DRAM (RLDRAM)

Reduced Latency DRAM is a high performance double data rate (DDR) SDRAM that combines fast, random access with high bandwidth, mainly intended for networking and caching applications.

1T DRAM

Unlike all of the other variants described in this section of this article, 1T DRAM is actually a different way of constructing the basic DRAM bit cell. 1T DRAM is a "capacitorless" bit cell design that stores data in the parasitic body capacitor that is an inherent part of Silicon on Insulator transistors. Considered a nuisance in logic design, this floating body effect can be used for data storage. Although refresh is still required, reads are non-destructive; the stored charge causes a detectable shift in the threshold voltage of the transistor.[15]

There are several types of 1T DRAM memories: the commercialized Z-RAM from Innovative Silicon, the TTRAM from Renesas and the A-RAM from the UGR/CNRS consortium. Faster versions of 1T DRAM are sometimes called 1T-SRAM.

The classic one-transistor/one-capacitor (1T/1C) DRAM cell is also sometimes referred to as "1T DRAM".

Security

Although dynamic memory is only specified and guaranteed to retain its contents when supplied with power and refreshed every 64 ms, the memory cell capacitors often retain their values for significantly longer, particularly at low temperatures.[16] Under some conditions most of the data in DRAM can be recovered even if it has not been refreshed for several minutes.[17]

This property can be used to circumvent security and recover data stored in memory and assumed to be destroyed at power-down by quickly rebooting the computer and dumping the contents of the RAM, or by cooling the chips and transferring them to a different computer. Such an attack was demonstrated to circumvent popular disk encryption systems, such as the open source TrueCrypt, Microsoft's BitLocker Drive Encryption, and Apple's FileVault.[16] This type of attack against a computer is often called a cold boot attack.

See also

References

  1. ^ Copeland B. Jack, and others (2006) Colossus: The Secrets of Bletchley Park's Codebreaking Computers Oxford: Oxford University Press, p301.
  2. ^ Spec Sheet for Toshiba "TOSCAL" BC-1411
  3. ^ Toshiba "Toscal" BC-1411 Desktop Calculator (The introduction date is listed here as November 1965, but this is a year too early and appears to be a typographical error.)
  4. ^ Micron 4 Meg x 4 EDO DRAM data sheet
  5. ^ cmx1024-3200.ai
  6. ^ http://www.corsairmemory.com/corsair/products/specs/twinx1024-3200xl.pdf
  7. ^ a b Borucki, "Comparison of Accelerated DRAM Soft Error Rates Measured at Component and System Level", 46th Annual International Reliability Physics Symposium, Phoenix, 2008, pp. 482–487
  8. ^ http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf
  9. ^ http://www.ece.rochester.edu/~xinli/usenix07/
  10. ^ [1]
  11. ^ Various Methods of DRAM Refresh Micron Technical Note TN-04-30
  12. ^ The PC Guide, definition of WRAM.
  13. ^ Page on memory upgrades for HP printers
  14. ^ EE Times teardown of iPhone 3G
  15. ^ Sallese, Jean-Michel (2002-06-20). "Principles of the 1T Dynamic Access Memory Concept on SOI". MOS Modeling and Parameter Extraction Group Meeting. Wroclaw, Poland. http://legwww.epfl.ch/ekv/mos-ak/wroclaw/MOS-AK_JMS.pdf. Retrieved 2007-10-07. 
  16. ^ a b "Center for Information Technology Policy » Lest We Remember: Cold Boot Attacks on Encryption Keys". http://web.archive.org/web/20110722182409/http://citp.princeton.edu/memory/.  080222 citp.princeton.edu
  17. ^ Scheick, Leif Z.; Guertin, Steven M.; Swift, Gary M. (December 2000). "Analysis of radiation effects on individual DRAM cells" (PDF). IEEE Trans. on Nuclear Science 47 (6): 2534–2538. doi:10.1109/23.903804. ISSN 0018-9499. http://parts.jpl.nasa.gov/docs/DRAM_Indiv-00.pdf. Retrieved 2008-11-03. 

External links