Cell microprocessor implementations

Implementation

First edition Cell on 90 nm CMOS

IBM has published information concerning two different versions of Cell in this process: an early engineering sample designated DD1, and an enhanced version, designated DD2, intended for production.

Known Cell Variants in 90 nm Process
Designation | Die Area | First Disclosed        | Enhancement
DD1         | 221 mm²  | ISSCC 2005             | —
DD2         | 235 mm²  | Cool Chips, April 2005 | enhanced PPE core


The main enhancement in DD2 was a slight lengthening of the die to accommodate a larger PPE core, which is reported to "contain more SIMD/vector execution resources"[1]. Some preliminary information released by IBM references the DD1 variant; as a result, some early journalistic accounts of the Cell's capabilities differ from the production hardware.

Cell floorplan

PowerPoint material accompanying an STI presentation given by Dr. Peter Hofstee includes a photograph of the DD2 Cell die overdrawn with functional unit boundaries, captioned by name, which reveals the breakdown of silicon area by function unit as follows:


Cell Function Units and Footprint
Cell function unit | Area (%) | Description
XDR interface      | 5.7      | interface to Rambus XDR system memory
memory controller  | 4.4      | manages external memory and L2 cache
512 KiB L2 cache   | 10.3     | cache memory for the PPE
PPE core           | 11.1     | PowerPC processor core
test               | 2.0      | unspecified "test and decode logic"
EIB                | 3.1      | element interconnect bus linking the processing elements
SPE (each of 8)    | 6.2      | synergistic processing element
I/O controller     | 6.6      | external I/O logic
Rambus FlexIO      | 5.7      | external signalling for the I/O pins
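
Taking the per-SPE figure at face value, the eight SPEs together account for roughly half the die, and the labelled units sum to nearly the whole area (the small remainder presumably being unlabelled whitespace and glue logic):

\[ 8 \times 6.2\% \approx 49.6\%, \qquad 5.7 + 4.4 + 10.3 + 11.1 + 2.0 + 3.1 + 49.6 + 6.6 + 5.7 \approx 98.5\% \]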


SPE floorplan

Additional details concerning the internal SPE implementation have been disclosed by IBM engineers, including Peter Hofstee, IBM's chief architect of the synergistic processing element, in a scholarly IEEE publication.[2]

This document includes a photograph of the 2.54 × 5.81 mm SPE as implemented in 90 nm SOI. In this technology the SPE contains 21 million transistors, of which 14 million are in arrays (a term presumably designating the register files and the local store) and 7 million are in logic. The photograph is overdrawn with functional unit boundaries, captioned by name, which reveals the breakdown of silicon area by function unit as follows:

SPU Function Units and Footprint
SPU function unit | Area (%) | Description                                | Pipe
single precision  | 10.0     | single-precision FP execution unit         | even
double precision  | 4.4      | double-precision FP execution unit         | even
simple fixed      | 3.25     | fixed-point execution unit                 | even
issue control     | 2.5      | feeds the execution units                  | —
forward macro     | 3.75     | feeds the execution units                  | —
GPR               | 6.25     | general-purpose register file              | —
permute           | 3.25     | permute execution unit                     | odd
branch            | 2.5      | branch execution unit                      | odd
channel           | 6.75     | channel interface (three discrete blocks)  | odd
LS0–LS3           | 30.0     | four 64 KiB blocks of local store          | odd
MMU               | 4.75     | memory management unit                     | —
DMA               | 7.5      | direct memory access unit                  | —
BIU               | 9.0      | bus interface unit                         | —
RTB               | 2.5      | array built-in test block (ABIST)          | —
ATO               | 1.6      | atomic unit for atomic DMA updates         | —
HB                | 0.5      | obscure                                    | —

Understanding the dispatch pipes is important for writing efficient code. In the SPU architecture, two instructions can be dispatched (started) in each clock cycle, using dispatch pipes designated even and odd. The two pipes serve different execution units, as shown in the table above. As IBM partitioned the work, most arithmetic instructions execute on the even pipe, while most memory instructions execute on the odd pipe. The permute unit is closely associated with the memory instructions, as it serves to pack and unpack data structures located in memory into the SIMD multiple-operand format on which the SPU computes most efficiently.
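
To make this pairing concrete, the sketch below (assuming the spu-gcc compiler and the spu_intrinsics.h header from IBM's Cell SDK; the kernel, its name and its data layout are purely illustrative) mixes a shuffle, which runs on the odd-pipe permute unit, with a fused multiply-add, which runs on the even pipe, so the two instructions in the loop body are candidates for dual issue in the same cycle:

/* Minimal sketch of even/odd pipe pairing on the SPU.
 * Build with the Cell SDK toolchain, e.g.:  spu-gcc -c pipes.c          */
#include <spu_intrinsics.h>

/* Hypothetical kernel: extract the odd-indexed floats of an interleaved
 * array, scale them and accumulate the result.                          */
float sum_odd_scaled(const vector float *in, int nvec, float scale)
{
    /* Byte-shuffle pattern selecting elements 1 and 3 of each of the
     * two input vectors (bytes 4-7 and 12-15 of A, then of B).          */
    const vector unsigned char pick_odd = {
        0x04, 0x05, 0x06, 0x07, 0x0c, 0x0d, 0x0e, 0x0f,
        0x14, 0x15, 0x16, 0x17, 0x1c, 0x1d, 0x1e, 0x1f };

    vector float acc    = spu_splats(0.0f);
    vector float vscale = spu_splats(scale);
    int i;

    for (i = 0; i + 1 < nvec; i += 2) {
        /* Odd pipe: the permute unit repacks the interleaved data.      */
        vector float odd = spu_shuffle(in[i], in[i + 1], pick_odd);
        /* Even pipe: the fused multiply-add does the arithmetic.        */
        acc = spu_madd(odd, vscale, acc);
    }

    /* Horizontal sum of the four accumulator lanes.                     */
    return spu_extract(acc, 0) + spu_extract(acc, 1)
         + spu_extract(acc, 2) + spu_extract(acc, 3);
}

On real code, instruction scheduling and loop unrolling matter far more than this toy loop suggests, but the even/odd split is the constraint both the programmer and the compiler work within.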

Unlike other processor designs that provide distinct execution pipes, the SPU allows each instruction to dispatch only on its one designated pipe. In competing designs, more than one pipe might handle extremely common instructions such as add, permitting two or more of these instructions to execute concurrently, which can improve efficiency on unbalanced workloads. In keeping with the SPU's extremely Spartan design philosophy, no execution units are multiply provisioned.

Understanding the limitations of this restrictive two-pipeline design is one of the key concepts a programmer must grasp to write efficient SPU code at the lowest level of abstraction. For programmers working at higher levels of abstraction, a good compiler will automatically balance pipeline concurrency where possible.

SPE power and performance

As tested by IBM under a heavy transformation and lighting workload (average IPC of 1.4), the performance profile of a single SPU in this implementation is as follows:

Relationship of speed to heat
Voltage (V) | Frequency (GHz) | Power (W) | Die Temp (°C)
0.9         | 2.0             | 1         | 25
0.9         | 3.0             | 2         | 27
1.0         | 3.8             | 3         | 31
1.1         | 4.0             | 4         | 38
1.2         | 4.4             | 7         | 47
1.3         | 5.0             | 11        | 63

The entry for 2.0 GHz operation at 0.9 V represents a low-power configuration. The other entries show the peak stable operating frequency achieved at each voltage increment. As a general rule in CMOS circuits, power dissipation rises roughly in proportion to V² × F, the square of the supply voltage times the operating frequency.
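
As a rough worked example using two rows of the table above (treating the published figures as dynamic power only, an assumption of this back-of-the-envelope check):

\[ P \propto V^2 f \quad\Rightarrow\quad P_{5.0\,\mathrm{GHz}} \approx 2\,\mathrm{W} \times \left(\tfrac{1.3}{0.9}\right)^2 \times \tfrac{5.0}{3.0} \approx 7\,\mathrm{W} \]

The measured 11 W is noticeably higher, a gap the simple dynamic-power rule does not capture (leakage, for instance, grows with voltage and temperature).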

Though the wattage measurements provided by the IBM authors lack precision, they convey a good sense of the overall trend. These figures show the part is capable of running above 5 GHz under test-lab conditions, though at a die temperature too hot for standard commercial configurations. The first Cell processors made commercially available were rated by IBM to run at 3.2 GHz, an operating speed at which this chart suggests an SPU die temperature in the comfortable vicinity of 30 °C.

Note that a single SPU represents only about 6% of the Cell processor's die area, so the wattage figures given in the table above represent just a small portion of the chip's overall power budget.

Future editions in CMOS

IBM has publicly announced its intention to implement Cell in a process technology below the 90 nm node in order to improve power consumption. The reduced power consumption could potentially allow the existing design to be clocked at 5 GHz or above without exceeding the thermal constraints of existing products.

Prospects at 65 nm

The most likely design node for a future Cell processor is the upcoming 65 nm node, in which IBM and Toshiba have already invested heavily. All else being equal, a shrink to 65 nm would reduce the existing 235 mm² die fabricated in the 90 nm process to roughly half its current size, about 120 mm², greatly reducing IBM's manufacturing cost as well.
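
As a rough sanity check of that figure, an ideal shrink scales die area with the square of the feature size (real shrinks do somewhat worse, since pads and analog blocks do not scale linearly):

\[ \left(\tfrac{65\,\mathrm{nm}}{90\,\mathrm{nm}}\right)^2 \approx 0.52, \qquad 235\,\mathrm{mm^2} \times 0.52 \approx 122\,\mathrm{mm^2} \]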

On 12 March 2007, IBM announced that it had started producing 65 nm Cell processors at its East Fishkill fab. The chips produced there are apparently intended only for IBM's own Cell blade servers; a timeframe for the integration of these chips into the PlayStation 3 has not yet been announced. IBM's news release is scarce on technical details. So far it is only known that these 65 nm Cells clock up to 6 GHz and run on a 1.3 V core voltage, as demonstrated at ISSCC 2007. This would give the chip a theoretical peak of 384 GFLOPS in single precision, a significant improvement over the 204.8 GFLOPS peak that a 90 nm 3.2 GHz Cell can provide with 8 active SPUs. IBM further announced that it has implemented new power-saving features and a dual power supply for the SRAM arrays. Further details remain vague so far, but this version is not yet the rumoured "Cell+" with enhanced double-precision floating point performance, which is still scheduled for 2008 according to the roadmap.
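
These peak figures follow from the usual single-precision accounting, assuming each of the 8 SPUs retires one 4-wide fused multiply-add (2 FLOPs per lane) per cycle:

\[ 8 \times 4 \times 2 \times 3.2\,\mathrm{GHz} = 204.8\ \mathrm{GFLOPS}, \qquad 8 \times 4 \times 2 \times 6.0\,\mathrm{GHz} = 384\ \mathrm{GFLOPS} \]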

So far this appears to be a straightforward die shrink, as the size of the local store and the number of SPUs remain the same. The chip should consume significantly less power and be cheaper to produce thanks to the much smaller die.

IBM could elect to partially redesign the chip in future revisions to take advantage of the additional silicon area afforded by a shrink. The Cell architecture already makes explicit provision for the size of the local store to vary across implementations, and a chip-level interface is available to the programmer to determine the local store capacity, which is always an exact power of two.
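
A minimal host-side (PPE) sketch of such a query, assuming the libspe2 runtime library distributed with IBM's Cell SDK (spe_ls_size_get is part of that library; the error handling shown is illustrative):

/* Query the local store capacity of an SPE context via libspe2.
 * Build on a Cell/Linux system with something like:  gcc ls_size.c -lspe2 */
#include <stdio.h>
#include <libspe2.h>

int main(void)
{
    /* Create an SPE context: no gang, no special flags. */
    spe_context_ptr_t spe = spe_context_create(0, NULL);
    if (spe == NULL) {
        perror("spe_context_create");
        return 1;
    }

    /* The capacity is architecturally an exact power of two;
     * 256 KiB on the 90 nm and 65 nm parts discussed here.   */
    int ls_size = spe_ls_size_get(spe);
    if (ls_size > 0)
        printf("SPE local store: %d KiB\n", ls_size / 1024);

    spe_context_destroy(spe);
    return 0;
}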

Based on the reported die area of 30% for the local store in the 90 nm edition, it would be feasible to double the local store to 512 KiB per SPU while leaving the total die area devoted to the SPU processors roughly unchanged. In this scenario, the share of SPU area devoted to the local store would increase to about 60% while the other units shrink by half. Going this route would reduce heat and increase performance on memory-intensive workloads, but would yield IBM little if any reduction in manufacturing cost.

Prospects beyond 65 nm

Process technologies below 65 nm capable of implementing a Cell processor have not been demonstrated. For any number of reasons dictated by technology or the market, IBM might elect to discontinue the Cell technology without reaching these nodes. That said, IBM and Sony have made a substantial investment in Cell, and such a large investment is normally realized over several generations of new process technology.

The Sony, Toshiba and IBM alliance (STI) has announced its intention to continue working together and sharing innovation beyond the current 65 nm venture, at the 45 nm and 32 nm process nodes[3]. Cell has not been mentioned by name for implementation in either of these nodes, though if Cell becomes greatly successful it would be surprising if subsequent editions at these nodes were not someday forthcoming.