Cell software development
Software development for the Cell microprocessor involves a mixture of conventional development practices for the Power Architecture-compatible PPU core and novel challenges posed by the functionally reduced SPU coprocessors.
Linux on Cell
An open source software-based strategy was adopted to accelerate the development of a Cell BE ecosystem and to provide an environment to develop Cell applications, including a GCC-based Cell compiler, binutils and a port of the Linux operating system.[1]
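The following is a hedged sketch of a minimal PPU-side host program built on this environment, using the open-source libspe2 library to run an SPU program; the embedded program handle name spu_kernel is an assumption for illustration, not taken from the cited work.

```c
/* Hedged sketch: launching an SPU program from the PPU under Linux with libspe2.
 * The handle "spu_kernel" is assumed to be embedded into the PPU binary by the
 * toolchain; the name is illustrative. */
#include <libspe2.h>
#include <stdio.h>

extern spe_program_handle_t spu_kernel;   /* embedded SPU ELF image */

int main(void)
{
    spe_context_ptr_t ctx = spe_context_create(0, NULL);
    if (ctx == NULL) { perror("spe_context_create"); return 1; }

    if (spe_program_load(ctx, &spu_kernel) != 0) { perror("spe_program_load"); return 1; }

    unsigned int entry = SPE_DEFAULT_ENTRY;
    if (spe_context_run(ctx, &entry, 0, NULL, NULL, NULL) < 0)
        perror("spe_context_run");        /* runs the SPU program to completion */

    spe_context_destroy(ctx);
    return 0;
}
```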
Software portability
Adapting VMX for SPU
Differences between VMX and SPU
The VMX (Vector Multimedia Extensions) technology is conceptually similar to the vector model provided by the SPU processors, but there are many significant differences.
| Feature | VMX | SPU |
|---|---|---|
| Word size | 32 bits | 32 bits |
| Number of registers | 32 | 128 |
| Register width | 128-bit quadword | 128-bit quadword |
| Integer formats | 8, 16, 32 bits | 8, 16, 32, 64 bits |
| Saturation support | yes | no |
| Byte ordering | big-endian (default), little-endian | big-endian |
| Floating-point modes | Java, non-Java | single precision, IEEE double precision |
| Memory alignment | quadword only | quadword only |
The VMX Java mode conforms to the Java Language Specification 1 subset of the default IEEE standard, extended to include IEEE and C9X compliance where the Java standard falls silent. In a typical implementation, non-Java mode flushes denormal values to zero, whereas Java mode traps into an emulator when the processor encounters such a value.
The IBM PPE Vector/SIMD manual does not define operations for double-precision floating point, though IBM has published material implying certain double-precision performance numbers associated with the Cell PPE VMX technology.
Intrinsics
Compilers for Cell provide intrinsics to expose useful SPU instructions in C and C++. Instructions that differ only in the type of operand (such as a, ai, ah, ahi, fa, and dfa for addition) are typically represented by a single C/C++ intrinsic which selects the proper instruction based on the type of the operand.
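The following hedged sketch illustrates the idea, assuming the SPU C/C++ language extensions and the spu_intrinsics.h header; the function names are illustrative.

```c
/* Minimal sketch: the generic spu_add intrinsic selects the underlying
 * instruction from the operand types (function names are illustrative). */
#include <spu_intrinsics.h>

vector signed int add_words(vector signed int a, vector signed int b)
{
    return spu_add(a, b);      /* integer word add ("a") */
}

vector float add_floats(vector float a, vector float b)
{
    return spu_add(a, b);      /* single-precision add ("fa") */
}

vector signed int add_imm(vector signed int a)
{
    return spu_add(a, 10);     /* scalar operand; may map to the immediate form ("ai") */
}
```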
Porting VMX code for SPU
A large body of code developed for other IBM Power processors could potentially be adapted and recompiled to run on the SPU. This code base includes VMX code that runs under the PowerPC version of Apple's Mac OS X, where it is better known as AltiVec. Depending on how many VMX-specific features are involved, the adaptation can range from straightforward, to onerous, to completely impractical. The most important workloads for the SPU generally map quite well.
In some cases it is possible to port existing VMX code directly. If the VMX code is highly generic (it makes few assumptions about the execution environment), the translation can be relatively straightforward. The two processors specify different binary code formats, so recompilation is required at a minimum. Even where instructions with the same behavior exist, they do not have the same names, so the names must be mapped as well. IBM provides compiler intrinsics that take care of this mapping transparently as part of the development toolkit.
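One common way to express such a mapping, shown here only as a hedged sketch, is a thin portability wrapper selected at compile time; it assumes altivec.h on the PPU side, spu_intrinsics.h on the SPU side, and the __SPU__ macro defined by the SPU compiler. The wrapper name is illustrative, not the SDK's own mapping header.

```c
/* Sketch of a thin wrapper mapping one vector operation to either the VMX
 * or the SPU intrinsic, chosen at compile time (illustrative). */
#ifdef __SPU__
#include <spu_intrinsics.h>
static inline vector float vadd_f4(vector float a, vector float b)
{
    return spu_add(a, b);      /* SPU single-precision add ("fa") */
}
#else
#include <altivec.h>
static inline vector float vadd_f4(vector float a, vector float b)
{
    return vec_add(a, b);      /* VMX single-precision add ("vaddfp") */
}
#endif
```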
In many cases, however, no directly equivalent instruction exists. The workaround might be obvious, or it might not. For example, if saturation behavior is required on the SPU, it can be emulated with a short sequence of additional SPU instructions, at some cost in efficiency. At the other extreme, if Java floating-point semantics are required, this is almost impossible to achieve on the SPU processor; obtaining the same computation on the SPU might require an entirely different algorithm to be written from scratch.
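As a hedged sketch of the saturation case, the following emulates a VMX-style saturating unsigned word addition (roughly the behavior of vadduws) with ordinary SPU instructions; it assumes spu_intrinsics.h, and the function name is illustrative.

```c
/* Hedged sketch: saturating unsigned 32-bit addition on the SPU, which has
 * no saturating add instruction of its own. */
#include <spu_intrinsics.h>

static inline vector unsigned int
add_u32_sat(vector unsigned int a, vector unsigned int b)
{
    vector unsigned int sum  = spu_add(a, b);      /* modular add ("a") */
    /* A wrapped result is (unsigned) smaller than either operand. */
    vector unsigned int wrap = spu_cmpgt(a, sum);  /* logical compare ("clgt"), all-ones mask */
    return spu_or(sum, wrap);                      /* force wrapped lanes to 0xFFFFFFFF */
}
```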
The most important conceptual similarity between VMX and the SPU architecture is that they support the same vectorization model. For this reason, most algorithms already adapted to AltiVec will usually adapt successfully to the SPU architecture as well.
Local store exploitation
Transferring data between the local stores of different SPUs can have a large performance cost. The local stores of individual SPUs can be exploited using a variety of strategies:

- Applications with high locality, such as dense matrix computations, represent an ideal workload class for the local stores in Cell BE.[2]
- Streaming computations can be efficiently accommodated with software pipelining of memory block transfers, using a multi-buffering strategy (see the sketch after this list).[1]
- A software cache offers a solution for random accesses.[3]
- More sophisticated applications can use multiple strategies for different data types.[4]
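The multi-buffering strategy mentioned above can be sketched as follows, assuming the MFC DMA commands from spu_mfcio.h; the chunk size, buffers, and process_block routine are illustrative assumptions, not taken from the cited work.

```c
/* Hedged sketch of double-buffered ("multi-buffered") streaming into an SPU
 * local store using MFC DMA commands. */
#include <spu_mfcio.h>
#include <stdint.h>

#define CHUNK 16384                                  /* bytes per DMA block (illustrative) */

static char buf[2][CHUNK] __attribute__((aligned(128)));

static void process_block(char *data, unsigned int n)
{
    (void)data; (void)n;                             /* compute kernel would go here */
}

void stream(uint64_t ea, unsigned int nblocks)       /* ea assumed suitably aligned */
{
    unsigned int cur = 0, nxt = 1;

    mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);         /* fetch the first block */
    for (unsigned int i = 0; i < nblocks; i++) {
        if (i + 1 < nblocks)                         /* start fetching the next block */
            mfc_get(buf[nxt], ea + (uint64_t)(i + 1) * CHUNK, CHUNK, nxt, 0, 0);
        mfc_write_tag_mask(1 << cur);                /* wait only for the current block */
        mfc_read_tag_status_all();
        process_block(buf[cur], CHUNK);              /* computation overlaps the next DMA */
        cur ^= 1; nxt ^= 1;
    }
}
```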
Compiler-mediated parallelism
References
1. "An Open Source Environment for Cell Broadband Engine System Software" (PDF). June 2007.
2. "Synergistic Processing in Cell's Multicore Architecture" (PDF). March 2006.
3. "Using advanced compiler technology to exploit the performance of the Cell Broadband Engine architecture" (PDF). January 2006.
4. "Cell GC: Using the Cell Synergistic Processor as a Garbage Collection Coprocessor" (PDF). March 2008.