Berkeley RISC

Berkeley RISC was one of two seminal research projects into RISC-based microprocessor design taking place under ARPA's VLSI project. RISC was led by David Patterson at the University of California, Berkeley between 1980 and 1984, while the other was taking place only a short drive away at Stanford University under their MIPS effort, which started in 1981 and ran until 1984. Berkeley's project was so successful that its name came to describe all similar designs to follow; even the MIPS would become known as a "RISC processor". The Berkeley RISC design was later commercialized as the SPARC processor.

The RISC concept

Both RISC and MIPS were developed from the realization that the vast majority of programs did not use the vast majority of a processor's instructions. In one calculation it was found that the entire Unix system, when compiled, used only 30% of the available instructions on the Motorola 68000. Much of the circuitry in the m68k and similar designs was dedicated to decoding instructions that were never used. The RISC idea was to include only those instructions that were really used, and to spend the freed-up transistors on making the system faster instead.

To do this, RISC concentrated on adding many more registers, small bits of memory holding temporary values that can be accessed at essentially no cost. This contrasts with normal main memory, which might take several cycles to access. By providing more registers, and making sure the compilers actually used them, programs would run much faster. Additionally, the speed of the processor would be more closely defined by its clock speed, because less of its time would be spent waiting for memory accesses. Transistor for transistor, a RISC design would outperform a conventional CPU, hopefully by a wide margin.

On the downside, the instructions being removed were generally performing several "sub-instructions". For instance, the ADD instruction of a traditional design would generally come in several flavours: one that added the numbers in two registers and placed the result in a third, another that added numbers found in main memory and put the result in a register, and so on. The RISC designs, on the other hand, included only a single flavour of any particular instruction; the ADD, for instance, would always use registers for all operands. This forced the programmer to write additional instructions to load values from memory when needed, making a RISC program "less dense".

In the era of expensive memory this was a real concern, notably because memory was also much slower than the CPU. Since a RISC design's ADD would actually require four instructions (two loads, an add, and a store), the machine would have to do much more memory access to read the extra instructions, potentially slowing it down considerably. This was offset to some degree by the fact that the new designs used what was then a very large instruction word of 32 bits, allowing small constants to be folded directly into the instruction instead of having to be loaded separately. Additionally, the result of one operation is often used soon after by another, so by skipping the write to memory and keeping the result in a register, the program did not end up much larger and could in theory run much faster. For instance, a string of instructions carrying out a series of mathematical operations might require only a few loads from memory, while most of the values being used are either constants folded into the instructions themselves or intermediate results already held in registers.
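
To make the trade-off concrete, the sketch below contrasts the two styles on a toy machine written in Python. The instruction names, addresses and register numbers are illustrative assumptions, not the actual RISC I encoding; the point is only that a single register-memory add becomes a load/load/add/store sequence.

    # A hypothetical machine: a word-addressed data memory and a
    # RISC-style file of 32 registers.
    memory = {0x100: 7, 0x104: 35, 0x108: 0}
    regs = [0] * 32

    # CISC style: one instruction performs three memory accesses.
    def add_mem(dst, src1, src2):
        memory[dst] = memory[src1] + memory[src2]

    # RISC style: only LOAD and STORE touch memory; ADD uses registers.
    def load(rd, addr):
        regs[rd] = memory[addr]

    def add(rd, rs1, rs2):
        regs[rd] = regs[rs1] + regs[rs2]

    def store(rs, addr):
        memory[addr] = regs[rs]

    add_mem(0x108, 0x100, 0x104)   # one "fat" CISC instruction

    load(1, 0x100)                 # the same work as four simple
    load(2, 0x104)                 # RISC instructions: two loads,
    add(3, 1, 2)                   # an add, and a store
    store(3, 0x108)

    assert memory[0x108] == 7 + 35

Each of the four simple instructions does less work, but each is easy to decode and execute quickly, and once the operands are in registers, further operations on them need no loads at all.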

But to the casual observer it was not clear whether the RISC concept would improve performance or actually make it worse. The only way to be sure was to simulate it, and in test after test, every simulation showed an enormous overall benefit in performance from this design.

Where the two projects, RISC and MIPS, differed was in the handling of the registers. MIPS simply added lots of them and left it to the compilers to make use of them. RISC, on the other hand, added circuitry to the CPU to "help" the compiler. RISC used the concept of register windows, in which the entire "register file" was broken down into blocks, allowing the compiler to "see" one block for global variables, and another for local variables.

The idea was to make one particularly common instruction, the procedure call, extremely easy to implement in the compilers. Almost all computer languages use a system known as an activation record or stack frame, which contains the return address of the caller, the data that was passed in, and any results that need to be returned. In the vast majority of cases these frames are small, typically with three or fewer inputs and one or no outputs. In the Berkeley design, then, a stack frame would most likely fit entirely within the register window.

In this case the call into and return from a procedure is simple and extremely fast. A single instruction is called to set up a new block of registers, operands are passed in on the "low end" of the new frame, and then the code jumps into the procedure. On return, the results are placed in the frame at the same end and the code exits. The register windows are set up to overlap at the ends, meaning that the results from the call simply "appear" in the window of the code that called it, with no data having to be copied. Thus the common procedure call did not have to interact with main memory, greatly speeding it up.
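
A rough Python sketch of the mechanism follows. The sizes here (four windows of eight registers, overlapping by two) are made-up illustrative values, not the Berkeley layout; the point is that a call merely moves a window pointer, so the caller's top registers and the callee's bottom registers are the same physical storage:

    NWINDOWS, WSIZE, OVERLAP = 4, 8, 2
    FILE_SIZE = NWINDOWS * (WSIZE - OVERLAP)
    regfile = [0] * FILE_SIZE          # the physical register file
    cwp = 0                            # current window pointer

    def reg_index(n):
        # Map per-window register number n to a physical register.
        return (cwp * (WSIZE - OVERLAP) + n) % FILE_SIZE

    def call():
        global cwp                     # a real design would trap on
        cwp += 1                       # overflow and spill to memory

    def ret():
        global cwp
        cwp -= 1

    # The caller places an argument in its top (overlapping) registers...
    regfile[reg_index(WSIZE - OVERLAP)] = 99
    call()
    # ...and the callee finds it at the bottom of its own window,
    # with no data copied anywhere.
    assert regfile[reg_index(0)] == 99
    ret()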

On the downside, this approach meant that procedures with large numbers of local variables were problematic, while ones with fewer led to registers (an expensive resource) being wasted. It was Stanford's work on compilers that led them to ignore the register window concept, believing that a smart compiler could make better use of the registers than a fixed system in hardware.

RISC I

The first attempt to implement the RISC concept was originally known as Gold. Work on the design started in 1980 as part of a VLSI design course, but the design, complex for its time, crashed almost all existing design tools. The team had to spend considerable amounts of time improving or re-writing the tools, and even with these new tools it took just under an hour to extract the design on a VAX 11/780.

The final design, named RISC I, was published by the ACM in 1981. It had 44,500 transistors implementing 31 instructions and a register file containing 78 32-bit registers. This allowed for six register windows containing 14 registers each, with an additional 18 globals. The control and instruction decode section occupied only 6% of the die, whereas the typical design of the era used about 50% for the same role. The register file took up most of the remaining space.

RISC I also featured a two-stage instruction pipeline for additional speed, but without the complex instruction re-ordering of modern designs. This left conditional branches as a problem, because the compiler had to fill the instruction slot following a conditional branch (the so-called "branch delay slot") with something selected to be "safe" (i.e., not dependent on the outcome of the conditional). Sometimes the only suitable instruction was a NOP.
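
The toy Python simulation below, assuming a simplified two-stage pipeline rather than the actual RISC I microarchitecture, illustrates the behaviour: the instruction sitting in the delay slot executes whether or not the branch is taken, so the compiler must put something harmless there.

    program = [
        ("add", 1),     # ordinary work: acc += 1
        ("beq", 4),     # branch to index 4 if the flag is set
        ("nop", 0),     # delay slot: runs even when beq is taken
        ("add", 10),    # skipped when the branch is taken
        ("halt", 0),
    ]

    def run(flag):
        acc, pc = 0, 0
        while program[pc][0] != "halt":
            op, arg = program[pc]
            if op == "add":
                acc, pc = acc + arg, pc + 1
            elif op == "nop":
                pc += 1
            elif op == "beq":
                # The already-fetched delay-slot instruction at pc + 1
                # executes either way; only what follows it is skipped.
                slot_op, slot_arg = program[pc + 1]
                if slot_op == "add":
                    acc += slot_arg
                pc = arg if flag else pc + 2
        return acc

    assert run(flag=True) == 1    # add(1); nop in slot; add(10) skipped
    assert run(flag=False) == 11  # falls through: add(1) + add(10)

Replacing the NOP in the slot with a useful, outcome-independent instruction is exactly the optimization the compiler was expected to perform.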

After a month of validation and debugging, the design was sent to the innovative MOSIS fab for production on June 22, 1981, using a 2 μm process (2,000 nanometers in modern measurements). Here a variety of delays forced them to abandon their current masks four separate times, and wafers with working examples didn't arrive back at Berkeley until May 1982. The first working "computer", actually a checkout board, ran on June 11. In testing, the chips proved to have disappointing performance. In general an instruction took 2 μs to complete, while the original design expected them to take about 400 ns, five times faster. The precise reasons for this problem were never fully explained, although throughout testing it was clear that certain instructions did run at the expected speed, suggesting the problem was physical, not logical.

Had the design "worked" at full speed, performance should have been excellent. Simulations using a variety of small programs, comparing the 4 MHz RISC I to the 5 MHz 32-bit VAX 11/780 and the 5 MHz 16-bit Zilog Z8000, showed this clearly. Program size was about 30% larger than on the VAX but very close to that on the Z8000, validating the argument that the higher code density of CISC designs was not actually all that impressive in reality. In terms of overall performance, the RISC I was twice as fast as the VAX and about four times as fast as the Z8000. More interestingly, the programs ended up making about the same overall number of memory accesses, because the large register file dramatically improved the odds that a needed operand was already on-chip.

It is important to put this performance in context. Even if the RISC design had run slower than the VAX, it would have made no difference to the importance of the design. The key issue was that RISC made it possible to produce a true 32-bit processor on a real chip die using what was already an outdated fab. Traditional designs simply couldn't do this; with so much of the chip surface dedicated to decoder logic, a true 32-bit design like the Motorola 68020 required newer fabs before it became practical. On those same fabs, RISC I would have simply crushed the competition.

RISC II

While the RISC I design ran into delays, work at Berkeley had already turned to the new Blue design. Work on Blue progressed more slowly than on Gold, due both to the lack of a pressing need now that Gold was going to fab, and to changeovers in the classes and students staffing the effort. This slower pace also allowed them to add several new features that would end up improving the design considerably.

The key difference was simpler cache circuitry that eliminated one line per bit (from three to two), dramatically shrinking the size of the register file. The change also required much tighter bus timing, but this was a small price to pay, and to meet the new timing requirements several other parts of the design were sped up as well.

The savings due to the new design were tremendous. Whereas Gold contained a total of 78 registers in six windows, Blue contained 138 registers broken into eight windows of 16 registers each, with another 10 globals (8 × 16 + 10 = 138). This expansion of the register file increased the chance that a given procedure could fit all of its local storage in registers, as well as increasing the possible nesting depth. Nevertheless, the larger register file used fewer transistors, and the final Blue design, fabricated as RISC II, implemented the entire RISC instruction set with only 39,000 transistors.

The other major change was to include an "instruction-format expander", which invisibly "up-converted" 16-bit instructions into a 32-bit format. This allowed smaller instructions, typically things with one or no operands, like NOP, to be stored in memory in a smaller 16-bit format, with two such instructions packed into a single machine word. The instructions would be invisibly expanded back to 32-bit versions before they reached the ALU, meaning that no changes were needed in the core logic. This simple technique yielded a surprising 30% improvement in code density, making an otherwise identical program on Blue run faster than on Gold due to the decreased number of memory accesses.
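
A minimal Python sketch of the idea follows. The field layout here (a 4-bit opcode and a 12-bit operand, widened into a 32-bit word) is a hypothetical encoding chosen for illustration, not the actual RISC II instruction formats:

    def expand16(short):
        # Widen a 16-bit instruction by spreading its 4-bit opcode and
        # 12-bit operand into the corresponding 32-bit fields.
        opcode = (short >> 12) & 0xF
        operand = short & 0x0FFF
        return (opcode << 28) | operand   # unused fields stay zero

    def fetch(word):
        # Split one 32-bit memory word into two expanded instructions,
        # so the core logic only ever sees the full 32-bit format.
        hi, lo = (word >> 16) & 0xFFFF, word & 0xFFFF
        return expand16(hi), expand16(lo)

    packed = (0x1005 << 16) | 0x2007      # two short instructions per word
    for insn in fetch(packed):
        print(f"{insn:#010x}")            # prints 0x10000005, 0x20000007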

RISC II proved to be much more successful in silicon and, in testing, outperformed just about any minicomputer on just about every task. For instance, performance ranged from 85% of VAX speed to 256% on a variety of loads; that is, at its best the RISC II ran over two and a half times as fast as the VAX. RISC II was also benchmarked against the famous Motorola 68000, then considered to be the best commercial chip implementation, and outperformed it by 140% to 420%.

Follow-ons

Work on the original RISC designs ended with RISC II, but the concept itself lived on at Berkeley. The basic core was re-used in SOAR in 1984, basically a RISC converted to run Smalltalk (in the same way that it could be claimed RISC ran C), and later in the similar VLSI-BAM, which ran Prolog instead of Smalltalk. Another effort was SPUR, a full set of chips needed to build a complete 32-bit workstation.

RISC is less famous, but more influential, for being the basis of the commercial SPARC processor design from Sun Microsystems. It was the SPARC that first clearly demonstrated the power of the RISC concept; when the first SPARCstations shipped, they outperformed anything on the market. This sent virtually every Unix vendor scrambling for a RISC design of its own, resulting in designs like the DEC Alpha and PA-RISC, while SGI purchased MIPS Computer Systems. Soon most large chip vendors had followed, working on efforts like the Motorola 88000, Fairchild Clipper, AMD 29000 and the PowerPC.

Techniques developed for and alongside the idea of the reduced instruction set have also been adopted in successively more powerful implementations and extensions of the traditional "complex" x86 architecture. Much of a modern microprocessor's transistor count is devoted to large caches, numerous pipeline stages, superscalar instruction dispatch, branch prediction and other modern techniques which are applicable regardless of instruction architecture. The amount of silicon dedicated to instruction decoding on a modern x86 implementation is proportionately quite small, so the distinction between "complex" and RISC processor implementations has become blurred.
