Burroughs large systems
The Burroughs large systems were the largest of three series of Burroughs Corporation mainframe computers. The first machine, the B5000, was designed in 1961. Computers using this architecture were still in production in 2005 as the Unisys ClearPath/MCP machines.
B5000
The first member of the series, the B5000, was designed beginning in 1961 by a team under the leadership of Robert (Bob) Barton. It was a unique machine, well ahead of its time. It has been listed by the influential computer architect John Mashey as one of the architectures that he admires the most. "I always thought it was one of the most innovative examples of combined hardware/software design I've seen, and far ahead of its time."[1]
Unique features
- Hardware was designed to support software requirements
- Hardware designed to exclusively support high-level languages
- Support for symmetric multiprocessing
- No assembler (all system software written in an extended variety of ALGOL)
- Support for other languages such as COBOL
- Powerful string manipulation
- Stack architecture (to support high-level algorithmic languages)
- No programmer-accessible registers
- Support for high-level operating system (MCP, Master Control Program)
- Data-driven tagged and descriptor-based architecture
- Secure architecture prohibiting unauthorized access of data or disruptions to operations
- Early error-detection supporting development and testing of software
- First commercial implementation of virtual memory (10 years before IBM "invented" it)
- Simplified instruction set
- Its successors still exist in the Unisys ClearPath/MCP machines
- Influential on many of today's computing techniques
In the following discussion, the machine designations B5000, A Series, and ClearPath/MCP are used interchangeably.
Unique system design
The B5000 was revolutionary at the time in that the architecture and instruction set were designed with the needs of software taken into consideration. This was a large departure from the computer system design of the time, in which a processor and its instruction set would be designed first and then handed over to the software people.
Language support
The B5000 was designed to exclusively support high-level languages. This was at a time when such languages were just coming to prominence with FORTRAN and then COBOL. FORTRAN and COBOL are both deficient languages when it comes to modern software techniques, so a newer, mostly untried language was adopted, ALGOL-60. The ALGOL dialect chosen for the B5000 was Elliott ALGOL, first designed and implemented by C.A.R. Hoare on an Elliott 503. This was a practical extension of ALGOL with I/O instructions (which ALGOL had ignored) and powerful string processing instructions. Hoare's famous Turing Award lecture was on this subject.
ALGOL-60 was designed by experts who had learned a great deal from their experience designing FORTRAN and other earlier languages. The design of ALGOL used the formal syntax description language BNF (Backus-Naur Form), which resulted in a very regular language with a clean syntax. It is an ancestor of most modern languages, including Pascal, C, Simula, Ada, Eiffel, and many others.
Thus the B5000 was based on a very powerful language. Most other vendors could only dream of implementing an ALGOL compiler and most in the industry dismissed ALGOL as being unimplementable. However, a bright young student named Donald Knuth had previously implemented ALGOL-58 on an earlier Burroughs machine during the three months of his summer break. Many wrote ALGOL off, mistakenly believing that high-level languages could not have the same power as assembler, and thus not realizing ALGOL's potential as a systems programming language, an opinion not revised until the development of the programming language C.
The Burroughs ALGOL compiler was extremely fast, which impressed Edsger Dijkstra when he submitted a program to be compiled at the B5000 Pasadena plant. His deck of cards was compiled almost immediately, and he promptly wanted several machines for his university back in Europe.
History
The first of the Burroughs large systems was the B5000. Designed in 1961, it was a second-generation computer using discrete transistor logic and core memory. Over the next 25 years the successor machines followed the hardware development trends, re-implementing the architecture in new logic with the B5500, B6500, B5700, B6700, B7700, B6800, B7800, and finally the Burroughs A Series. After Burroughs became part of Unisys, Unisys continued to develop new machines based on the MCP CMOS ASIC. These machines were the Libra 100 through the Libra 500, with the Libra 590 announced in 2005. Later Libras, including the 590, also incorporate Intel Xeon processors and can run the Burroughs large systems architecture in emulation as well as on the MCP CMOS processors. It is unclear whether Unisys will continue development of new MCP CMOS ASICs.
ALGOL
The B-Series architecture (now the A-Series) is an ALGOL stack architecture, unlike linear architectures such as the PDP-11 and Motorola processors, or segmented architectures such as those from Intel and Texas Instruments.
ALGOL is a systems-programming language, and while the B5000 was designed specifically around ALGOL, this was only a starting point. Other business-oriented languages such as COBOL were also well supported, most notably by the powerful string operators which were included for the development of fast compilers.
The ALGOL used on the B5000 is actually an extended ALGOL, which includes powerful string manipulation instructions. It also has an elegant preprocessing DEFINE mechanism which is much neater than the #defines found in C. Another extension of note is the EVENT data type, which facilitates coordination between processes, together with INTERRUPTs, which may be attached to events to handle them. Thus an interrupt handler can be coded as an ALGOL code block in order to handle exceptions such as numeric overflow and other user-defined EVENTs.
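Burroughs-specific syntax aside, the interrupt-as-code-block idea has a rough present-day analogue in attaching a handler routine to an exceptional condition. The sketch below is not Burroughs code; it is a C analogy using POSIX signals, with the handler standing in for the ALGOL interrupt block and an integer divide fault standing in for a numeric-overflow EVENT (both stand-ins are assumptions made for the example).

    #include <signal.h>
    #include <unistd.h>

    /* Plays the role of the interrupt block attached to the event. */
    static void on_fault(int sig)
    {
        (void)sig;
        static const char msg[] = "arithmetic fault caught\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);   /* async-signal-safe reporting */
        _exit(1);
    }

    int main(void)
    {
        signal(SIGFPE, on_fault);     /* attach the handler to the fault condition */
        volatile int zero = 0;
        return 1 / zero;              /* typically raises SIGFPE */
    }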
User-level ALGOL does not include many of the more dangerous facilities needed by the operating system. Two further levels of more powerful facilities exist that are not required for normal user-level algorithmic processing. The first level is for writing the operating system.
ESPOL and NEWP
Originally, the B5000 MCP operating system was written in an extension of extended ALGOL called ESPOL (Executive Systems Programming Oriented Language). This was replaced in the mid-to-late 1970s by a language called NEWP. It is not clear what the name originally stood for, although a common (perhaps apocryphal) story around Burroughs at the time suggested it came from "No Executive Washroom Privileges." Circa 1976, Bob Jardine of Burroughs allegedly named the language "NEWP" after being asked, yet again, "does it have a name yet?" and answering "noooop"; he adopted that as the name. NEWP, too, was an ALGOL extension, but it was more secure than ESPOL. In fact, all unsafe constructs are rejected by the NEWP compiler unless a block is specifically marked to allow those instructions. Such marking of blocks provides a multi-level protection mechanism.
NEWP programs that contain unsafe constructs are initially non-executable. The security administrator of a system is able to "bless" such programs and make them executable, but normal users (even privileged users) are not able to do this. While NEWP can be used to write general programs (and has a number of features designed for large software projects), it does not support everything ALGOL does.
NEWP has code control features beyond those of ALGOL, such as INLINE procedures. This feature goes beyond ALGOL's DEFINE and allows short logical constructs to be defined logically as procedures but generated physically as in-line code streams. This aids both program comprehensibility and code efficiency.
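The distinction NEWP draws between a textual define and an INLINE procedure can be illustrated in C terms. This is an analogy rather than NEWP code, and the names are invented for the example: a function-like macro re-evaluates its arguments at every mention, while an inline procedure keeps ordinary call semantics even though the compiler may expand it in place.

    #define SQUARE_MACRO(x) ((x) * (x))       /* textual substitution */

    static inline int square_inline(int x)    /* procedure semantics, expanded in line */
    {
        return x * x;
    }

    int demo(int i)
    {
        int a = SQUARE_MACRO(i++);    /* i is incremented twice: the macro-parameter pitfall */
        int b = square_inline(i++);   /* i is incremented exactly once */
        return a + b;
    }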
NEWP has a number of facilities to enable large-scale software projects, such as the operating system, including named interfaces (functions and data), groups of interfaces, modules, and super-modules. Modules group data and functions together, allowing easy access to the data as global within the module. Interfaces allow a module to import and export functions and data. Super-modules allow modules to be grouped.
DCALGOL and Message Control Systems (MCS)
The second intermediate level of security, between operating system code (in NEWP) and user programs (in ALGOL), is for middleware programs, which are written in DCALGOL (data communications ALGOL). This is used for message reception and dispatching: it removes messages from input queues and places them on queues for other processes in the system to handle. Middleware such as COMS (introduced around 1984) receives messages from around the network and dispatches them to specific handling processes or to an MCS (Message Control System) such as CANDE ("Compile AND Edit," the program development environment).
MCSs are items of software worth noting: they control user sessions and keep track of user state without having to run per-user processes, since a single MCS stack can be shared by many users. Load balancing can also be achieved at the MCS level. For example, an MCS might be configured to handle 30 users per stack, in which case 31 to 60 users need two stacks, 61 to 90 users need three stacks, and so on. This gives B5000 machines a great performance advantage as servers, since a new user process (and thus a new stack) does not have to be started each time a user attaches to the system. Thus users (whether they require state or not) can be serviced efficiently by MCSs. MCSs also provide the backbone of large-scale transaction processing.
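The stack-per-30-users arithmetic above amounts to a ceiling division. A minimal, purely illustrative C sketch (the function name is invented):

    /* Number of MCS stacks needed if each stack serves users_per_stack users. */
    unsigned stacks_needed(unsigned users, unsigned users_per_stack)
    {
        if (users == 0)
            return 0;
        return (users + users_per_stack - 1) / users_per_stack;   /* ceiling division */
    }
    /* stacks_needed(45, 30) == 2, stacks_needed(61, 30) == 3 */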
DMALGOL and databases
Another variant of ALGOL is also worth noting – DMALGOL (Data Management ALGOL). DMALGOL is a language extended for compiling database systems and for generating code from database descriptions. Thus a database designer and administrator compiles their database description and this generates the DMALGOL code tailored for the tables and indexes required. An administrator never needs to write DMALGOL themselves. Database access is provided in normal user-level programs such as ALGOL, COBOL and others, extended with database instructions and transaction processing directives. The most notable feature of DMALGOL is that on top of ALGOL's elegant define mechanism as already mentioned, DMALGOL has an even more sophisticated preprocessing mechanism to generate tables and indexes.
DMALGOL has extensive preprocessing, allowing fairly sophisticated programs to be written and executed in the preprocessing phase alone. DMALGOL was used to provide tailored access routines for DMSII databases. When the user defined a database, the schema would be translated into tailored DMALGOL access routines and then compiled. This meant that, unlike in other DBMS implementations, there was often no need for database-specific if/then/else code at run time. Roy Guck of Burroughs was one of the main developers of DMSII.
Stack architecture
In many early systems and languages, programmers were often told not to make their routines too small, because procedure calls and returns were expensive: a number of operations had to be performed to maintain the stack. The B5000 was designed as a stack machine: all program data except for arrays (which include strings and objects) was kept on the stack. This meant that stack operations were optimized for efficiency. As a stack-oriented machine, there are no programmer-addressable registers.
Multitasking is also very efficient on B5000 machines. There is one specific instruction to perform process switches: MVST (move stack). Each stack represents a process (task or thread), and tasks can become blocked waiting on resource requests (which includes waiting for a processor to run on if the task has been interrupted because of preemptive multitasking). User programs cannot issue an MVST, and there is only one line of code in the operating system where this is done.
So a process switch proceeds something like this – a process requests a resource that is not immediately available, maybe a read of a record of a file from a block which is not currently in memory, or the system timer has triggered an interrupt. The operating system code is entered and run on top of the user stack. It turns off user process timers. The current process is placed in the appropriate queue for the resource being requested, or the ready queue waiting for the processor if this is a preemptive context switch. The operating system determines the first process in the ready queue and invokes the instruction move_stack, which makes the process at the head of the ready queue active.
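In rough outline, that sequence could be sketched as below. This is not MCP source: the types and helper functions are hypothetical stand-ins, and move_stack represents the single MVST step that makes the chosen stack active.

    typedef struct Process { struct Process *next; void *stack; } Process;
    typedef struct { Process *head, *tail; } Queue;

    extern Queue ready_queue;                  /* hypothetical global ready queue */
    extern void enqueue(Queue *q, Process *p);
    extern Process *dequeue(Queue *q);
    extern void stop_user_timers(Process *p);
    extern void move_stack(void *stack);       /* stands in for the MVST instruction */

    void switch_process(Process *current, Queue *resource_queue)
    {
        stop_user_timers(current);
        if (resource_queue != NULL)
            enqueue(resource_queue, current);  /* block until the resource is available */
        else
            enqueue(&ready_queue, current);    /* preempted: back to the ready queue */

        Process *next = dequeue(&ready_queue); /* first process able to run */
        move_stack(next->stack);               /* make its stack the active one */
    }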
Stack speed and performance
Some detractors of the B5000 architecture believed that stack architecture was inherently slow compared to register-based architectures. The trick to system speed is to keep data as close to the processor as possible. In the B5000 stack, this was done by assigning the top two positions of the stack to two registers, A and B. Most operations are performed on those two top-of-stack positions. On machines faster than the B5000, more of the stack may be kept in registers or cache near the processor.
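The effect of holding the two topmost stack cells in registers can be imitated with a small C toy. This is an illustration of the caching idea, not the actual B5000 register model; all names are invented, and a Stack should be zero-initialized (Stack s = {0};) before use.

    #include <stdint.h>

    enum { STACK_MAX = 64 };

    typedef struct {
        int64_t a, b;               /* top-of-stack registers A and B */
        int     a_full, b_full;
        int64_t mem[STACK_MAX];     /* in-memory portion of the stack */
        int     sp;
    } Stack;

    static void push(Stack *s, int64_t v)
    {
        if (s->b_full)
            s->mem[s->sp++] = s->b; /* spill B to memory only when both registers are occupied */
        s->b = s->a;  s->b_full = s->a_full;
        s->a = v;     s->a_full = 1;
    }

    /* ADD combines A and B without touching memory (assumes two values were pushed). */
    static void add(Stack *s)
    {
        s->a += s->b;
        s->b_full = s->sp > 0;
        if (s->b_full)
            s->b = s->mem[--s->sp]; /* refill B from memory if anything was spilled */
    }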
Thus the designers of the current B5000 systems can optimize using whatever is the latest technique, and programmers do not have to adjust their code for it to run faster; they do not even need to recompile, thus protecting their software investment. Some programs have been known to run for years across many processor upgrades. Such speed-up is limited on register-based machines.
Another argument for speed, as promoted by the RISC designers, was that processor speed is considerably greater if everything is on a single chip. This was a valid point in the 1970s, when more complex architectures such as the B5000 required too many transistors to fit on a single chip. However, this is not the case today: every B5000 successor machine now fits on a single chip, together with performance-support techniques such as caches and instruction pipelines.
In fact, the A Series line of B5000 successors included the first single chip mainframe, the Micro-A of the late 1980s. This "mainframe" chip (named SCAMP for Single-Chip A-series Mainframe Processor) sat on an Intel-based plug-in PC board.
Here is an example of how programs map to the stack architecture:

   begin
      % This is lexical level 2 (level zero is reserved for the operating system
      % and level 1 for code segments).

      % At level 2 we place global variables for our program.
      integer i, j, k;
      real f, g;
      array a [0:9];

      procedure p (real p1, p2);
         value p1;        % p1 passed by value, p2 implicitly passed by reference.
      begin
         % This block is at lexical level 3.
         real r1, r2;

         r2 := p1 * 5;
         p2 := r2;        % This sets 'g' to the value of r2.
         p1 := r2;        % This sets 'p1' to r2, but not 'f'. Since this overwrites the
                          % original value of f in p1, it most likely indicates an error.
                          % A few of ALGOL's successors have corrected this situation by
                          % making value parameters read only; most have not.

         if r2 > 10 then
         begin
            % A variable declared here makes this lexical level 4.
            integer n;

            % The declaration of a variable makes this a block, which will invoke
            % some stack-building code. Normally you won't declare variables here,
            % in which case this would be a compound statement, not a block.

            ... <== sample stack is executing somewhere here.
         end
      end;

      .....
      p (f, g);
   end
Each stack frame corresponds to a lexical level in the current execution environment. As you can see, the lexical level is the static textual nesting of a program, not the dynamic call nesting. The visibility rules of ALGOL, a language designed for single-pass compilers, mean that only variables declared before the current position are visible at that point in the code, apart from forward declarations. All variables declared in enclosing blocks are visible. A further case is that variables of the same name may be declared in inner blocks; these effectively hide the outer variables, which become inaccessible.
Since lexical nesting is static, it is extremely rare to find a program nested more than five levels deep, and it could be argued that such programs would be poorly structured. B5000 machines allow nesting of up to 32 levels. Procedures can be invoked in four ways – normal, call, process, and run.
The normal invocation invokes a procedure in the normal way any language invokes a routine, by suspending the calling routine until the invoked procedure returns.
The call mechanism invokes a procedure as a coroutine. Coroutines have partner tasks, where control is explicitly passed between the tasks by means of a CONTINUE instruction. These are synchronous processes.
The process mechanism invokes a procedure as an asynchronous task, and in this case a separate stack is set up starting at the lexical level of the processed procedure. As an asynchronous task, there is no control over exactly when control will be passed between the tasks, unlike coroutines. Note also that the processed procedure still has access to the enclosing environment, and this is a very efficient IPC (Inter-Process Communication) mechanism. Since two or more tasks now have access to common variables, the tasks must be synchronized to prevent race conditions, which is handled by the EVENT data type: processes can WAIT on an event until it is caused by another cooperating process. EVENTs also allow for mutual-exclusion synchronization through the PROCURE and LIBERATE functions. If for any reason the child task dies, the calling task can continue; however, if the parent process dies, then all child processes are automatically terminated.
The last invocation type is run. This runs a procedure as an independent task which can continue on after the originating process terminates. For this reason, the child process cannot access variables in the parent's environment, and all parameters passed to the invoked procedure must be call-by-value.
Thus Burroughs Extended ALGOL had all of the multi-processing and synchronization features of later languages like Ada, with the added benefit that support for asynchronous processes was built into the hardware level.
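For readers more familiar with present-day threading interfaces, the sketch below gives a loose POSIX-threads analogy for the style just described; it is not Burroughs code. PROCURE and LIBERATE correspond roughly to acquiring and releasing a lock, and WAITing on an EVENT until it is caused corresponds roughly to waiting on and signalling a condition variable. The Event type and function names are invented, and the mutex and condition variable must be initialized in the usual pthread way.

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        bool            happened;
    } Event;

    void event_wait(Event *e)          /* roughly: WAIT(e) */
    {
        pthread_mutex_lock(&e->lock);
        while (!e->happened)
            pthread_cond_wait(&e->cond, &e->lock);
        pthread_mutex_unlock(&e->lock);
    }

    void event_cause(Event *e)         /* roughly: cause the event */
    {
        pthread_mutex_lock(&e->lock);
        e->happened = true;
        pthread_cond_broadcast(&e->cond);
        pthread_mutex_unlock(&e->lock);
    }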
One last possibility is that a procedure may be declared INLINE; that is, when the compiler sees a reference to it, the code for the procedure is generated in line to save the overhead of a procedure call. This is best done for small pieces of code and is like a define, except that you do not get the problems with parameters that you can get with defines. This facility is available in NEWP.
In the example program only normal calls are used, so all the information will be on a single stack. For asynchronous calls, the stack would be split into multiple stacks so that the processes share data but run asynchronously.
A stack hardware optimization is the provision of D (or "display") registers. These are registers that point to the start of each called stack frame. These registers are updated automatically as procedures are entered and exited and are not accessible by any software. There are 32 D registers, which is what limits programs to 32 levels of lexical nesting.
Consider how we would access a lexical level 2 (D[2]) global variable from lexical level 5 (D[5]). Suppose the variable is 6 words away from the base of lexical level 2. It is thus represented by the address couple (2, 6). If we don't have D registers, we have to look at the control word at the base of the D[5] frame, which points to the frame containing the D[4] environment. We then look at the control word at the base of this environment to find the D[3] environment, and continue in this fashion until we have followed all the links back to the required lexical level. Note this is not the same path as the return path back through the procedures which have been called in order to get to this point. (The architecture keeps both the data stack and the call stack in the same structure, but uses control words to tell them apart.)
As you can see, this is quite inefficient just to access a variable. With D registers, the D[2] register points at the base of the lexical level 2 environment, and all we need to do to generate the address of the variable is to add its offset from the stack frame base to the frame base address in the D register. (There is an efficient linked list search operator LLLU, which could search the stack in the above fashion, but the D register approach is still going to be faster.) With D registers, access to entities in outer and global environments is just as efficient as local variable access.
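The contrast can be written down as a short, compiler-textbook style sketch in C. This is not B5000 hardware; the Frame type and field names are invented for the example. One routine chases static links frame by frame, the other does a single indexed access through a display of frame-base pointers. The stack layout for the example program, annotated with tags and address couples, follows below.

    #include <stdint.h>

    typedef struct Frame {
        struct Frame *static_link;   /* link to the enclosing lexical environment */
        int64_t       slots[16];     /* variables of this frame */
    } Frame;

    /* Without a display: walk back (current_level - target_level) static links. */
    int64_t load_via_links(Frame *current, int current_level, int target_level, int offset)
    {
        while (current_level-- > target_level)
            current = current->static_link;
        return current->slots[offset];
    }

    /* With a display: D[target_level] already points at the right frame base. */
    int64_t load_via_display(Frame *display[], int target_level, int offset)
    {
        return display[target_level]->slots[offset];   /* e.g. address couple (2, 6): D[2] + 6 */
    }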
D          Tag   Data          Comments
register
           0  |     n     |    The integer 'n', address couple (4, 1)
              |-----------|
D[4] ==>   3  |   MSCW    |    The Mark Stack Control Word containing the link to D[3]
              |===========|
           0  |    r2     |    The real 'r2', address couple (3, 5)
              |-----------|
           0  |    r1     |    The real 'r1', address couple (3, 4)
              |-----------|
           1  |    p2     |    An SIRW reference to 'g' at (2, 6)
              |-----------|
           0  |    p1     |    The parameter 'p1' from the value of 'f', address couple (3, 2)
              |-----------|
           3  |    RCW    |    A return control word
              |-----------|
D[3] ==>   3  |   MSCW    |    The Mark Stack Control Word containing the link to D[2]
              |===========|
           1  |     a     | ==> [ten-word memory block]   The array 'a', address couple (2, 7)
              |-----------|
           0  |     g     |    The real 'g', address couple (2, 6)
              |-----------|
           0  |     f     |    The real 'f', address couple (2, 5)
              |-----------|
           0  |     k     |    The integer 'k', address couple (2, 4)
              |-----------|
           0  |     j     |    The integer 'j', address couple (2, 3)
              |-----------|
           0  |     i     |    The integer 'i', address couple (2, 2)
              |-----------|
           3  |    RCW    |    A return control word
              |-----------|
D[2] ==>   3  |   MSCW    |    The Mark Stack Control Word containing the link to the previous stack frame
              =============    Stack bottom
If we had invoked the procedure p as a coroutine, or via a process instruction, the D[3] environment would have become a separate D[3]-based stack. This means that asynchronous processes still have access to the D[2] environment, as implied in ALGOL program code. Taking this one step further, a totally different program could call another program's code, creating a D[3] stack frame pointing to another process's D[2] environment on top of its own process stack. In an instant the whole address space seen by the executing code changes: the D[2] environment on the process's own stack is no longer directly addressable, and the D[2] environment in the other process's stack becomes directly addressable instead. This is how library calls are implemented. At such a cross-stack call, the calling code and called code could even originate from programs written in different source languages and be compiled by different compilers.
Note that the D[1] and D[0] environments do not occur in the current process's stack. The D[1] environment is the code segment dictionary, which is shared by all processes running the same code. The D[0] environment represents entities exported by the operating system.
Stack frames do not even have to exist in a process stack. This feature was used early on for file I/O optimization: the FIB (file information block) was linked into the display registers at D[1] during I/O operations. In the early nineties, this ability was implemented as a language feature as STRUCTURE BLOCKs and, combined with library technology, as CONNECTION BLOCKs. The ability to link a data structure into the display register address scope implemented a form of object orientation. Thus, the B5000 actually used a form of object orientation long before the term was ever used.
One nice thing about the stack structure is that if a program does happen to fail, a stack dump is taken and it is very easy for a programmer to find out exactly what the state of a running program was. Compare that to core dumps and exchange packages of other systems.
Another thing about the stack structure is that programs are implicitly recursive. FORTRAN was not a recursive language, and perhaps one stumbling block to people's understanding of how ALGOL was to be implemented was how to implement recursion. On the B5000, this was not a problem; in fact, they had the reverse problem: how to stop programs from being recursive. In the end they didn't bother. Even the Burroughs FORTRAN compiler was recursive, since it was unproductive to stop it from being so.
Thus Burroughs FORTRAN was arguably better than any other implementation of FORTRAN.[citation needed] In fact, Burroughs became known for its superior compilers and implementations of languages, including the object-oriented Simula (a superset of ALGOL), and Kenneth Iverson, the designer of APL, declared that the Burroughs implementation of APL was the best he had seen.[citation needed] John McCarthy, the language designer of LISP, disagreed; since LISP was based on modifiable code[citation needed], he did not like the unmodifiable code of the B5000[citation needed], but most LISP implementations would run in an interpretive environment anyway.
Note also that stacks automatically used as much memory as a process needed. There was no need to perform SYSGENs on Burroughs systems, as on competing systems, in order to preconfigure memory partitions in which to run tasks. In fact, Burroughs really championed "plug and play", in that extra peripherals could be plugged into the system without having to recompile the operating system with new peripheral tables. Thus these machines could be seen as forerunners of today's USB and FireWire devices.
Tag-based architecture
In most people's minds, the most defining aspect of the B5000 is that it is a stack machine, as treated above. However, two other very important features of the architecture are that it is tag-based and descriptor-based.
In the original B5000, a bit in each word was set aside to identify the word as a code or data word. This was a security mechanism to stop programs from being able to corrupt code, in the way that hackers do today.
An advantage to unmodifiable code is that B5000 code is fully reentrant: it does not matter how many users are running a program, there will only be one copy of the code in memory, thus saving substantial memory; these machines are actually very memory and disk efficient.
Later, when the B6500 was designed, it was realized that the one-bit code/data distinction was a powerful idea, and it was extended to a three-bit tag held outside the 48-bit data word. The data bits are bits 0-47 and the tag is in bits 48-50. Bit 48 was the read-only bit, so odd tags indicated control words that could not be written by a user-level program. Code words were given tag 3. Here is a list of the tags and their function:
Tag | Word kind  | Description
----|------------|---------------------------------------------------------------------------
 0  | Data       | All kinds of user and system data (text data and single precision numbers)
 2  | Double     | Double precision data
 4  | SIW        | Step Index Word (used in loops)
 6  |            | Uninitialized data
 1  | IRW        | Indirect Reference Word
 1  | SIRW       | Stuffed Indirect Reference Word
 3  | Code       | Program code word
 3  | MSCW       | Mark Stack Control Word
 3  | RCW        | Return Control Word
 3  | TOSCW      | Top of Stack Control Word
 3  | SD         | Segment Descriptor
 5  | Descriptor | Data block descriptors
 7  | PCW        | Program Control Word
Note: Internally, some of the machines had 60 bit words, with the extra bits being used for engineering purposes such as a Hamming code error-correction field, but these were never seen by programmers.
Note: The current incarnation of these machines, the Unisys ClearPath, has extended the tag further to four bits. The microcode level that specified four-bit tags was referred to as level Gamma.
Even-tagged words are user data which can be modified by a user program as user state. Odd-tagged words are created and used directly by the hardware and represent a program's execution state. Since these words are created and consumed by specific instructions or the hardware, the exact format of these words can change between hardware implementations, and user programs do not need to be recompiled, since the same code stream will produce the same results even though the system word format may have changed.
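As a concrete illustration, a simulator might pack the 48 data bits and the 3 tag bits of a word into the low 51 bits of a 64-bit host integer. The C sketch below rests on that packing assumption (it is not the physical memory format) and shows the tag extraction and the odd-tag, read-only test described above.

    #include <stdbool.h>
    #include <stdint.h>

    #define DATA_MASK ((UINT64_C(1) << 48) - 1)

    static unsigned tag_of(uint64_t word)  { return (unsigned)((word >> 48) & 0x7); }
    static uint64_t data_of(uint64_t word) { return word & DATA_MASK; }

    /* Odd tags (bit 48 set) mark control words that user-level code may not overwrite. */
    static bool user_writable(uint64_t word) { return (tag_of(word) & 1) == 0; }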
Tag 1 words represent on-stack data addresses. The normal IRW simply stores an address couple to data on the current stack. The SIRW references data on any stack by including a stack number in the address.
Tag 5 words are descriptors, which are more fully described in the next section. Tag 5 words represent off-stack data addresses.
Tag 7 is the program control word which describes a procedure entry point. When operators hit a PCW, the procedure is entered. The ENTR operator explicitly enters a procedure (non-value-returning routine). Functions (value-returning routines) are implicitly entered by operators such as value call (VALC). Note that global routines are stored in the D[2] environment as SIRWs that point to a PCW stored in the code segment dictionary in the D[1] environment. The D[1] environment is not stored on the current stack because it can be referenced by all processes sharing this code. Thus code is reentrant and shared.
Tag 3 represents code words themselves, which won't occur on the stack. Tag 3 is also used for the stack control words MSCW, RCW, TOSCW.
Descriptor-based architecture
See Descriptors in Burroughs large systems
Instruction set
See Burroughs large systems instruction set
Multiple processors
The B5000 line was also a pioneer in having multiple processors connected together on a high-speed bus. The B7000 line could have up to eight processors, as long as at least one was an I/O module. Notable operators are:
HEYU — send an interrupt to another processor
RDLK — Low-level semaphore operator: Load the A register with the memory location given by the A register and place the value in the B register at that memory location in a single uninterruptible cycle
WHOI — Processor identification
IDLE — Idle until an interrupt is received
Note that RDLK is a very low-level way of synchronizing between processors. The high level used by user programs is the EVENT data type. The EVENT data type did have some system overhead. To avoid this overhead, a special locking technique called Dahm locks (named after a Burroughs software guru, Dave Dahm) can be used.
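As a present-day analogy (not Burroughs code), an atomic swap of the kind RDLK performs is enough to build a simple spin lock. The C11 sketch below is illustrative and the names are invented.

    #include <stdatomic.h>

    typedef atomic_int spinlock_t;           /* 0 = free, 1 = held */

    void spin_acquire(spinlock_t *lock)
    {
        /* Swap in 1 atomically; if the old value was already 1, another processor holds the lock. */
        while (atomic_exchange(lock, 1) == 1)
            ;                                /* spin */
    }

    void spin_release(spinlock_t *lock)
    {
        atomic_store(lock, 0);
    }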
Influence of the B5000
Undoubtedly, the most direct influence of the B5000 is the current Unisys ClearPath range of mainframes, which are direct descendants of the B5000 and still have the MCP operating system after 40 years of consistent development. This architecture is now called emode (for emulation mode), since the B5000 architecture can be implemented on many platforms. There was also going to be an nmode (native mode), but this was dropped, so you may often hear the B5000 successor machines referred to as "emode machines".
B5000 machines are programmed exclusively in high-level languages; there is no assembler.
The B5000 stack architecture inspired Chuck Moore, the designer of the programming language FORTH, who encountered the B5500 while at MIT. In Forth - The Early Years, Moore described the influence, noting that FORTH's DUP, DROP and SWAP came from the corresponding B5500 instructions (DUPL, DLET, EXCH).
Hewlett-Packard systems were influenced by the B5000, since some Burroughs engineers later found employment designing machines for HP, and these too were stack machines. Bob Barton's work on reverse Polish notation (RPN) found its way into HP calculators beginning with the 9100A, and notably the HP-35 and subsequent calculators.
Bob Barton was also very influential on Alan Kay. Kay was also impressed by the data-driven tagged architecture of the B5000 and this influenced his thinking in his developments in object-oriented programming and Smalltalk.
Another facet of the B5000 architecture is that it is a secure architecture that runs directly on hardware. This technique has descendants in the virtual machines of today in their attempts to provide secure environments. One notable such product is the Java JVM which provides a secure sandbox in which applications run.
References
- The Extended ALGOL Primer (Three Volumes), Donald J. Gregory.
- Computer System Organization: The B5700/B6700 Series, Elliott I. Organick, Academic Press (1973).
- Computer Architecture: A Structured Approach, R. Doran, Academic Press (1979).
- Stack Computers: The New Wave, Philip J. Koopman, available at: [1]
- B5500, B6500, B6700, B6800, B6900, B7700 manuals at: bitsavers.org
- ^ John Mashey (2006-08-15). "Admired designs / designs to study". comp.arch. (Google Groups). Retrieved on 2006-11-25.