Stack machine
In computer science, a stack machine is a model of computation in which the computer's memory takes the form of one or more stacks. The term also refers to an actual computer implementing or simulating the idealized stack machine.
The term can also refer to a real or simulated machine whose instruction set includes memory-reference instructions, which push values from memory locations onto a stack and pop values from the stack into memory locations, and arithmetic instructions, which operate on the values at the top of the stack and replace them with the result. Programs written for this type of stack machine generally have higher code density than equivalent programs written for other instruction sets.
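For illustration, here is a minimal sketch in Python of that style of instruction set; the mnemonics (PUSH, POP, ADD, MUL) and the memory model are invented for this example rather than taken from any real machine:

    # Minimal sketch of a memory-to-stack instruction set (names invented).
    def run(program, memory):
        stack = []
        for op, *args in program:
            if op == "PUSH":    # push the value at a memory address
                stack.append(memory[args[0]])
            elif op == "POP":   # pop the top of stack into memory
                memory[args[0]] = stack.pop()
            elif op == "ADD":   # replace the top two values with their sum
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":   # replace the top two values with their product
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return memory

    # Compute memory[3] = (memory[0] + memory[1]) * memory[2]:
    mem = run([("PUSH", 0), ("PUSH", 1), ("ADD",),
               ("PUSH", 2), ("MUL",), ("POP", 3)],
              {0: 2, 1: 3, 2: 4})
    print(mem[3])  # 20

Note how the arithmetic instructions carry no operand addresses at all, which is the source of the code density advantage mentioned above.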
Stack model of computation
A stack machine with a single stack is equivalent to a deterministic pushdown automaton (DPDA), a relatively weak model of computation. Deterministic pushdown automata are strictly weaker than nondeterministic pushdown automata (NPDA); for example, the language of even-length palindromes is recognized by an NPDA but by no DPDA. NPDAs are in turn strictly weaker than Turing machines.
A stack machine with multiple stacks, on the other hand, is equivalent to a Turing machine. For example, a 2-stack machine can emulate a TM by using one stack for the tape portion to the left of the TM's current head position and the other stack for the portion to the right.
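In this construction, one stack holds the tape contents to the left of the head, and the other holds the cell under the head together with everything to its right; moving the head simply pops from one stack and pushes onto the other. A minimal sketch in Python, where the blank symbol and helper names are assumptions made for this example:

    # Two-stack representation of a Turing machine tape (sketch).
    BLANK = 0  # blank tape symbol (an assumption for this sketch)

    def read(right):
        return right[-1] if right else BLANK  # cell under the head

    def write(right, symbol):
        if right:
            right[-1] = symbol
        else:
            right.append(symbol)

    def move_right(left, right):
        left.append(right.pop() if right else BLANK)

    def move_left(left, right):
        right.append(left.pop() if left else BLANK)

    # Head on the first 1 of the tape fragment 1 1 0
    # (the last list entry of `right` is the head cell):
    left, right = [], [0, 1, 1]
    move_right(left, right)
    print(read(right))  # 1 (the head is now on the second 1)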
Stack machine instruction sets
Machines with a stack-based instruction set can have one or more stacks. Most practical stack machines have two: a data stack, used for operations on data, and a return stack, which holds the return addresses for procedure calls. There has been very little research on using more than two stacks.
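A minimal sketch of this two-stack arrangement in Python (the opcodes and program encoding are invented for this example): CALL saves the address of the next instruction on the return stack before jumping, RET pops it back, and arithmetic touches only the data stack.

    # Sketch of the data-stack / return-stack split (opcodes invented).
    def run(program):
        data, rstack = [], []
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            pc += 1
            if op == "LIT":      # push a literal onto the data stack
                data.append(args[0])
            elif op == "ADD":    # arithmetic uses the data stack only
                b, a = data.pop(), data.pop()
                data.append(a + b)
            elif op == "CALL":   # save the return address on the return stack
                rstack.append(pc)
                pc = args[0]
            elif op == "RET":    # resume after the most recent CALL
                pc = rstack.pop()
            elif op == "HALT":
                break
        return data

    program = [
        ("LIT", 41),   # 0: push 41
        ("CALL", 3),   # 1: call the subroutine at address 3
        ("HALT",),     # 2
        ("LIT", 1),    # 3: subroutine: add 1 to the top of the data stack
        ("ADD",),      # 4
        ("RET",),      # 5
    ]
    print(run(program))  # [42]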
A machine that uses processor registers for operands can easily simulate a stack machine; such a simulation is sometimes called a virtual stack machine. The advantage of a stack-based instruction set architecture over a register-based one is that instructions are smaller, since they need not specify operand addresses, which generally yields smaller compiled programs. Stack machines, however, tend to execute instructions more slowly than register-based machines when the CPU can execute instructions faster than memory can deliver them, because partial results are stored to and fetched from the stack, which resides in external memory, during every calculation.
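As a minimal sketch of such a simulation (the memory layout is an assumption made for this example), the Python fragment below keeps the stack in an ordinary memory array, with a single "register" (sp) tracking the top of stack. Every operation stores to or loads from that memory, which is exactly the traffic described above:

    # Register machine simulating a stack machine (sketch; layout invented).
    memory = [0] * 64   # simulated RAM holding the stack
    sp = 0              # stack-pointer "register"

    def push(value):
        global sp
        memory[sp] = value  # store to memory on every push
        sp += 1

    def pop():
        global sp
        sp -= 1
        return memory[sp]   # load from memory on every pop

    # (a + b) * c with a=2, b=3, c=4:
    push(2); push(3)
    push(pop() + pop())
    push(4)
    push(pop() * pop())
    print(pop())  # 20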
Commercial implementations of stack machines generally include a small set of special-purpose, fixed-function registers for addressing the enclosing contexts. Such a design is arguably not a "pure" stack machine, but the extra registers make a stack machine CPU entirely suitable for general-purpose computing.
Examples of commercial use of stack machines include:
- instruction set architectures directly executed in hardware
  - the Burroughs large systems architecture (since 1961)
  - Unisys ClearPath/MCP systems (the latest implementation of the Burroughs large systems architecture, as of 2006)
  - Tandem Computers T/16
  - HP 3000 (Classic, not PA-RISC)
  - the Atmel MARC4 microcontroller [1]
  - several "Forth chips"[2] such as the "F21"[3] and the PSC1000[4]
  - the 4stack processor by Bernd Paysan, which has four stacks
  - a stack machine architected by Charles H. Moore, which holds a leading functional density benchmark
- virtual machines interpreted in software
  - the UCSD Pascal p-machine (which closely resembled the Burroughs architecture)
  - the Java virtual machine instruction set
  - the VES (Virtual Execution System) for the CIL (Common Intermediate Language) instruction set of ECMA 335 (the .NET environment)
  - the Forth programming language, in particular the Forth virtual machine
  - Adobe's PostScript
  - the Parakeet programming language
  - Sun Microsystems' SwapDrop programming language for Sun Ray smartcard identification
Note that the Burroughs architecture combines a stack machine with tagged memory: a few bits in every memory word describe the data type of the operand. Tagged memory requires fewer opcodes; for example, a single "add" instruction works for any combination of integer and floating-point operands. With fewer distinct opcodes, the instruction set fits into a smaller opcode field, reducing the total instruction width.
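A minimal sketch of tag-dispatched arithmetic in Python; the tag values and word layout here are invented for this example, and the actual Burroughs tag encoding differs:

    # One generic "add" opcode dispatching on per-word type tags (sketch).
    INT, FLOAT = 0, 1   # hypothetical tag values

    def add(word_a, word_b):
        (tag_a, a), (tag_b, b) = word_a, word_b
        if tag_a == INT and tag_b == INT:
            return (INT, a + b)              # integer add
        return (FLOAT, float(a) + float(b))  # any float operand: float add

    print(add((INT, 2), (INT, 3)))      # (0, 5)
    print(add((INT, 2), (FLOAT, 0.5)))  # (1, 2.5)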
External links
- Stack Computers: the new wave, a book by Philip J. Koopman, Jr., 1989
- Homebrew CPU in an FPGA - homebrew stack machine using FPGA
- Mark 1 FORTH Computer - homebrew stack machine using discrete logic circuits
- Mark 2 FORTH Computer - homebrew stack machine using bitslice/PLD