Decimal computer
From Wikipedia, the free encyclopedia
Decimal computers represent numbers and/or addresses in decimal, and provide instructions to operate on those numbers and addresses directly; encodings used include BCD, Excess-3, two-out-of-five code, ASCII, and EBCDIC.
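As a minimal illustrative sketch (not from the article), the first three encodings above can be applied to a single decimal digit as follows; the two-out-of-five table shown is one common assignment, the 7-4-2-1-0 weighted code also used by POSTNET, and other assignments exist.

```python
# Three classic 4- or 5-bit encodings of a single decimal digit.

def bcd(d):
    """Plain BCD: the digit as its 4-bit binary value."""
    return format(d, "04b")

def excess_3(d):
    """Excess-3: the digit plus 3, in 4 bits (self-complementing)."""
    return format(d + 3, "04b")

def two_out_of_five(d):
    """Two-out-of-five: exactly two bits set, giving built-in error detection.
    This table uses the 7-4-2-1-0 weighted assignment (0 is the exception)."""
    table = ["11000", "00011", "00101", "00110", "01001",
             "01010", "01100", "10001", "10010", "10100"]
    return table[d]

for d in (1, 9, 4, 5):
    print(d, bcd(d), excess_3(d), two_out_of_five(d))
```

Note that every two-out-of-five codeword has exactly two 1-bits, so any single-bit error is detectable; plain BCD has no such redundancy.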
Many early computers, for example the ENIAC, IBM 702, IBM 705, IBM 650, IBM 1401, IBM 1620, IBM NORC, IBM 7070, IBM 7080, UNIVAC I, UNIVAC II and UNIVAC III used decimal arithmetic (IBM 1401 addresses were a combination of decimal and binary arithmetic).
Later, several microprocessors offered limited decimal support. For example, the 80x86 family of microprocessors provides instructions to convert one-byte BCD numbers (packed and unpacked) to and from binary before or after arithmetic operations.[1] These instructions were never extended to wider formats, and are now slower than computing in BCD with 32-bit or wider binary "tricks".[1]
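The core idea behind these one-byte instructions can be sketched as follows; this is a simplified model (an assumption, not the documented x86 semantics in full) of how a decimal-adjust step, in the style of the x86 DAA instruction, repairs a plain binary addition of two packed-BCD bytes.

```python
# Simplified model of decimal adjustment after binary addition of two
# packed-BCD bytes (two digits per byte), DAA-style.

def bcd_add_byte(a, b):
    """Add two packed-BCD bytes; return (adjusted result byte, decimal carry)."""
    s = a + b
    # If the low digit overflowed past 9 (or produced a nibble carry),
    # adding 6 skips the six unused codes A-F and restores a valid digit.
    if (s & 0x0F) > 9 or ((a & 0x0F) + (b & 0x0F)) > 0x0F:
        s += 0x06
    carry = 0
    # Same correction for the high digit, producing a decimal carry out.
    if s > 0x99:
        s += 0x60
        carry = 1
    return s & 0xFF, carry

print(hex(bcd_add_byte(0x47, 0x38)[0]))  # 0x85, i.e. 47 + 38 = 85
```

Because the adjustment works one byte at a time, multi-digit BCD arithmetic built on these instructions loops over bytes, which is why wider binary techniques now outperform them.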
The 68000 provided instructions for BCD addition and subtraction;[2] these instructions were removed when the ColdFire instruction set was defined. All IBM mainframes also provide BCD integer arithmetic in hardware.
Decimal arithmetic is now becoming more common; for instance, three decimal floating-point types with two binary encodings have been added to the proposed IEEE 754r standard, with 7-, 16-, and 34-digit decimal significands.[3]
The IBM POWER6 processor, the IBM System z9, and the IBM System z10 have implemented these types (the first and third in hardware, the second in microcode), using the Densely Packed Decimal scheme to encode the digits of the significand; the exponent is encoded in binary.[4]
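The three significand widths above correspond to the decimal32, decimal64, and decimal128 formats. Python's standard decimal module can emulate those precisions (a sketch of the rounding behavior only, not of the bit-level encodings):

```python
from decimal import Decimal, Context

# The three IEEE 754r decimal formats and their significand widths in digits.
for name, digits in [("decimal32", 7), ("decimal64", 16), ("decimal128", 34)]:
    ctx = Context(prec=digits)
    # 1/3 rounds to exactly the format's significand width.
    print(name, ctx.divide(Decimal(1), Decimal(3)))
```

In the 7-digit context, for example, 1/3 rounds to 0.3333333: the result carries exactly as many digits as the format's significand, which is what distinguishes the three types.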
References
- ^ MASM Programmer's Guide. Microsoft, 1992. Retrieved 2007-07-01.
- ^ Motorola M68000 Family Programmer's Reference Manual. Motorola. Retrieved 2007-07-01.
- ^ DRAFT Standard for Floating Point Arithmetic P754, 2006-10-04. Retrieved 2007-07-01.
- ^ General Decimal Arithmetic. IBM. Retrieved 2008-04-08.