36-bit word length
In computer architecture, 36-bit integers, memory addresses, or other data units are those that are at most 36 bits (six six-bit characters) wide. Also, 36-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size.
Many early computers aimed at the scientific market had a 36-bit word length. This word length was just long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum). It also allowed the storage of six alphanumeric characters encoded in a six-bit character encoding. Prior to the introduction of computers, the state of the art in precision scientific and engineering calculation was the ten-digit, electrically powered, mechanical calculator, such as those manufactured by Friden, Marchant and Monroe. These calculators had a column of keys for each digit and operators were trained to use all their fingers when entering numbers, so while some specialized calculators had more columns, ten was a practical limit. Computers, as the new competitor, had to match that accuracy. Decimal computers sold in that era, such as the IBM 650 and the IBM 7070, had a word length of ten digits, as did ENIAC, one of the earliest computers.
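The arithmetic behind the 35-bit minimum can be checked directly. The following C sketch (an illustration, not part of the original article) finds the smallest number of magnitude bits that can hold a ten-decimal-digit value and adds one bit for the sign:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Largest ten-decimal-digit magnitude: 9,999,999,999 */
    uint64_t max_ten_digits = 9999999999ULL;

    /* Find the smallest number of magnitude bits that can hold it */
    int magnitude_bits = 0;
    while ((1ULL << magnitude_bits) - 1 < max_ten_digits)
        magnitude_bits++;

    /* One additional bit is needed for the sign */
    printf("magnitude bits: %d, with sign: %d\n",
           magnitude_bits, magnitude_bits + 1);   /* prints 34 and 35 */
    return 0;
}
```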
Computers with 36-bit words included the MIT Lincoln Laboratory TX-2, the IBM 701/704/709/7090/7040, the UNIVAC 1103/1103A/1105/1100/2200, the General Electric 600/Honeywell 6000, the Digital Equipment Corporation PDP-6/10 (as used in the DECsystem-10/DECSYSTEM-20), and the Symbolics 3600 series. Smaller machines like the PDP-1/9/15 used 18-bit words, so a double word would be 36 bits. EDSAC had a similar scheme.
These computers used 18-bit word addressing, not byte addressing, giving an address space of 2¹⁸ 36-bit words, approximately 1 megabyte of storage. Many of them were originally limited to a similar amount of physical memory as well. Architectures that survived evolved over time to support larger virtual address spaces using memory segmentation or other mechanisms.
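The size of that address space follows directly from the word length and address width; a minimal C sketch of the calculation (illustrative only):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 18-bit word addresses give 2^18 addressable words */
    uint64_t words = 1ULL << 18;          /* 262,144 words */
    uint64_t bits  = words * 36;          /* total bits of storage */
    uint64_t bytes = bits / 8;            /* 1,179,648 8-bit bytes */

    printf("%llu words = %llu bytes (~%.2f MiB)\n",
           (unsigned long long)words,
           (unsigned long long)bytes,
           bytes / (1024.0 * 1024.0));    /* ~1.13 MiB */
    return 0;
}
```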
The common character packings included:
- six 6-bit Fieldata or IBM BCD characters (ubiquitous in early usage)
- five 7-bit characters and 1 unused bit (the usual PDP-6/10 convention)
- four 8-bit characters (7-bit ASCII plus 1 unused bit or 8-bit EBCDIC) and 4 unused bits
- four 9-bit characters (the Multics convention).
Characters were extracted from words either by standard shift and mask operations or by special-purpose hardware supporting 6-bit, 9-bit, or variable-length characters. The UNIVAC 1100/2200 used the partial-word designator of the instruction (the "J" field) to access characters. The GE-600 used special indirect words to access 6- and 9-bit characters; the PDP-6/10 had special instructions to access arbitrary-length byte fields. The C programming language requires that all memory be accessible as bytes, so C implementations on 36-bit machines use 9-bit bytes.
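As an illustration of the shift-and-mask approach, the C sketch below packs five 7-bit ASCII characters into the high 35 bits of a 36-bit word (held in the low bits of a 64-bit integer, with the low-order bit unused, as in the usual PDP-6/10 layout) and extracts them again. The `get_char` helper is hypothetical, written only for this example:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper: extract the n-th character (0 = leftmost) of a
   given width from a 36-bit word held in the low bits of a uint64_t. */
static unsigned get_char(uint64_t word36, int width, int n) {
    int shift = 36 - width * (n + 1);        /* bit position of the field */
    uint64_t mask = (1ULL << width) - 1;     /* e.g. 0x7F for 7-bit chars */
    return (unsigned)((word36 >> shift) & mask);
}

int main(void) {
    /* Pack the five 7-bit ASCII characters "HELLO" into one 36-bit word,
       leaving the low-order bit unused. */
    const char *s = "HELLO";
    uint64_t word = 0;
    for (int i = 0; i < 5; i++)
        word |= (uint64_t)(s[i] & 0x7F) << (36 - 7 * (i + 1));

    /* Unpack them again with shift and mask */
    for (int i = 0; i < 5; i++)
        putchar((int)get_char(word, 7, i));
    putchar('\n');                           /* prints HELLO */
    return 0;
}
```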
By the time IBM introduced System/360, scientific calculations had shifted to floating point and mechanical calculators were no longer a competitor. The System/360 also included instructions for variable-length decimal arithmetic for commercial applications, so the practice of using word lengths that were a power of two quickly became universal.