Units of information

In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are also used to measure the entropy of random variables and information contained in messages.

The most commonly used units of data storage capacity are the bit, the capacity of a system that has only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed with the SI prefixes (power-of-ten prefixes) or with the newer IEC binary prefixes (power-of-two prefixes). Information capacity is considered to be a dimensionless quantity.

Primary units

[Figure] Comparison of units of information: bit, trit, nat, and ban. The height of each bar represents the quantity of information; the dark green level marks the nat.

In 1928, Ralph Hartley observed a fundamental storage principle,[1] which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm log_b N of the number N of possible states of that system. Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log_c N = (log_c b) log_b N. Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states.

When b is 2, the unit is the shannon, equal to the information content of one "bit" (a portmanteau of binary digit[2]). A system with 8 possible states, for example, can store up to log_2 8 = 3 bits of information. Other units that have been named include:

- the trit, for base b = 3 (one trit corresponds to log_2 3 ≈ 1.585 bits);
- the nat (or nit), for base b = e, the base of natural logarithms (one nat corresponds to log_2 e ≈ 1.443 bits);
- the ban (also called hartley or dit), for base b = 10 (one ban corresponds to log_2 10 ≈ 3.322 bits).

The trit, ban, and nat are rarely used to measure storage capacity; but the nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases.
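
To make the change-of-base relation above concrete, here is a minimal Python sketch (the helper name information_content is ours, purely for illustration) that expresses the information content of an N-state system in bits, nats, and bans using only the standard math module:

    import math

    def information_content(num_states: int, base: float = 2.0) -> float:
        """Information storable in a system with num_states equally likely
        states, measured in the unit fixed by the logarithm base:
        2 -> bits (shannons), 3 -> trits, math.e -> nats, 10 -> bans."""
        return math.log(num_states, base)

    print(information_content(8, 2))        # 3.0 (up to rounding): log_2 8 = 3 bits
    print(information_content(8, math.e))   # ~2.079 nats
    print(information_content(8, 10))       # ~0.903 bans
    print(math.log2(math.e))                # one nat expressed in bits, ~1.443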

Units derived from bit

Several conventional names are used for collections or groups of bits.

Byte

Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture; but today it almost always means eight bits – that is, an octet. A byte can represent 256 (2^8) distinct values, such as the integers 0 to 255, or −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte. Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
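
Since a byte groups eight bits, the same bit pattern can be read either as an unsigned value in 0 to 255 or as a signed two's-complement value in −128 to 127. A short Python sketch of the two readings (standard library only):

    # A byte holds 2**8 = 256 distinct values.
    pattern = 0b10000000   # the bit pattern 1000 0000

    as_unsigned = pattern                                                        # 128
    as_signed = int.from_bytes(bytes([pattern]), byteorder="big", signed=True)   # -128

    print(2 ** 8, as_unsigned, as_signed)   # 256 128 -128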

Nibble

A group of four bits, or half a byte, is sometimes called a nibble or nybble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same amount of information as one hexadecimal digit.[7]
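
Because one nibble corresponds exactly to one hexadecimal digit, a byte splits into a high and a low nibble, i.e. two hex digits. A minimal Python sketch:

    value = 0xA7                       # one byte: binary 1010 0111

    high_nibble = (value >> 4) & 0xF   # 0xA (decimal 10)
    low_nibble = value & 0xF           # 0x7 (decimal 7)

    print(f"{value:02X} -> {high_nibble:X} and {low_nibble:X}")   # A7 -> A and 7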

Word, block, and page

Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture, more commonly known as x86-32, a word is 16 bits, but other past and current architectures use words of 8, 9, 12, 18, 24, 26, 32, 36, 39, 40, 48, 56, 60, 64, 80 bits, or other sizes.

Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad").

Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks, or, in CPU caches, cache lines.

Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages.
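
The sizes mentioned above can be inspected from a running program. The following Python sketch uses the pointer size as a rough proxy for the machine word size and queries the operating system's page size; the printed values naturally depend on the machine it runs on.

    import mmap
    import struct

    # Pointer size as a proxy for the word size: 8 bytes on a typical
    # 64-bit system, 4 bytes on a 32-bit one.
    word_bytes = struct.calcsize("P")

    # Virtual-memory page size in bytes (often 4096 on current systems).
    page_bytes = mmap.PAGESIZE

    print(f"word: {word_bytes * 8} bits, page: {page_bytes} bytes")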

Systematic multiples

Terms for large quantities of bits can be formed using the standard range of SI prefixes for powers of 10, e.g., kilo = 10^3 = 1000 (as in kilobit or kbit), mega = 10^6 = 1000000 (as in megabit or Mbit) and giga = 10^9 = 1000000000 (as in gigabit or Gbit). These prefixes are more often used for multiples of bytes, as in kilobyte (1 kB = 8000 bit), megabyte (1 MB = 8000000 bit), and gigabyte (1 GB = 8000000000 bit).

However, for technical reasons, the capacities of computer memories and some storage units are often multiples of some large power of two, such as 2^28 = 268435456 bytes. To avoid such unwieldy numbers, people have often misused the SI prefixes to mean the nearest power of two, e.g., using the prefix kilo for 2^10 = 1024, mega for 2^20 = 1048576, giga for 2^30 = 1073741824, and so on. For example, a random access memory chip with a capacity of 2^28 bytes would be referred to as a 256-megabyte chip. The tables below illustrate these differences.
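
A quick numeric check of the 256-megabyte example, as a short Python sketch:

    capacity = 2 ** 28            # 268435456 bytes

    # "Binary" megabytes, the common usage for memory chips: 1 MB = 2**20 bytes.
    print(capacity / 2 ** 20)     # 256.0  -> sold as a "256-megabyte" chip

    # Strict SI megabytes: 1 MB = 10**6 bytes.
    print(capacity / 10 ** 6)     # 268.435456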

Multiples of bits

Decimal (SI)
Value               Symbol  Name
1000    = 10^3      kbit    kilobit
1000^2  = 10^6      Mbit    megabit
1000^3  = 10^9      Gbit    gigabit
1000^4  = 10^12     Tbit    terabit
1000^5  = 10^15     Pbit    petabit
1000^6  = 10^18     Ebit    exabit
1000^7  = 10^21     Zbit    zettabit
1000^8  = 10^24     Ybit    yottabit

Binary
Value               IEC                JEDEC
1024    = 2^10      Kibit  kibibit     Kbit  kilobit
1024^2  = 2^20      Mibit  mebibit     Mbit  megabit
1024^3  = 2^30      Gibit  gibibit     Gbit  gigabit
1024^4  = 2^40      Tibit  tebibit     -
1024^5  = 2^50      Pibit  pebibit     -
1024^6  = 2^60      Eibit  exbibit     -
1024^7  = 2^70      Zibit  zebibit     -
1024^8  = 2^80      Yibit  yobibit     -
Symbol  Prefix  SI meaning           Binary meaning       Size difference
k       kilo    10^3   = 1000^1      2^10 = 1024^1        2.40%
M       mega    10^6   = 1000^2      2^20 = 1024^2        4.86%
G       giga    10^9   = 1000^3      2^30 = 1024^3        7.37%
T       tera    10^12  = 1000^4      2^40 = 1024^4        9.95%
P       peta    10^15  = 1000^5      2^50 = 1024^5        12.59%
E       exa     10^18  = 1000^6      2^60 = 1024^6        15.29%
Z       zetta   10^21  = 1000^7      2^70 = 1024^7        18.06%
Y       yotta   10^24  = 1000^8      2^80 = 1024^8        20.89%
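
The "Size difference" column above follows directly from the ratio between the binary and decimal values; this small Python loop reproduces it:

    # Relative difference between the binary multiple 1024**n and the
    # decimal multiple 1000**n for each prefix in the table.
    for n, prefix in enumerate("kMGTPEZY", start=1):
        diff = 1024 ** n / 1000 ** n - 1
        print(f"{prefix}: {diff:.2%}")
    # k: 2.40%, M: 4.86%, G: 7.37%, T: 9.95%, P: 12.59%, E: 15.29%, Z: 18.06%, Y: 20.89%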

In the past, uppercase K has been used instead of lowercase k to indicate 1024 instead of 1000. However, this usage was never consistently applied.

On the other hand, for external storage systems (such as optical discs), the SI prefixes were commonly used with their decimal values (powers of 10). There have been many attempts to resolve the confusion by providing alternative notations for power-of-two multiples. In 1998 the International Electrotechnical Commission (IEC) issued a standard for this purpose, namely a series of binary prefixes that use 1024 instead of 1000 as the main radix:[8]

Multiples of bytes

Decimal (SI)
Value               Symbol  Name
1000    = 10^3      kB      kilobyte
1000^2  = 10^6      MB      megabyte
1000^3  = 10^9      GB      gigabyte
1000^4  = 10^12     TB      terabyte
1000^5  = 10^15     PB      petabyte
1000^6  = 10^18     EB      exabyte
1000^7  = 10^21     ZB      zettabyte
1000^8  = 10^24     YB      yottabyte

Binary
Value               IEC               JEDEC
1024    = 2^10      KiB  kibibyte     KB  kilobyte
1024^2  = 2^20      MiB  mebibyte     MB  megabyte
1024^3  = 2^30      GiB  gibibyte     GB  gigabyte
1024^4  = 2^40      TiB  tebibyte     -
1024^5  = 2^50      PiB  pebibyte     -
1024^6  = 2^60      EiB  exbibyte     -
1024^7  = 2^70      ZiB  zebibyte     -
1024^8  = 2^80      YiB  yobibyte     -
Symbol  Prefix               Value
Ki      kibi (binary kilo)   1 kibibyte (KiB) = 2^10 bytes = 1024 B
Mi      mebi (binary mega)   1 mebibyte (MiB) = 2^20 bytes = 1024 KiB
Gi      gibi (binary giga)   1 gibibyte (GiB) = 2^30 bytes = 1024 MiB
Ti      tebi (binary tera)   1 tebibyte (TiB) = 2^40 bytes = 1024 GiB
Pi      pebi (binary peta)   1 pebibyte (PiB) = 2^50 bytes = 1024 TiB
Ei      exbi (binary exa)    1 exbibyte (EiB) = 2^60 bytes = 1024 PiB

The JEDEC memory standards, however, define uppercase K, M, and G for the binary powers 2^10, 2^20 and 2^30 to reflect common usage.[9]
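
To illustrate how far the two conventions drift apart at larger sizes, here is a small Python sketch (the helper name describe is ours, not from any standard) that reports a byte count under both the SI/decimal and the IEC/binary prefixes:

    def describe(n_bytes: int) -> str:
        # SI/decimal: 1 GB = 10**9 bytes; IEC/binary: 1 GiB = 2**30 bytes.
        return (f"{n_bytes} B = {n_bytes / 10 ** 9:.2f} GB (decimal) "
                f"= {n_bytes / 2 ** 30:.2f} GiB (binary)")

    # A disk marketed as "500 GB" (storage vendors use decimal prefixes):
    print(describe(500 * 10 ** 9))
    # 500000000000 B = 500.00 GB (decimal) = 465.66 GiB (binary)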

Size examples

Obsolete and unusual units

Several other units of information storage have been named,[7] for example the dibit or crumb for two bits,[14][15] the slab or syllable for the 12-bit unit of the NCR 315,[28] and the doublet, quadlet, octlet, and hexlet for 2, 4, 8, and 16 bytes respectively.[29]

Some of these names are jargon, obsolete, or used only in very restricted contexts.

See also

References

  1. Norman Abramson (1963), Information theory and coding. McGraw-Hill.
  2. Mackenzie, Charles E. (1980). Coded Character Sets, History and Development. The Systems Programming Series (1 ed.). Addison-Wesley Publishing Company, Inc. p. xii. ISBN 0-201-14460-3. LCCN 77-90165. Retrieved 2016-05-22.
  3. Knuth, Donald Ervin. The Art of Computer Programming: Seminumerical algorithms. 2. Addison Wesley.
  4. Shanmugam (2006), Digital and Analog Computer Systems.
  5. Gregg Jaeger (2007), Quantum information: an overview
  6. I. Ravi Kumar (2001), Comprehensive Statistical Theory of Communication.
  7. Nybble at dictionary reference.com; sourced from Jargon File 4.2.0, accessed 2007-08-12
  8. ISO/IEC standard is ISO/IEC 80000-13:2008. This standard cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005. The only significant change is the addition of explicit definitions for some quantities. ISO Online Catalogue
  9. JEDEC Solid State Technology Association (December 2002). "Terms, Definitions, and Letter Symbols for Microcomputers, Microprocessors, and Memory Integrated Circuits" (PDF). JESD 100B.01. Retrieved 2009-04-05
  10. Horak, Ray (2007). Webster's New World Telecom Dictionary. John Wiley & Sons. p. 402. ISBN 9-78047022571-4.
  11. http://www.yourdictionary.com/unibit#computer
  12. Steinbuch, Karl W.; Wagner, Siegfried W., eds. (1967) [1962]. Written at Karlsruhe, Germany. Taschenbuch der Nachrichtenverarbeitung (in German) (2 ed.). Berlin / Heidelberg / New York: Springer-Verlag OHG. pp. 835–836. LCCN 67-21079. Title No. 1036.
  13. Steinbuch, Karl W.; Weber, Wolfgang; Heinemann, Traute, eds. (1974) [1967]. Written at Karlsruhe / Bochum. Taschenbuch der Informatik - Band III - Anwendungen und spezielle Systeme der Nachrichtenverarbeitung. Taschenbuch der Nachrichtenverarbeitung (in German). 3 (3 ed.). Berlin / Heidelberg / New York: Springer Verlag. pp. 357–358. ISBN 3-540-06242-4. LCCN 73-80607.
  14. Bertram, H. Neal (1994). Theory of magnetic recording (1 ed.). Cambridge University Press. ISBN 0-521-44973-1. 9-780521-449731. […] The writing of an impulse would involve writing a dibit or two transitions arbitrarily closely together. […]
  15. Weisstein, Eric. W. "Crumb". MathWorld. Retrieved 2015-08-02.
  16. Paul, Reinhold (2013). "Elektrotechnik und Elektronik für Informatiker - Grundgebiete der Elektronik". Leitfaden der Informatik. B.G. Teubner Stuttgart / Springer. ISBN 3-32296652-6. 9-78332296652-0. Retrieved 2015-08-03.
  17. Böhme, Gert; Born, Werner; Wagner, B.; Schwarze, G. (2013-07-02) [1969]. Reichenbach, Jürgen, ed. Programmierung von Prozeßrechnern. Reihe Automatisierungstechnik (in German). 79. VEB Verlag Technik Berlin, reprint: Springer Verlag. ISBN 978-3-663-00808-8. doi:10.1007/978-3-663-02721-8. 9/3/4185.
  18. Speiser, Ambrosius Paul (1965) [1961]. Digitale Rechenanlagen - Grundlagen / Schaltungstechnik / Arbeitsweise / Betriebssicherheit [Digital computers - Basics / Circuits / Operation / Reliability] (in German) (2 ed.). ETH Zürich, Zürich, Switzerland: Springer-Verlag / IBM. pp. 6, 34, 165, 183, 208, 213, 215. LCCN 65-14624. 0978.
  19. Steinbuch, Karl W., ed. (1962). Written at Karlsruhe, Germany. Taschenbuch der Nachrichtenverarbeitung (in German) (1 ed.). Berlin / Göttingen / New York: Springer-Verlag OHG. p. 1076. LCCN 62-14511.
  20. Svoboda, Antonín; White, Donnamaie E. (2016) [1979-08-01]. Advanced Logical Circuit Design Techniques (PDF) (retyped electronic reissue ed.). Garland STPM Press (original issue) / WhitePubs (reissue). ISBN 0-8240-7014-3. Archived (PDF) from the original on 2017-04-14. Retrieved 2017-04-15.
  21. IEEE 754-2008 - IEEE Standard for Floating-Point Arithmetic. 2008-08-29. ISBN 978-0-7381-5752-8. doi:10.1109/IEEESTD.2008.4610935. Retrieved 2016-02-10.
  22. Muller, Jean-Michel; Brisebarre, Nicolas; de Dinechin, Florent; Jeannerod, Claude-Pierre; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie; Stehlé, Damien; Torres, Serge (2010). Handbook of Floating-Point Arithmetic (1 ed.). Birkhäuser. ISBN 978-0-8176-4704-9. LCCN 2009939668. doi:10.1007/978-0-8176-4705-6.
  23. Erle, Mark A. (2008-11-21). Algorithms and Hardware Designs for Decimal Multiplication (Thesis). Lehigh University: ProQuest (published 2009). ISBN 9781109042283. 1109042280. Retrieved 2016-02-10.
  24. Kneusel, Ronald T. (2015). Numbers and Computers. Springer. ISBN 9783319172606. 3319172603. Retrieved 2016-02-10.
  25. Zbiciak, Joe. "AS1600 Quick-and-Dirty Documentation". Retrieved 2013-04-28.
  26. "315 Electronic Data Processing System" (PDF). NCR. November 1965. NCR MPN ST-5008-15. Archived (PDF) from the original on 2016-05-24. Retrieved 2015-01-28.
  27. Bardin, Hillel (1963). "NCR 315 Seminar" (PDF). Computer Usage Communique. 2 (3). Archived (PDF) from the original on 2016-05-24.
  28. Schneider, Carl (2013) [1970]. Datenverarbeitungs-Lexikon [Lexicon of information technology] (in German) (softcover reprint of hardcover 1st ed.). Wiesbaden, Germany: Springer Fachmedien Wiesbaden GmbH / Betriebswirtschaftlicher Verlag Dr. Th. Gabler GmbH. pp. 201, 308. ISBN 978-3-409-31831-0. doi:10.1007/978-3-663-13618-7. Retrieved 2016-05-24. slab, Abk. aus syllable = Silbe, die kleinste adressierbare Informationseinheit für 12 bit zur Übertragung von zwei Alphazeichen oder drei numerischen Zeichen. (NCR) […] Hardware: Datenstruktur: NCR 315-100 / NCR 315-RMC; Wortlänge: Silbe; Bits: 12; Bytes: –; Dezimalziffern: 3; Zeichen: 2; Gleitkommadarstellung: fest verdrahtet; Mantisse: 4 Silben; Exponent: 1 Silbe (11 Stellen + 1 Vorzeichen) [slab, abbr. for syllable = syllable, smallest addressable information unit for 12 bits for the transfer of two alphabetical characters or three numerical characters. (NCR) […] Hardware: Data structure: NCR 315-100 / NCR 315-RMC; Word length: Syllable; Bits: 12; Bytes: –; Decimal digits: 3; Characters: 2; Floating point format: hard-wired; Significand: 4 syllables; Exponent: 1 syllable (11 digits + 1 prefix)]
  29. IEEE Std 1754-1994 - IEEE Standard for a 32-bit Microcontroller Architecture. The Institute of Electrical and Electronic Engineers, Inc. pp. 5–7. ISBN 1-55937-428-4. doi:10.1109/IEEESTD.1995.79519. Retrieved 2016-02-10. (NB. The standard defines doublets, quadlets, octlets and hexlets as 2, 4, 8 and 16 bytes, giving the numbers of bits (16, 32, 64 and 128) only as a secondary meaning. This might be important given that bytes were not always understood to mean 8 bits (octets) historically.)
  30. Knuth, Donald Ervin (2004-02-15) [1999]. Fascicle 1: MMIX (PDF). The Art of Computer Programming (0th printing, 15th ed.). Stanford University: Addison-Wesley. Archived (PDF) from the original on 2017-03-30. Retrieved 2017-03-30.
  31. Böszörményi, László; Hölzl, Günther; Pirker, Emaneul (February 1999). Written at Salzburg, Austria. Zinterhof, Peter; Vajteršic, Marian; Uhl, Andreas, eds. Parallel Cluster Computing with IEEE1394–1995. Parallel Computation: 4th International ACPC Conference including Special Tracks on Parallel Numerics (ParNum '99) and Parallel Computing in Image Processing, Video Processing, and Multimedia. Proceedings: Lecture Notes in Computer Science 1557. Berlin, Germany: Springer Verlag.
  32. Nicoud, Jean-Daniel (1986). Calculatrices. Traité d’électricité de l'École polytechnique fédérale de Lausanne (in French). 14 (2 ed.). Lausanne: Presses polytechniques romandes. ISBN 2880740541.
  33. Proceedings. Symposium on Experiences with Distributed and Multiprocessor Systems (SEDMS). 4. USENIX Association. 1993.
  34. Brousentsov, N. P.; Maslov, S. P.; Ramil Alvarez, J.; Zhogolev, E.A. "Development of ternary computers at Moscow State University". Retrieved 2010-01-20.
  35. US4319227, Malinowski, Christopher W.; Heinz Rinderle & Martin Siegle, "Three-state signaling system", issued 1982-03-09, assigned to Department of Research and Development, AEG-Telefunken, Heilbronn, Germany
  36. http://www.google.com/patents/US4319227
  37. http://patentimages.storage.googleapis.com/pdfs/US4319227.pdf

This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.