UTF-16/UCS-2
From Wikipedia, the free encyclopedia
In computing, UTF-16 (16-bit Unicode Transformation Format) is a variable-length character encoding for Unicode, capable of encoding the entire Unicode repertoire. The encoding form maps code points (characters) into a sequence of 16-bit words, called code units. For characters in the Basic Multilingual Plane (BMP) the resulting encoding is a single 16-bit word. For characters in the other planes, the encoding will result in a pair of 16-bit words, together called a surrogate pair. All possible code points from U+0000 through U+10FFFF, except for the surrogate code points U+D800–U+DFFF (which are not characters), are uniquely mapped by UTF-16 regardless of the code point's current or future character assignment or use.
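The one-unit-versus-two-units distinction can be observed directly with Python's built-in UTF-16 codec (an illustrative sketch, not part of any standard's text):

```python
# BMP characters occupy one 16-bit code unit (2 bytes) in UTF-16;
# characters from the other planes occupy two code units (4 bytes).
for ch in ("z", "水", "𝄞"):
    encoded = ch.encode("utf-16-be")  # big-endian serialization, no BOM
    print(f"U+{ord(ch):04X}: {len(encoded) // 2} code unit(s)")
```

Running this prints one code unit for U+007A and U+6C34 (both in the BMP) and two code units for U+1D11E (Plane 1).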
As many uses in computing require units of bytes (octets), there are three related encoding schemes which map to octet sequences instead of 16-bit words: namely UTF-16, UTF-16BE, and UTF-16LE. They differ only in the byte order chosen to represent each 16-bit unit and in whether they make use of a Byte Order Mark. All of the schemes result in either a two- or four-byte sequence for any given character.
UTF-16 is officially defined in Annex Q of the international standard ISO/IEC 10646-1. It is also described in The Unicode Standard version 3.0 and higher, as well as in the IETF's RFC 2781.
UCS-2 (2-byte Universal Character Set) is an obsolete character encoding which is a predecessor to UTF-16. The UCS-2 encoding form is nearly identical to that of UTF-16, except that it does not support surrogate pairs and therefore can only encode characters in the BMP range U+0000 through U+FFFF. As a consequence it is a fixed-length encoding that always encodes characters into a single 16-bit value. As with UTF-16, there are three related encoding schemes (UCS-2, UCS-2BE, UCS-2LE) that map characters to a specific byte sequence.
Because of the technical similarities and upwards compatibility from UCS-2 to UTF-16, the two encodings are often erroneously conflated and used as if interchangeable, so that strings encoded in UTF-16 are sometimes misidentified as being encoded in UCS-2.
For both UTF-16 and UCS-2, all 65,536 code points contained within the BMP (Plane 0), excluding the 2,048 special surrogate code points, are assigned to code units in a one-to-one correspondence with the 16-bit non-negative integers of the same values. Thus code point U+0000 is encoded as the number 0, and U+FFFF is encoded as 65535 (hexadecimal FFFF).
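This one-to-one correspondence can be checked with a short sketch (using Python's codec machinery for illustration):

```python
# For a BMP character, the single UTF-16 code unit is numerically
# equal to the code point itself.
ch = "水"  # U+6C34
unit = int.from_bytes(ch.encode("utf-16-be"), "big")
assert unit == ord(ch) == 0x6C34
```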
Encoding of characters outside the BMP
The improvement that UTF-16 made over UCS-2 is its ability to encode characters in planes 1–16, not just those in plane 0 (BMP).
UTF-16 represents non-BMP characters (those from U+10000 through U+10FFFF) using a pair of 16-bit code units, known as a surrogate pair. First, 0x10000 is subtracted from the code point to give a 20-bit value. This is then split into two 10-bit halves, each of which is represented as a surrogate, with the most significant half placed in the first surrogate. To allow safe use of simple word-oriented string processing, separate ranges of values are reserved for the two surrogates: 0xD800–0xDBFF for the first, most significant (lead) surrogate and 0xDC00–0xDFFF for the second, least significant (trail) surrogate.
For example, the character at code point U+10000 becomes the code unit sequence 0xD800 0xDC00, and the character at U+10FFFD, near the upper limit U+10FFFF of the Unicode code space, becomes the sequence 0xDBFF 0xDFFD. Unicode and ISO/IEC 10646 do not, and will never, assign characters to any of the code points in the U+D800–U+DFFF range, so an individual code value from a surrogate pair never represents a character on its own.
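The procedure described above can be sketched as a small function (the function name is illustrative, not from any standard):

```python
def encode_surrogate_pair(cp: int) -> tuple[int, int]:
    """Split a non-BMP code point (U+10000..U+10FFFF) into a surrogate pair."""
    assert 0x10000 <= cp <= 0x10FFFF
    v = cp - 0x10000                # 20-bit value
    lead = 0xD800 | (v >> 10)      # lead surrogate carries the top 10 bits
    trail = 0xDC00 | (v & 0x3FF)   # trail surrogate carries the bottom 10 bits
    return lead, trail

# The two examples from the text:
assert encode_surrogate_pair(0x10000) == (0xD800, 0xDC00)
assert encode_surrogate_pair(0x10FFFD) == (0xDBFF, 0xDFFD)
```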
Byte order encoding schemes
The UTF-16 and UCS-2 encoding forms produce a sequence of 16-bit words or code units. These are not directly usable as a byte or octet sequence because the endianness of these words varies according to the computer architecture; either big-endian or little-endian. To account for this choice of endianness each encoding form defines three related encoding schemes: for UTF-16 there are the schemes UTF-16, UTF-16BE, and UTF-16LE, and for UCS-2 there are the schemes UCS-2, UCS-2BE, and UCS-2LE.
The UTF-16 (and UCS-2) encoding scheme allows either endian representation to be used, but mandates that the chosen byte order be indicated explicitly by prepending a Byte Order Mark (BOM) before the first serialized character. The BOM is the encoded Zero-Width No-Break Space (ZWNBSP) character, code point U+FEFF, chosen because it should never legitimately appear at the beginning of character data. It produces the byte sequence FE FF (in hexadecimal) in big-endian serialization, or FF FE in little-endian. A BOM at the beginning of UTF-16 or UCS-2 encoded data is considered a signature separate from the text itself, present for the benefit of the decoder. Technically, the BOM is optional in the UTF-16 scheme: if it is missing, and no higher-level protocol indicates the byte order, big-endian is to be assumed. Omitting it is nevertheless discouraged; when the byte order is fixed in advance, the UTF-16BE or UTF-16LE scheme should be used instead. The BOM is not optional in the UCS-2 scheme.
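The signature behavior can be demonstrated with Python's codecs (a sketch; Python's `"utf-16"` codec implements the BOM-bearing scheme):

```python
import codecs

# Build a big-endian UTF-16 serialization of "z" with an explicit BOM.
with_bom = codecs.BOM_UTF16_BE + "z".encode("utf-16-be")
assert with_bom == b"\xfe\xff\x00\x7a"

# A decoder for the UTF-16 scheme detects the byte order from the BOM
# and consumes it; the BOM does not appear in the decoded text.
assert with_bom.decode("utf-16") == "z"
```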
The UTF-16BE and UTF-16LE encoding schemes (and correspondingly UCS-2BE and UCS-2LE) are similar to the UTF-16 (or UCS-2) encoding scheme. However, rather than using a BOM prepended to the data, the byte order is implicit in the name of the encoding scheme (LE for little-endian, BE for big-endian). Since a BOM is specifically not to be prepended in these schemes, an encoded ZWNBSP character found at the beginning of data in these schemes is not to be treated as a BOM, but as part of the text itself. In practice, most software will ignore these "accidental" BOMs.
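The difference in how the two kinds of scheme treat a leading FE FF can be seen by decoding the same bytes both ways (a sketch using Python's codecs):

```python
data = b"\xfe\xff\x00\x7a"

# Under the plain UTF-16 scheme, the leading FE FF is a BOM: it selects
# big-endian order and is stripped from the decoded text.
assert data.decode("utf-16") == "z"

# Under UTF-16BE, the same bytes are ordinary text: a ZWNBSP (U+FEFF)
# followed by "z".
assert data.decode("utf-16-be") == "\ufeffz"
```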
The IANA has approved UTF-16, UTF-16BE, and UTF-16LE for use on the Internet, by those exact names (case insensitively). The aliases UTF_16 or UTF16 may be meaningful in some programming languages or software applications, but they are not standard names in Internet protocols.
Use in major operating systems and environments
UTF-16 is the native internal representation of text in the Microsoft Windows 2000/XP/2003/Vista/CE and Qualcomm BREW operating systems; the Java and .NET bytecode environments; Mac OS X's Cocoa and Core Foundation frameworks; and the Qt cross-platform graphical widget toolkit.[1][2][citation needed]
Symbian OS, used in Nokia S60 handsets and Sony Ericsson UIQ handsets, uses UCS-2.
Older Windows NT systems (prior to Windows 2000) support only UCS-2.[3] The Python language environment has used UCS-2 internally since version 2.1, although newer versions can instead use UCS-4 (UTF-32) internally in order to store supplementary characters directly.
Examples
| code point | character | UTF-16 code value(s) | glyph* |
|---|---|---|---|
| 122 (hex 7A) | small z (Latin) | 007A | z |
| 27700 (hex 6C34) | water (Chinese) | 6C34 | 水 |
| 119070 (hex 1D11E) | musical G clef | D834 DD1E | 𝄞 |
"水z𝄞" (water, z, G clef), UTF-16 encoded | ||
---|---|---|
labeled encoding | byte order | byte sequence |
UTF-16LE | little-endian | 34 6C, 7A 00, 34 D8 1E DD |
UTF-16BE | big-endian | 6C 34, 00 7A, D8 34 DD 1E |
UTF-16 | little-endian, with BOM | FF FE, 7A 00, 34 6C, 34 D8 1E DD |
UTF-16 | big-endian, with BOM | FE FF, 00 7A, 6C 34, D8 34 DD 1E |
* Appropriate font and software are required to see the correct glyphs.
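The byte sequences in the table above can be reproduced with Python's codecs (shown without a BOM, since the codec names fix the byte order):

```python
s = "水z𝄞"  # water (U+6C34), z (U+007A), G clef (U+1D11E)

# Little-endian: each 16-bit code unit is serialized low byte first.
assert s.encode("utf-16-le").hex(" ") == "34 6c 7a 00 34 d8 1e dd"

# Big-endian: each 16-bit code unit is serialized high byte first.
assert s.encode("utf-16-be").hex(" ") == "6c 34 00 7a d8 34 dd 1e"
```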
Example UTF-16 encoding procedure
The character at code point U+64321 (hexadecimal) is to be encoded in UTF-16. Since it is above U+FFFF, it must be encoded with a surrogate pair, as follows:
v  = 0x64321
v′ = v − 0x10000 = 0x54321 = 0101 0100 0011 0010 0001
vh = 0101010000   // higher 10 bits of v′
vl = 1100100001   // lower 10 bits of v′
w1 = 0xD800       // the 1st code unit is initialized with the lead surrogate base
w2 = 0xDC00       // the 2nd code unit is initialized with the trail surrogate base
w1 = w1 | vh = 1101 1000 0000 0000 | 01 0101 0000 = 1101 1001 0101 0000 = 0xD950
w2 = w2 | vl = 1101 1100 0000 0000 | 11 0010 0001 = 1101 1111 0010 0001 = 0xDF21
The correct UTF-16 encoding for this character is thus the following word sequence:
0xD950 0xDF21
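The result of the hand calculation above can be cross-checked against Python's UTF-16 codec:

```python
import struct

# Encode U+64321 in big-endian UTF-16, then read back the two
# 16-bit code units to compare against the worked example.
data = "\U00064321".encode("utf-16-be")
units = struct.unpack(">2H", data)
assert units == (0xD950, 0xDF21)
```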
Since the character is above U+FFFF, the character cannot be encoded in UCS-2.
References
- ^ Unicode. microsoft.com. Retrieved on 2008-02-01.
- ^ Surrogates and Supplementary Characters. microsoft.com. Retrieved on 2008-02-01.
- ^ Description of storing UTF-8 data in SQL Server. microsoft.com (December 7, 2005). Retrieved on 2008-02-01.
External links
- Unicode Technical Note #12: UTF-16 for Processing
- Unicode FAQ: What is the difference between UCS-2 and UTF-16?
- Unicode Character Name Index
- RFC 2781: UTF-16, an encoding of ISO 10646