Japanese language and computers
Many adaptation issues arise in relation to the Japanese language and computers, some unique to Japanese and others common to languages with a very large number of characters. The number of characters needed to write English is very small, so a single byte can encode each English character. Japanese, however, has far more than 256 characters, so it cannot be encoded with one byte per character and instead uses two or more bytes per character, in a so-called "double-byte" or "multi-byte" encoding. Some problems relate to transliteration and romanization, some to character encoding, and some to the input of Japanese text.
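As a rough illustration, the following minimal Python sketch (using codec names from Python's standard library, such as "shift_jis") compares how many bytes the same text needs in single-byte and multi-byte encodings:

```python
# A minimal sketch comparing byte lengths of English and Japanese text
# in a few encodings available in Python's standard codec set.

english = "Japan"
japanese = "日本語"  # "Japanese language", three kanji

for text in (english, japanese):
    for codec in ("ascii", "shift_jis", "utf-8"):
        try:
            encoded = text.encode(codec)
            print(f"{text!r} in {codec}: {len(encoded)} bytes")
        except UnicodeEncodeError:
            # ASCII simply cannot represent kanji at all.
            print(f"{text!r} cannot be represented in {codec}")
```

Running this shows one byte per letter for the English string, but two or more bytes per character for the Japanese string in every encoding that can represent it.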
Character encodings
There are several standard methods to encode Japanese characters for use on a computer, including JIS, Shift-JIS, EUC, and Unicode. While mapping the set of kana is a simple matter, kanji has proven more difficult. Despite these efforts, none of the encoding schemes has become the de facto standard, and multiple encodings remain in use today. For example, most Japanese e-mail is in JIS encoding, web pages are often in Shift-JIS, while mobile phones in Japan usually use some form of EUC. If a program fails to determine the encoding scheme employed, the result is mojibake (misconverted characters; literally "transformed characters", from moji 文字, "character", and the stem of bakeru 化ける, "to change form") and thus unreadable text.
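The sketch below, again using Python's standard codecs, shows how mojibake arises when bytes written in one encoding are decoded with another; the exact garbled output depends on the decoder and its error handling, so this is illustrative only:

```python
# A small sketch of how mojibake arises: bytes written in one Japanese
# encoding are decoded with another. The codec names are Python's.

text = "文字化け"                      # "mojibake"
as_shift_jis = text.encode("shift_jis")

# A receiver that wrongly assumes EUC-JP sees wrong characters or
# replacement marks (errors="replace" is used so the example always
# prints something; a strict decoder would raise UnicodeDecodeError).
garbled = as_shift_jis.decode("euc_jp", errors="replace")
print(garbled)

# Decoding with the correct codec recovers the original text.
print(as_shift_jis.decode("shift_jis"))
```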
To understand how this state of affairs arose, it helps to know a little of the history of these encodings. The first encoding to become widely used was JIS X 0201, a single-byte encoding that covers only the standard 7-bit ASCII characters plus half-width katakana extensions. It was widely used in systems that had neither the processing power nor the storage to handle kanji, including DOS machines and old embedded equipment such as cash registers. The development of kanji encodings was the beginning of the split. Shift JIS was designed to be completely backward compatible with JIS X 0201, and is therefore used in Windows (for backwards compatibility with DOS) and in much embedded electronic equipment. However, Shift JIS has the unfortunate property that it often breaks any parser not specifically written to handle it (causing mojibake on many forum-style websites). EUC, on the other hand, is not backwards compatible with JIS X 0201, but is handled much better by parsers written for 7-bit ASCII, which is why EUC encodings are used on UNIX, where much of the file-handling code was historically written only for English encodings. A further complication is that the original Internet e-mail standards only support 7-bit transfer protocols, which is why the JIS encoding was developed for sending and receiving e-mail.
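The difference between these families can be seen by encoding the same string with each of them. In the minimal sketch below (Python's "iso2022_jp" codec corresponds to the JIS e-mail encoding), only the JIS form keeps every byte in the 7-bit range:

```python
# The same string in three common Japanese encodings. Every byte of the
# ISO-2022-JP (JIS) form stays below 0x80, which is why it suits 7-bit
# e-mail transfer, while Shift JIS and EUC-JP use 8-bit bytes.

text = "日本語"
for codec in ("iso2022_jp", "shift_jis", "euc_jp"):
    data = text.encode(codec)
    print(codec, data.hex(" "), "7-bit only:", all(b < 0x80 for b in data))
```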
Not all required characters are included in a character set standard such as JIS, so gaiji (外字, "external characters") are sometimes used to supplement the character set. Gaiji may come in the form of external font packs, in which normal characters are replaced with new characters, or in which new characters are added to unused character positions. However, gaiji are impractical in Internet environments, since the font set must be transferred along with the text for the gaiji to display. As a result, such characters are usually replaced with similar or simpler characters, or the text may need to be encoded using a larger character set (such as Unicode) that includes the required character.
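Gaiji themselves depend on external fonts and cannot be demonstrated in plain text, but the underlying problem, a character falling outside the base character set, can be sketched as follows. The example assumes that 髙 (a variant of 高 often used in personal names) is absent from Python's strict "shift_jis" codec but present in vendor extensions such as Microsoft's cp932 and in Unicode; behaviour may vary with the codec tables in use:

```python
# A character outside the base JIS character set: strict Shift JIS
# (assumed here to follow JIS X 0208) cannot encode it, while the
# cp932 vendor extension and UTF-8 can.

gaiji = "髙"
for codec in ("shift_jis", "cp932", "utf-8"):
    try:
        print(codec, gaiji.encode(codec).hex(" "))
    except UnicodeEncodeError:
        print(codec, "cannot encode this character")
```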
Text input
- Main article: Japanese input methods
Typing Japanese text on a computer is a complicated matter because Japanese has far more characters than there are keys on most keyboards. On modern computers, the user usually types the reading of the characters first; an input method editor (IME), sometimes also known as a front-end processor, then shows a list of candidate kanji that are a phonetic match and allows the user to choose the correct characters. More advanced IMEs work not by word but by phrase, increasing the likelihood that the desired characters appear as the first option presented. Input can be either via romanization (rōmaji nyūryoku) or direct kana input (kana nyūryoku). Direct kana input is not commonly used, but is widely supported.
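As a toy illustration only (not how any real IME is implemented), the candidate-selection step can be sketched as a dictionary from kana readings to homophonous words; real IMEs rank candidates using large dictionaries, frequency data, and surrounding context:

```python
# A toy sketch of IME candidate lookup: one kana reading maps to several
# kanji spellings, and the user chooses among them.

CANDIDATES = {
    "こうせい": ["構成", "校正", "厚生", "恒星"],  # all read "kousei"
    "きかん": ["期間", "機関", "器官", "帰還"],    # all read "kikan"
}

def candidates_for(reading):
    """Return possible kanji spellings for a kana reading."""
    return CANDIDATES.get(reading, [reading])

print(candidates_for("こうせい"))   # ['構成', '校正', '厚生', '恒星']
```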
There are two main systems for the romanization of Japanese, known as Kunrei-shiki and Hepburn; "keyboard romaji" (also known as wāpuro rōmaji, "word processor romaji") generally allows a loose combination of both, and IME implementations may even handle keys for letters unused in any romanization scheme, such as L, converting them to the most appropriate equivalent. With kana input, each key on the keyboard corresponds directly to one kana. The JIS keyboard layout is the national standard, but some people use alternatives such as the thumb-shift (oyayubi shift) layout.
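A minimal sketch of wāpuro-style romaji-to-kana conversion is shown below; the tiny table and greedy matching are illustrative assumptions, not the behaviour of any particular IME:

```python
# Wāpuro-style conversion accepting both Hepburn ("shi", "tsu") and
# Kunrei-shiki ("si", "tu") spellings for the same kana. A real IME
# covers every syllable plus special keys such as "nn" for ん.

ROMAJI_TO_KANA = {
    "shi": "し", "si": "し",
    "tsu": "つ", "tu": "つ",
    "ka": "か", "na": "な",
}

def to_kana(romaji):
    """Greedily convert a romaji string to hiragana (longest match first)."""
    out, i = [], 0
    while i < len(romaji):
        for length in (3, 2, 1):
            chunk = romaji[i:i + length]
            if chunk in ROMAJI_TO_KANA:
                out.append(ROMAJI_TO_KANA[chunk])
                i += length
                break
        else:
            out.append(romaji[i])   # pass through anything unknown
            i += 1
    return "".join(out)

print(to_kana("sika"))   # しか (Kunrei-style "si")
print(to_kana("shika"))  # しか (Hepburn-style "shi")
```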
Direction of text
Japanese can be written in two directions, called yokogaki and tategaki. Yokogaki runs horizontally, left to right, as in English, while tategaki runs in columns written from top to bottom, with the columns ordered right to left.
At present, handling of vertical text is incomplete. For example, HTML has no native support for tategaki, and Japanese users must use HTML tables to simulate it. However, CSS level 3 includes a property "writing-mode" which can render tategaki when given the value "tb-rl" (i.e. top to bottom, right to left). Word processors and DTP software have more complete support for it.
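As a sketch, the Python fragment below writes a small HTML file that asks the browser to lay the text out vertically using that property; "tb-rl" is the CSS3 draft value mentioned above, and "vertical-rl", the spelling used in later CSS specifications, is included as a fallback:

```python
# Generate a minimal HTML fragment requesting vertical (tategaki) layout.
# Browsers that do not recognise a value simply ignore that declaration.

TATEGAKI_HTML = """\
<div style="writing-mode: tb-rl; writing-mode: vertical-rl; height: 12em;">
  縦書きの例です。
</div>
"""

with open("tategaki.html", "w", encoding="utf-8") as f:
    f.write(TATEGAKI_HTML)
```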
External links
- A complete introduction to Japanese character encodings
- Chinese, Japanese, and Korean character set standards and encoding systems
- Japanese text encoding
Japanese text editors
- JWPce, a free Japanese word processor for Windows distributed under the GNU General Public License.