Japanese language and computers
In relation to the Japanese language and computers, many adaptation issues arise, some unique to Japanese and others common to languages with a very large number of characters. The number of characters needed to write English is small enough that a single byte can encode each character. Japanese, however, has far more than 256 characters, so it cannot be encoded with one byte per character; it is instead encoded using two or more bytes per character, in a so-called "double-byte" or "multi-byte" encoding. Problems that arise relate to transliteration and romanization, character encoding, and input of Japanese text.
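As a rough illustration (a sketch using Python's built-in codecs; the specific characters are arbitrary examples), the same logical character can occupy one or several bytes depending on the script and encoding:

```python
# One byte suffices for an English letter, but a kanji needs two or more
# bytes depending on the encoding.
print(len('a'.encode('utf-8')))        # 1 byte
print(len('日'.encode('shift_jis')))   # 2 bytes in Shift-JIS
print(len('日'.encode('utf-8')))       # 3 bytes in UTF-8
```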
Character encodings
There are several standard methods to encode Japanese characters for use on a computer, including JIS, Shift-JIS, EUC, and Unicode. While mapping the set of kana is a simple matter, kanji has proven more difficult. Despite these efforts, none of the encoding schemes became the de facto standard, and multiple encoding standards were still in use in the 2000s. As of 2017, the usage share of UTF-8 on the Internet had expanded to over 90% worldwide, with Shift-JIS and EUC accounting for only about 1.2%. Yet a few popular websites, including 2channel and kakaku.com, still use Shift-JIS. Until the 2000s, most Japanese e-mails were in ISO-2022-JP and web pages in Shift-JIS, while mobile phones in Japan usually used some form of Extended Unix Code. If a program fails to determine the encoding scheme employed, it can cause mojibake (garbled characters) and thus unreadable text on computers.
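A minimal sketch of how a wrong guess produces mojibake, assuming Python's built-in codecs and an arbitrary sample phrase:

```python
# Bytes written in one encoding but read in another decode to garbage.
data = '文字化け'.encode('shift_jis')            # written as Shift-JIS
print(data.decode('shift_jis'))                  # 文字化け (correct guess)
print(data.decode('euc_jp', errors='replace'))   # mojibake (wrong guess)
```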
The first encoding to become widely used was JIS X 0201, a single-byte encoding that covers only standard 7-bit ASCII characters plus half-width katakana extensions. It was widely used in systems that lacked the processing power or storage to handle kanji: kana-to-kanji conversion required a complicated process, and kanji output demanded considerable memory and high display resolution. This means that only katakana, not kanji, was supported using this technique. Some embedded displays still have this limitation.
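The half-width katakana range survives in Shift-JIS (which is backward compatible with JIS X 0201), so it can be used to illustrate the single-byte scheme; a sketch using Python's codecs:

```python
# Half-width katakana (inherited from JIS X 0201) fit in one byte each,
# while ordinary kana need two bytes in Shift-JIS.
print('ｶﾅ'.encode('shift_jis'))    # b'\xb6\xc5' - one byte per character
print('かな'.encode('shift_jis'))  # b'\x82\xa9\x82\xc8' - two bytes each
```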
The development of kanji encodings was the beginning of the split. Shift JIS supports kanji; it was developed to be completely backward compatible with JIS X 0201 and is therefore found in much embedded electronic equipment. However, Shift JIS has the unfortunate property that it often breaks any parser that is not specifically designed to handle it.
For example, the second byte of some Shift-JIS characters has the value 0x5C, which is the backslash character in ASCII and is used as an escape character in many programming languages. A parser lacking support for Shift JIS will recognize 0x5C as the start of an invalid escape sequence and remove it, causing mojibake in any phrase containing such a character. This can happen, for example, in the C programming language when Shift-JIS text appears in string literals. It does not happen in HTML, since the byte values 0x00-0x3F never appear as the second byte of a Shift-JIS character.
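The kanji 表 is a well-known instance: its Shift-JIS encoding is 0x95 0x5C, ending in the ASCII backslash. A short Python sketch makes the collision visible:

```python
# The second byte of 表 in Shift-JIS is 0x5C, identical to ASCII '\'.
data = '表'.encode('shift_jis')
print(data)                   # b'\x95\\'  (0x95 followed by 0x5C)
print(data[1] == ord('\\'))   # True - a naive parser sees an escape here
```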
EUC, on the other hand, is handled much better by parsers that have been written for 7-bit ASCII. But EUC is not backward compatible with JIS X 0201, the first main Japanese encoding. Further complications arise because the original Internet e-mail standards only support 7-bit transfer protocols. Thus ISO-2022-JP was developed for sending and receiving e-mails; a variant of ISO-2022 is also used in the closed captions of Japanese TV broadcasting.
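A sketch showing that ISO-2022-JP stays within 7 bits, using Python's built-in codec and an arbitrary sample string:

```python
# ISO-2022-JP switches character sets with escape sequences, so every
# byte stays below 0x80 and survives 7-bit e-mail transfer.
data = '日本語'.encode('iso2022_jp')
print(data)                               # escape sequences plus 7-bit bytes
assert all(byte < 0x80 for byte in data)  # nothing exceeds 7 bits
```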
In character set standards such as JIS, not all required characters are included, so gaiji ("external characters") are sometimes used to supplement the character set. Gaiji may come in the form of external font packs, where normal characters have been replaced with new characters, or where the new characters have been added to unused character positions. However, gaiji are not practical in Internet environments, since the font set must be transferred along with the text for the gaiji to display. As a result, similar or simpler characters are substituted in their place, or the text may need to be encoded using a larger character set that supports the required character.
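In Unicode-based systems, the analogous "unused character positions" are the Private Use Area (U+E000-U+F8FF); a minimal sketch, where the chosen code point is an arbitrary example:

```python
# A hypothetical gaiji mapped to the Private Use Area: it encodes and
# round-trips like any character, but renders only with a matching font.
gaiji = chr(0xE000)                    # first PUA code point (arbitrary)
assert 0xE000 <= ord(gaiji) <= 0xF8FF  # inside the Private Use Area
print(gaiji.encode('utf-8'))           # b'\xee\x80\x80'
```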
Unicode was intended to solve all encoding problems across all languages. The UTF-8 encoding used to encode Unicode in web pages does not have the disadvantages that Shift-JIS has. Unicode is supported by international software, and it eliminates the need for gaiji. There are still controversies, however. For Japanese, the kanji characters have been unified with Chinese; that is, a character considered to be the same in both Japanese and Chinese is given a single code point, even if its appearance actually differs somewhat, with the precise appearance left to a locale-appropriate font. This process, called Han unification, has caused controversy. Whereas the previous encodings in Japan, Taiwan, mainland China, and Korea each handled only one language, Unicode is meant to handle them all, and the handling of Han characters was designed by a committee composed of representatives from all four countries/areas.
Text input
Written Japanese uses several different scripts: kanji, two sets of kana, and roman letters. While kana and roman letters can be typed directly into a computer, entering kanji is a more complicated process, as there are far more kanji than there are keys on most keyboards. To input kanji on modern computers, the reading of the kanji is usually entered first; then an input method editor (IME), sometimes also known as a front-end processor, shows a list of candidate kanji that are a phonetic match and allows the user to choose the correct one. More advanced IMEs work not by word but by phrase, increasing the likelihood that the desired characters are the first option presented. Kanji readings can be entered either via romanization (romaji input) or via direct kana input. Romaji input is more common on PCs and other full-size keyboards, whereas direct kana input is typically used on mobile phones and similar devices: each of the 10 digits corresponds to one of the 10 columns in the gojūon table of kana, and multiple presses select the row.

There are two main systems for the romanization of Japanese, known as Kunrei-shiki and Hepburn; in practice, "keyboard romaji" generally allows a loose combination of both, as the sketch below illustrates. IME implementations may even handle keys for letters unused in any romanization scheme, such as L, converting them to the most appropriate equivalent. With kana input, each key on the keyboard directly corresponds to one kana. The JIS keyboard system is the national standard, but there are alternatives, like the thumb-shift keyboard, commonly used among professional typists.
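A minimal sketch of the first stage of romaji input, conversion to hiragana via longest-match lookup; the table, function name, and matching rule are simplified illustrations, not any real IME's implementation:

```python
# A toy romaji-to-hiragana converter using longest-match lookup.
# Real IMEs use far larger tables and also resolve n/nn, doubled
# consonants (sokuon), and other rules.
ROMAJI_TO_HIRAGANA = {
    'shi': 'し', 'si': 'し',   # Hepburn and Kunrei spellings both accepted
    'ni': 'に', 'ho': 'ほ', 'ka': 'か', 'n': 'ん',
}

def romaji_to_kana(text: str) -> str:
    out, i = [], 0
    while i < len(text):
        # Try the longest romaji sequence first (3 letters, then 2, then 1).
        for length in (3, 2, 1):
            chunk = text[i:i + length]
            if chunk in ROMAJI_TO_HIRAGANA:
                out.append(ROMAJI_TO_HIRAGANA[chunk])
                i += length
                break
        else:
            out.append(text[i])   # pass unknown characters through unchanged
            i += 1
    return ''.join(out)

print(romaji_to_kana('nihon'))   # にほん
print(romaji_to_kana('shika'))   # しか (Hepburn spelling)
print(romaji_to_kana('sika'))    # しか (Kunrei spelling, same result)
```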
Direction of text
Japanese can be written in two directions. Yokogaki style writes left to right, top to bottom, as with English. Tategaki style writes first top to bottom, then moves from right to left.

To compete with Ichitaro, Microsoft provided several updates for early Japanese versions of Microsoft Word, including support for vertical text, such as the Word 5.0 Power Up Kit and Word 98.
QuarkXPress was the most popular DTP software in Japan in the 1990s, even though it had a long development cycle. However, because it lacked support for vertical text, it was surpassed by Adobe InDesign, which gained strong support for vertical text through several updates.
At present, handling of vertical text is incomplete. For example, HTML has no native support for tategaki, and Japanese users must use HTML tables to simulate it. However, CSS level 3 includes a "writing-mode" property that can render tategaki when given the value "vertical-rl" (vertical lines, advancing right to left). Word processors and DTP software have more complete support for it.
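A minimal sketch of the CSS approach, written as a Python script that generates a test page (the file name and sample sentence are arbitrary); any browser supporting CSS level 3 writing modes renders the result in tategaki:

```python
# Generate a small HTML page that renders tategaki via CSS level 3:
# "writing-mode: vertical-rl" lays out lines top-to-bottom, right-to-left.
page = """<!DOCTYPE html>
<html lang="ja">
<head>
<meta charset="utf-8">
<style>body { writing-mode: vertical-rl; }</style>
</head>
<body>縦書きの例。上から下へ、右から左へと進む。</body>
</html>
"""

with open('tategaki.html', 'w', encoding='utf-8') as f:  # arbitrary file name
    f.write(page)
```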