UTF-16 arose from an earlier, now obsolete, fixed-width 16-bit encoding known as UCS-2 (for 2-byte Universal Character Set),[2][3] once it became clear that more than 2¹⁶ (65,536) code points were needed,[4] including most emoji and important CJK characters such as those used for personal and place names.
The goal was to include all required characters from most of the world's languages, as well as symbols from technical domains such as science, mathematics, and music.
Two groups worked on this in parallel, ISO/IEC JTC 1/SC 2 and the Unicode Consortium, the latter representing mostly manufacturers of computing equipment.
The two groups attempted to synchronize their character assignments so that the developing encodings would be mutually compatible.
The proposal of a fixed-width 4-byte-per-character encoding (UCS-4) was resisted by the Unicode Consortium, both because 4 bytes per character wasted a lot of memory and disk space, and because some manufacturers were already heavily invested in 2-byte-per-character technology.
The UTF-16 encoding scheme was developed as a compromise and introduced with version 2.0 of the Unicode standard in July 1996.
Code points above U+FFFF are encoded as a pair of 16-bit code units chosen from the UTF-16 surrogate range 0xD800–0xDFFF, which had not previously been assigned to characters.
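As an illustration, here is a minimal Python sketch of the standard pairing arithmetic (subtract 0x10000, then split the 20-bit result across the two surrogate halves); the helper name is hypothetical:

    def to_surrogate_pair(code_point: int) -> tuple[int, int]:
        """Split a supplementary-plane code point (U+10000..U+10FFFF)
        into a UTF-16 high/low surrogate pair."""
        assert 0x10000 <= code_point <= 0x10FFFF
        offset = code_point - 0x10000       # 20-bit value
        high = 0xD800 + (offset >> 10)      # top 10 bits -> high surrogate
        low = 0xDC00 + (offset & 0x3FF)     # bottom 10 bits -> low surrogate
        return high, low

    # U+1F600 (grinning face) -> (0xD83D, 0xDE00)
    print([hex(u) for u in to_surrogate_pair(0x1F600)])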
As of Unicode 9.0, some modern non-Latin Asian, Middle-Eastern, and African scripts fall outside the Basic Multilingual Plane (the single-code-unit range U+0000–U+FFFF), as do most emoji characters.
Because the most commonly used characters all lie in the BMP, handling of surrogate pairs is often not thoroughly tested, leading to persistent bugs and potential security holes, even in popular and well-reviewed application software (e.g. CVE-2008-2938, CVE-2012-2135).
The official Unicode standard says that no UTF forms, including UTF-16, can encode the surrogate code points.
However, Windows allows unpaired surrogates in filenames[20] and other places, which generally means they have to be supported by software in spite of their exclusion from the Unicode standard.
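As a sketch of that tension, Python's standard codecs reject unpaired surrogates by default but provide a "surrogatepass" error handler that lets them round-trip, which is the kind of accommodation such software needs; the example value is illustrative:

    lone = "\ud800"                # an unpaired high surrogate, e.g. from a Windows filename

    try:
        lone.encode("utf-16-le")   # strict mode follows the Unicode standard and refuses
    except UnicodeEncodeError as err:
        print("strict:", err.reason)

    raw = lone.encode("utf-16-le", "surrogatepass")           # tolerated for interoperability
    print(raw)                                                # b'\x00\xd8'
    print(raw.decode("utf-16-le", "surrogatepass") == lone)   # True: round-trips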
The standard also allows the byte order to be stated explicitly by specifying UTF-16BE or UTF-16LE as the encoding type.
When the byte order is specified explicitly this way, a BOM is specifically not supposed to be prepended to the text, and a U+FEFF at the beginning should be handled as a ZWNBSP character.
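A short Python sketch of the difference between the labelled and unlabelled forms (output shown for a little-endian machine):

    s = "A"
    print(s.encode("utf-16"))      # b'\xff\xfeA\x00' -- generic UTF-16: BOM prepended
    print(s.encode("utf-16-le"))   # b'A\x00'         -- byte order explicit: no BOM
    print(s.encode("utf-16-be"))   # b'\x00A'         -- byte order explicit: no BOM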
The aliases UTF_16 or UTF16 may be meaningful in some programming languages or software applications, but they are not standard names in Internet protocols.
A stronger claim can be made for Devanagari and Bengali, which use multi-letter words whose letters each take 3 bytes in UTF-8 but only 2 in UTF-16.[29]
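For example, the Devanagari word "नमस्ते" consists of six code points, all in the U+0900 block, so the ratio can be checked directly in Python:

    word = "नमस्ते"                         # 6 Devanagari code points
    print(len(word.encode("utf-8")))       # 18 -- 3 bytes per letter
    print(len(word.encode("utf-16-le")))   # 12 -- 2 bytes per letter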
UTF-16 is used by the Qualcomm BREW operating systems, the .NET environments, and the Qt cross-platform graphical widget toolkit.
iPhone handsets use UTF-16 for the Short Message Service instead of the UCS-2 encoding described in the 3GPP TS 23.038 (GSM) and IS-637 (CDMA) standards.
Python 3.3 switched internal storage to use one of ISO-8859-1, UCS-2, or UTF-32 depending on the largest code point in the string.[31]
Python 3.12 drops some functionality (for CPython extensions) to make it easier to migrate to UTF-8 for all strings.
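The effect of the size-dependent storage can be observed with CPython's sys.getsizeof; the helper below is an illustrative probe, not an official API:

    import sys

    def bytes_per_char(s: str) -> int:
        # Appending one ASCII character keeps the same internal representation,
        # so the size difference is the per-character width of that representation.
        return sys.getsizeof(s + "x") - sys.getsizeof(s)

    print(bytes_per_char("abcd"))   # 1 -- everything fits in ISO-8859-1 (Latin-1)
    print(bytes_per_char("数据"))    # 2 -- largest code point fits in UCS-2
    print(bytes_per_char("😀"))     # 4 -- a non-BMP character forces UTF-32 storage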
Swift, Apple's preferred application language, used UTF-16 to store strings until version 5, which switched to UTF-8.[38]
A method to determine which encoding a system uses internally is to ask for the "length" of a string containing a single non-BMP character: a result of 2 indicates UTF-16 code units, 4 indicates UTF-8 bytes, and 1 usually means code points are being counted (as with UTF-32 storage).
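A Python sketch of the comparison for one non-BMP character:

    s = "😀"                                 # U+1F600, outside the BMP

    print(len(s))                            # 1 -- Python reports code points (UTF-32-like view)
    print(len(s.encode("utf-16-le")) // 2)   # 2 -- UTF-16 code units: a surrogate pair
    print(len(s.encode("utf-8")))            # 4 -- UTF-8 bytes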