As a result, the 8-bit byte became the de facto datatype for computer systems storing ASCII characters in memory.
8-bit extensions such as IBM code page 37, PETSCII and ISO 8859 became commonplace, offering terminal support for Greek, Cyrillic, and many other scripts.
Early adoption of UCS-2 ("Unicode 1.0") led to common use of UTF-16 in a number of platforms, most notably Microsoft Windows, .NET and Java.[4]
The size of a wide character type does not dictate what kind of text encodings a system can process, as conversions are available.
Other systems, such as the Unix-likes, instead tend to retain the 8-bit "narrow string" convention, using a multibyte encoding (almost universally UTF-8) to handle "wide" characters.[5]
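As a sketch of how such a conversion can work on a narrow-string system, the following standard C program decodes a UTF-8 multibyte string into a wide string with mbstowcs(); the locale name "en_US.UTF-8" is an assumption, and its availability varies by platform:

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Select a UTF-8 locale so the narrow multibyte encoding is UTF-8;
           the locale name is an assumption and varies by system. */
        if (setlocale(LC_ALL, "en_US.UTF-8") == NULL) {
            fputs("UTF-8 locale unavailable\n", stderr);
            return EXIT_FAILURE;
        }

        const char *narrow = "caf\xc3\xa9";   /* "café": 4 characters, 5 UTF-8 bytes */

        /* mbstowcs() decodes the multibyte string into wide characters. */
        wchar_t wide[16];
        size_t len = mbstowcs(wide, narrow, sizeof wide / sizeof wide[0]);
        if (len == (size_t)-1) {
            perror("mbstowcs");
            return EXIT_FAILURE;
        }

        printf("%zu narrow bytes -> %zu wide characters\n", strlen(narrow), len);
        return EXIT_SUCCESS;
    }

On a Unix-like system this prints "5 narrow bytes -> 4 wide characters", since the accented character occupies two bytes in UTF-8 but a single wchar_t.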
The C and C++ standard libraries include a number of facilities for dealing with wide characters and strings composed of them.
The wide characters are defined using the datatype wchar_t, which the original C90 standard defined as "an integral type whose range of values can represent distinct codes for all members of the largest extended character set specified among the supported locales".
Both C and C++ introduced fixed-size character types char16_t and char32_t in the 2011 revisions of their respective standards to provide unambiguous representation of 16-bit and 32-bit Unicode transformation formats, leaving wchar_t implementation-defined.
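A minimal C11 sketch contrasting the implementation-defined width of wchar_t with the fixed-size types; the size printed for wchar_t depends on the platform (typically 2 bytes on Windows, 4 on Unix-likes):

    #include <stdio.h>
    #include <uchar.h>   /* char16_t, char32_t (C11) */
    #include <wchar.h>   /* wchar_t */

    int main(void)
    {
        wchar_t  w   = L'\u00E9';   /* U+00E9 é; width is implementation-defined */
        char16_t c16 = u'\u00E9';   /* one UTF-16 code unit (uint_least16_t) */
        char32_t c32 = U'\u00E9';   /* one UTF-32 code unit (uint_least32_t) */

        /* Typically prints 2 on Windows and 4 on Unix-likes for wchar_t,
           and 2 and 4 for the fixed-size C11 types. */
        printf("wchar_t: %zu, char16_t: %zu, char32_t: %zu bytes\n",
               sizeof w, sizeof c16, sizeof c32);
        return 0;
    }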