In the words of its inventor, "... we would look at one name and I would tell you exactly a lot about that ...".[1] Hungarian notation was designed to be language-independent, and found its first major use with the BCPL programming language.
Because the only data type in BCPL is the machine word, nothing in the language itself helps a programmer remember variables' types; Hungarian notation aims to remedy this by providing the programmer with explicit knowledge of each variable's data type.
The original Hungarian notation was invented by Charles Simonyi, a programmer who worked at Xerox PARC circa 1972–1981, and who later became Chief Architect at Microsoft.
The name of the notation is a reference to Simonyi's nation of origin and also, according to Andy Hertzfeld, to the fact that it made programs "look like they were written in some inscrutable foreign language".
The similar Smalltalk "type last" naming style (e.g. aPoint and lastPoint) was common at Xerox PARC during Simonyi's tenure there.[citation needed]
Simonyi's paper on the notation referred to prefixes used to indicate the "type" of information being stored.[3][4]
His proposal was largely concerned with decorating identifier names based upon the semantic information of what they store (in other words, the variable's purpose), a style now known as Apps Hungarian.
In Systems Hungarian notation, by contrast, the prefix encodes the variable's actual data type rather than its purpose.
The mnemonics for pointers and arrays, which are not actual data types, are usually followed by the type of the data element itself: for example, pch for a pointer to a character and rgch for an array of characters.
While Hungarian notation can be applied to any programming language and environment, it was widely adopted by Microsoft for use with the C language, in particular for Microsoft Windows, and its use remains largely confined to that area.
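A short sketch in C (where the notation saw its widest use) shows how such prefixes are typically combined; the prefixes (i, ul, f, ch, sz, p, rg) are common Systems Hungarian mnemonics, while the identifier stems (Count, Size, Name, and so on) are merely illustrative.

    #include <stdbool.h>

    int            iCount;          // i  = int
    unsigned long  ulSize;          // ul = unsigned long
    bool           fDone;           // f  = flag (boolean)
    char           chInitial;       // ch = single character
    char           szName[32];      // sz = zero-terminated string
    char          *pchBuffer;       // pch  = pointer (p) to char (ch)
    char           rgchDigits[10];  // rgch = array (rg) of char (ch)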
A common criticism is that the prefixes make it harder to change the name or type of a variable, function, member or class.[10]
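A minimal C sketch of this drawback, using hypothetical names: widening a counter field from unsigned short to unsigned long makes its us prefix inaccurate, so the identifier has to be renamed everywhere it is used, not just where it is declared.

    // Before: the prefix "us" matches the declared type.
    struct Packet {
        unsigned short usLength;
    };

    // After widening the field, the old name would advertise the wrong
    // type, so every use of usLength must be renamed to ulLength.
    struct PacketV2 {
        unsigned long ulLength;
    };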
Although the Hungarian naming convention is no longer in widespread use, the basic idea of standardizing on terse, precise abbreviations continues to have value.