This followed the dramatic failure of the Intel iAPX 432 (1981), the emergence of optimizing compilers and of reduced instruction set computer (RISC) architectures and RISC-like complex instruction set computer (CISC) architectures, and the later development of just-in-time (JIT) compilation for HLLs.
More loosely, an HLLCA may simply be a general-purpose computer architecture with some features specifically intended to support a given HLL or several HLLs.
In the late 1990s, there were plans by Sun Microsystems and other companies to build CPUs that directly (or closely) implemented the stack-based Java virtual machine.
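To make concrete what implementing a stack-based virtual machine entails, the following is a minimal sketch in C, with invented opcodes rather than real JVM encodings, of the operand-stack dispatch loop that such a CPU would perform in hardware rather than in software:

```c
#include <stdio.h>

/* Hypothetical stack-machine opcodes, invented for illustration;
   these are not the real JVM instruction encodings. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

/* Evaluate a program by pushing operands and applying operators
   to the top of the operand stack. */
int run(const int *code) {
    int stack[64];
    int sp = 0;                       /* next free stack slot */
    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = code[pc++]; break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}

int main(void) {
    /* (2 + 3) * 4, expressed as stack code */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                      OP_PUSH, 4, OP_MUL, OP_HALT };
    printf("%d\n", run(program));     /* prints 20 */
    return 0;
}
```

A Java processor performs this fetch-decode-execute cycle for bytecodes directly in silicon instead of in a software loop such as this one.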
The HSA Intermediate Layer (HSAIL) of the Heterogeneous System Architecture (2012) provides a virtual instruction set that abstracts away from the underlying ISAs; it supports HLL features such as exceptions and virtual functions, and includes debugging support.
Tagged architectures are frequently used to support types (as in the Burroughs Large Systems and Lisp machines).
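As an illustration of tagging, such an architecture reserves a few bits of every machine word for a type tag that the hardware inspects before operating on the value. A minimal sketch in C follows; the two-bit tag layout and operation names are invented for illustration and do not correspond to any particular machine:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative tagged word: the low 2 bits hold a type tag and the
   remaining bits hold the value. The layout is invented, not that
   of any real tagged architecture. */
#define TAG_BITS 2
#define TAG_MASK ((1u << TAG_BITS) - 1)
enum { TAG_INT = 0, TAG_PTR = 1, TAG_CHAR = 2 };

typedef uint32_t word_t;

static word_t make_int(int32_t v)  { return ((word_t)v << TAG_BITS) | TAG_INT; }
static int32_t int_value(word_t w) { return (int32_t)w >> TAG_BITS; }
static unsigned tag_of(word_t w)   { return w & TAG_MASK; }

/* An addition that checks tags the way tagged hardware would,
   signalling a type error (a hardware trap) on a mismatch. */
static int checked_add(word_t a, word_t b, word_t *out) {
    if (tag_of(a) != TAG_INT || tag_of(b) != TAG_INT)
        return -1;                    /* type mismatch: would trap */
    *out = make_int(int_value(a) + int_value(b));
    return 0;
}

int main(void) {
    word_t r;
    if (checked_add(make_int(20), make_int(22), &r) == 0)
        printf("%d\n", int_value(r)); /* prints 42 */
    return 0;
}
```

On a tagged machine, a check like the one in checked_add is performed by the hardware itself rather than by extra instructions.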
More radical examples use a non-von Neumann architecture, though these are typically only hypothetical proposals, not actual implementations.
Some HLLCAs have been particularly popular as developer machines (workstations), due to fast compilation and low-level control of the system with a high-level language.
The software these systems depend on, from operating systems to virtual machines, leverages native code with no protection.
One solution is to use a processor custom-built to execute a safe high-level language, or at least to understand types. Academics are also developing languages with similar properties that might integrate with high-level processors in the future.
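As a sketch of what hardware-level type and memory protection could look like, the following models a bounds-carrying ("fat") pointer whose every access is checked; the representation and names are invented for illustration and do not describe any specific processor:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative fat pointer carrying the bounds that a
   safety-oriented processor could check on every access. */
typedef struct {
    uint8_t *base;
    size_t   length;
} checked_ptr;

/* A load that enforces bounds the way such hardware would,
   faulting instead of silently reading out of range. */
static uint8_t checked_load(checked_ptr p, size_t index) {
    if (index >= p.length) {
        fprintf(stderr, "bounds fault at index %zu\n", index);
        exit(1);                      /* hardware would raise a trap */
    }
    return p.base[index];
}

int main(void) {
    uint8_t buf[4] = { 10, 20, 30, 40 };
    checked_ptr p = { buf, sizeof buf };
    printf("%d\n", checked_load(p, 2));  /* prints 30 */
    printf("%d\n", checked_load(p, 9));  /* faults before reading */
    return 0;
}
```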
The simplest reason for the lack of success of HLLCAs is that, from 1980 onward, optimizing compilers produced much faster code and were easier to develop than a microcode implementation of a language.
Analogous performance problems have a long history with interpreted languages (dating to Lisp (1958)), only being resolved adequately for practical use by just-in-time compilation, pioneered in Self and commercialized in the HotSpot Java virtual machine (1999).
At a minimum, tokenization is needed, and typically syntactic analysis and basic semantic checks (such as detecting unbound variables) will still be performed – so there is no benefit to the front end – and optimization requires ahead-of-time analysis – so there is no benefit to the middle end either.
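A toy illustration of this unavoidable front-end work: tokenizing one line of invented source and flagging identifiers that are not bound, assuming a single declared name:

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Front-end work that even "direct execution" of source text
   cannot skip: split the input into tokens and check that every
   identifier is bound. Source line and names are invented. */
int main(void) {
    const char *src = "x + y * 2";
    const char *bound[] = { "x" };          /* declared names */
    size_t nbound = sizeof bound / sizeof bound[0];

    for (const char *p = src; *p; ) {
        if (isspace((unsigned char)*p)) { p++; continue; }
        if (isalpha((unsigned char)*p)) {   /* identifier token */
            const char *start = p;
            while (isalnum((unsigned char)*p)) p++;
            size_t len = (size_t)(p - start);
            int ok = 0;
            for (size_t i = 0; i < nbound; i++)
                if (strlen(bound[i]) == len && !strncmp(bound[i], start, len))
                    ok = 1;
            printf("ident %.*s%s\n", (int)len, start,
                   ok ? "" : "   <-- unbound variable");
        } else if (isdigit((unsigned char)*p)) { /* number token */
            const char *start = p;
            while (isdigit((unsigned char)*p)) p++;
            printf("number %.*s\n", (int)(p - start), start);
        } else {                            /* operator token */
            printf("op %c\n", *p++);
        }
    }
    return 0;
}
```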
A deeper problem, still an active area of development as of 2014,[5] is that providing HLL debugging information from machine code is quite difficult: partly because of the overhead of the debugging information itself, and more subtly because compilation (particularly optimization) makes determining the original source of a machine instruction quite involved.
Thus the debugging information that is an essential part of HLLCAs either severely constrains the implementation or adds significant overhead in ordinary use.
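To give a rough sense of both the overhead and the difficulty, debuggers consult tables that map ranges of machine addresses back to source locations. The following is a simplified sketch in that spirit; real formats such as DWARF line tables are far more elaborate, and every entry here is invented:

```c
#include <stdio.h>

/* Simplified sketch of a line table: each entry maps a range of
   machine-code addresses back to a source line. All values are
   invented for illustration. */
struct line_entry {
    unsigned addr_start;   /* first machine address of the range */
    unsigned addr_end;     /* one past the last address */
    unsigned line;         /* originating source line */
};

static const struct line_entry table[] = {
    { 0x00, 0x08, 12 },    /* prologue generated from line 12 */
    { 0x08, 0x14, 13 },
    { 0x14, 0x1c, 13 },    /* same line: the optimizer split it */
    { 0x1c, 0x24, 15 },    /* line 14 was optimized away entirely */
};

int main(void) {
    unsigned pc = 0x16;    /* address where execution stopped */
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (pc >= table[i].addr_start && pc < table[i].addr_end)
            printf("pc 0x%x maps to source line %u\n", pc, table[i].line);
    return 0;
}
```

The duplicated and missing entries show why optimized machine code resists a clean mapping back to its source, and every instruction range needs an entry, which is where the space overhead comes from.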
However, a similar issue arises even for many apparently language-neutral processors: they are well supported by C, and transpiling to C (rather than targeting the hardware directly) yields efficient programs and simple compilers.
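For example, a compiler for a higher-level language can emit ordinary C and leave instruction selection and optimization to the platform's C compiler; the source construct and emitted code below are invented for illustration:

```c
#include <stdio.h>

/* What a simple transpiler might emit for the invented source line
   "square(n) = n * n; print(square(7))": plain C that the target's
   C compiler then optimizes for its own ISA. */
static long hll_square(long n) {
    return n * n;
}

int main(void) {
    /* generated call corresponding to print(square(7)) */
    printf("%ld\n", hll_square(7));
    return 0;
}
```

The transpiler only needs to lower its language's constructs to C, not to any particular instruction set, which is what keeps such compilers simple.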