Some early Soviet computer designers implemented systems based on ternary logic; that is, each digit (a "trit") could have three states: +1, 0, or -1, corresponding to positive, zero, or negative voltage.
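The balanced-ternary encoding behind such designs can be sketched as follows. This is an illustrative conversion routine only, not code for any historical machine; each digit (trit) takes the value -1, 0, or +1.

```c
#include <stdio.h>

/* Convert n to balanced ternary, least-significant trit first.
 * Each trit is -1, 0, or +1.  Returns the number of trits written. */
static int to_balanced_ternary(int n, int trits[], int max_trits) {
    int count = 0;
    while (n != 0 && count < max_trits) {
        int r = ((n % 3) + 3) % 3;   /* remainder forced into 0..2 */
        n = (n - r) / 3;             /* exact floor division */
        if (r == 2) {                /* a 2 is written as -1 plus a carry */
            trits[count++] = -1;
            n += 1;
        } else {
            trits[count++] = r;      /* 0 or +1 */
        }
    }
    return count;
}

int main(void) {
    int trits[32];
    int n = to_balanced_ternary(-7, trits, 32);
    for (int i = n - 1; i >= 0; i--)   /* print most significant trit first */
        printf("%+d ", trits[i]);
    printf("\n");   /* prints "-1 +1 -1": (-1)*9 + (+1)*3 + (-1)*1 = -7 */
    return 0;
}
```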
An early project for the U.S. Air Force, BINAC attempted to make a lightweight, simple computer by using binary arithmetic.
It was a major revelation to designers of this period to realize that the data word should be a multiple of the character size.
As users' needs grew, they could move up to larger computers, and still keep all of their investment in programs, data and storage media.
A high-end machine would use a much more complex processor that could directly execute more of the S/360 design in hardware, thus running a much simpler and faster emulator.
Even though the computer was complex, its control store holding the microprogram would stay relatively small and could be made with very fast memory.
Thus the computers would generally fetch fewer instructions from the main memory, which could then be made slower, smaller, and less costly for a given balance of speed and price.
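A hedged sketch of the idea follows: a hypothetical "add memory to register" macro-instruction expands into simple micro-operations held in the fast control store, so only one instruction has to be fetched from the slower main memory. None of this reflects actual S/360 microcode.

```c
#include <stdio.h>

/* Hypothetical micro-operations for one complex macro-instruction. */
typedef enum { UOP_CALC_ADDR, UOP_READ_MEM, UOP_ALU_ADD, UOP_WRITE_REG, UOP_END } micro_op;

/* Control-store contents for the made-up ADD_MEM opcode. */
static const micro_op add_mem_ucode[] = {
    UOP_CALC_ADDR,   /* compute effective address (base + displacement) */
    UOP_READ_MEM,    /* fetch the operand word from main memory         */
    UOP_ALU_ADD,     /* add it to the destination register              */
    UOP_WRITE_REG,   /* write the sum back to the register file         */
    UOP_END
};

int main(void) {
    /* The micro-sequencer simply steps through the control store. */
    for (int i = 0; add_mem_ucode[i] != UOP_END; i++)
        printf("micro-step %d: uop %d\n", i, (int)add_mem_ucode[i]);
    return 0;
}
```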
In 1961, the B5000 had virtual memory, symmetric multiprocessing, and a multiprogramming operating system (the Master Control Program, or MCP) written in ALGOL 60; it also had the industry's first recursive-descent compilers, as early as 1964.
However, the more capable 8080 also became the original target CPU for CP/M, an early de facto standard personal computer operating system, and was used for demanding control tasks such as cruise missiles, among many other applications.
A bit slice component is a piece of an arithmetic logic unit (ALU), register file or microsequencer.
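For illustration, the sketch below chains hypothetical 4-bit ALU slices through their carry signals to form a 16-bit adder; it models the general technique rather than any specific commercial bit-slice part.

```c
#include <stdint.h>
#include <stdio.h>

/* One hypothetical 4-bit ALU slice: adds two 4-bit operands plus a carry-in,
 * producing a 4-bit result and a carry-out.  Wider ALUs are built by
 * chaining slices through their carry signals. */
static uint8_t alu_slice_add(uint8_t a4, uint8_t b4, int cin, int *cout) {
    unsigned sum = (a4 & 0xF) + (b4 & 0xF) + (cin ? 1 : 0);
    *cout = (sum > 0xF);
    return (uint8_t)(sum & 0xF);
}

/* A 16-bit add built from four cascaded 4-bit slices (ripple carry). */
static uint16_t add16(uint16_t a, uint16_t b) {
    uint16_t result = 0;
    int carry = 0;
    for (int slice = 0; slice < 4; slice++) {
        uint8_t nibble = alu_slice_add((a >> (4 * slice)) & 0xF,
                                       (b >> (4 * slice)) & 0xF,
                                       carry, &carry);
        result |= (uint16_t)(nibble << (4 * slice));
    }
    return result;
}

int main(void) {
    printf("%u\n", (unsigned)add16(12345, 6789));   /* prints 19134 */
    return 0;
}
```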
Panafacom, a conglomerate formed by Japanese companies Fujitsu, Fuji Electric, and Matsushita, introduced the MN1610, a commercial 16-bit microprocessor.
The result was a very simple core CPU running at very high speed, supporting the sorts of operations the compilers were using anyway.
In a Harvard Architecture machine, the program and data occupy separate memory devices and can be accessed simultaneously.
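A minimal sketch of the idea, assuming a toy machine rather than any real Harvard-architecture processor, keeps instructions and data in separate arrays so that a fetch and a data access can be modeled in the same cycle:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy Harvard-architecture model: program and data live in physically
 * separate memories, so an instruction fetch and a data access can occur
 * in the same cycle without contending for one bus. */
struct harvard_machine {
    uint16_t program[256];   /* instruction memory (addressed by the PC)     */
    uint8_t  data[256];      /* data memory (read/written by loads/stores)   */
    uint8_t  pc;
};

int main(void) {
    struct harvard_machine m = {0};
    m.program[0] = 0x1234;   /* placeholder encoded instruction */
    m.data[7]    = 42;       /* some data value                 */

    /* In one simulated cycle, fetch an instruction AND read data:
     * possible because the two accesses go to separate memories. */
    uint16_t instr   = m.program[m.pc];
    uint8_t  operand = m.data[7];
    printf("fetched %04x, read %u\n", (unsigned)instr, (unsigned)operand);
    return 0;
}
```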
In the early 1990s, engineers at Japan's Hitachi found ways to compress the reduced instruction sets so they fit in even smaller memory systems than CISCs.[9] In applications that do not need to run older binary software, compressed RISCs increasingly dominate sales.
(Pipelining was originally developed in the late 1950s by International Business Machines (IBM) on their 7030 (Stretch) mainframe computer.)
A similar idea, introduced only a few years later, was to execute multiple instructions in parallel on separate arithmetic logic units (ALUs).
Such methods are limited by the degree of instruction-level parallelism (ILP), the number of non-dependent instructions in the program code.
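For example (illustrative C only), the first group of operations below shares no operands and so exposes instruction-level parallelism, while the second forms a dependence chain that must execute serially no matter how many ALUs are available:

```c
#include <stdio.h>

/* Illustrative only: independent vs. dependent operation sequences. */
static void ilp_example(int a, int b, int c, int d, int out[2]) {
    /* High ILP: these three operations use disjoint inputs, so a
     * superscalar CPU could issue them to separate ALUs at once. */
    int x = a + b;
    int y = c + d;
    int z = a ^ c;

    /* Low ILP: each operation needs the previous result, so they must
     * execute one after another regardless of how many ALUs exist. */
    int w = a + b;
    w = w * c;
    w = w - d;

    out[0] = x + y + z;
    out[1] = w;
}

int main(void) {
    int out[2];
    ilp_example(1, 2, 3, 4, out);
    printf("%d %d\n", out[0], out[1]);   /* prints 12 and 5 */
    return 0;
}
```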
As designers tried to keep the multiple functional units of superscalar designs busy, operand register dependencies were found to be another limiting factor.
However, as Intel has demonstrated, the concepts can be applied to a complex instruction set computing (CISC) design, given enough time and money.
Now, with just-in-time (JIT) compilation in virtual machines used for many languages, slow code generation affects users as well.
This method combines the hardware simplicity, low power, and speed of a VLIW RISC with the compact main memory system and software backward compatibility provided by popular CISC.
Intel's Itanium chip is based on what they call an explicitly parallel instruction computing (EPIC) design.
However, it avoids some of the issues of scaling and complexity by explicitly providing, in each bundle of instructions, information about their dependencies.
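A simplified sketch of the idea, assuming a made-up bundle layout rather than the real IA-64 encoding, is a structure in which the compiler records a "stop" mask alongside a few instruction slots; slots not separated by a stop are guaranteed independent and may issue in parallel without hardware dependence checking:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical EPIC-style bundle -- NOT the real IA-64 format.  The point
 * is that the compiler, not the hardware, records which slots may issue
 * together. */
struct instr {
    uint8_t opcode;          /* e.g. 1 = ADD, 2 = MUL (made-up encoding)     */
    uint8_t dest, src1, src2;
};

struct bundle {
    struct instr slot[3];    /* a few instructions grouped by the compiler   */
    uint8_t      stop_mask;  /* bit i set = dependence barrier after slot i;
                                slots between barriers are independent and
                                need no hardware dependence checking         */
};

int main(void) {
    /* Slots 0 and 1 are independent; slot 2 consumes their results, so the
     * compiler places a stop after slot 1 (bit 1 of the mask). */
    struct bundle b = {
        .slot = { {1, 3, 1, 2}, {1, 6, 4, 5}, {2, 7, 3, 6} },
        .stop_mask = 0x2
    };
    printf("stop mask: %#x\n", b.stop_mask);
    return 0;
}
```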
Loosely knit communities like OpenCores and RISC-V have recently announced fully open CPU architectures such as OpenRISC, which can be readily implemented on FPGAs or in custom-produced chips by anyone, with no license fees; even established processor makers like Sun Microsystems have released processor designs (e.g., OpenSPARC) under open-source licenses.
Instead, stages of the CPU are coordinated using logic devices called pipeline controls or FIFO sequencers.
Even so, several asynchronous CPUs have been built.
In theory, an optical computer's components could directly connect through a holographic or phased open-air switching system.
Optical wavelength superposition could allow data lanes and logic densities many orders of magnitude greater than electronics can offer, with no added space or copper wires.
The main problems with this approach are that, for the foreseeable future, electronic computing elements are faster, smaller, cheaper, and more reliable.
Early experimental work has been done on using ion-based chemical reactions instead of electronic or photonic actions to implement elements of a logic processor.