The history of computing hardware starting in 1960 is marked by the conversion from vacuum tubes to solid-state devices such as transistors and then integrated circuit (IC) chips.
Between about 1953 and 1959, discrete transistors became reliable and economical enough to make further vacuum tube computers uncompetitive.
Metal–oxide–semiconductor (MOS) large-scale integration (LSI) technology subsequently led to the development of semiconductor memory in the mid-to-late 1960s and then the microprocessor in the early 1970s.
The marketplace was dominated by IBM and the seven dwarfs (Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA), all of which offered second-generation computers in the 1960s; however, some smaller companies also made significant contributions.
Also, towards the end of the second generation, Digital Equipment Corporation (DEC) was a serious contender in the small and medium machine marketplace.
The second generation saw both simpler designs, e.g., the channels on the CDC 6000 series had no DMA, and more sophisticated ones, e.g., the 7909 on the IBM 7090 had limited computational capability, conditional branching and an interrupt system.[2]
In 1959, Robert Noyce at Fairchild Semiconductor invented the monolithic integrated circuit (IC) chip.
Smaller, affordable hardware also brought about the development of important new operating systems such as Unix.
In November 1966, Hewlett-Packard introduced the 2116A[24][25] minicomputer, one of the first commercial 16-bit computers.
The popularity of 16-bit computers, such as the Hewlett-Packard 21xx series and the Data General Nova, led the way toward word lengths that were multiples of the 8-bit byte.
Computers of the 1965 IBM System/360 mainframe family are sometimes called third-generation computers; however, their logic consisted primarily of SLT hybrid circuits, which contained discrete transistors and diodes interconnected on a substrate with printed wires and printed passive components; the S/360 M85 and M91 did use ICs for some of their circuits.
By 1971, the ILLIAC IV supercomputer was the fastest computer in the world, using about a quarter-million small-scale ECL logic gate integrated circuits to make up sixty-four parallel data processors.[29]
Third-generation computers were offered well into the 1990s; for example, the IBM ES9000 9X2, announced in April 1994,[30] used 5,960 ECL chips to make a 10-way processor.[31]
Other third-generation computers offered in the 1990s included the DEC VAX 9000 (1989), built from ECL gate arrays and custom chips,[32] and the Cray T90 (1995).
Third-generation minicomputers were essentially scaled-down versions of mainframe computers, designed to perform similar tasks but on a smaller and more accessible scale.[34]
Due to rapid MOSFET scaling, MOS IC chips increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s.[33]
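As a rough illustration of the growth rate Moore's law describes (illustrative arithmetic only; the starting point of about 64 components per chip in 1965 and the roughly annual doubling come from Moore's original 1965 projection, not from this article):

\[
N(t) \approx N_{1965} \cdot 2^{\,t-1965}, \qquad N(1968) \approx 64 \cdot 2^{3} = 512,
\]

that is, on the order of hundreds of transistors per chip by the late 1960s, consistent with the LSI figure above.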
On November 15, 1971, Intel released the world's first single-chip microprocessor, the 4004, fabricated with MOS LSI technology.
Federico Faggin led its development using silicon-gate MOS technology, working with Ted Hoff, Stanley Mazor and Masatoshi Shima.
The powerful supercomputers of the era were at the other end of the computing spectrum from the microcomputers, and they also used integrated circuit technology.
Users were experienced specialists who did not usually interact with the machine itself, but instead prepared tasks for the computer on off-line equipment, such as card punches.
By today's standards, minicomputers were physically large (about the size of a refrigerator) and costly (typically tens of thousands of US dollars), and thus were rarely purchased by individuals.
However, they were much smaller, less expensive, and generally simpler to operate than the mainframe computers of the time, and thus affordable by individual laboratories and research projects.
Minicomputers largely freed these organizations from the batch processing and bureaucracy of a commercial or university computing center.
The minicomputer Xerox Alto (1973) was a landmark step in the development of personal computers, because of its graphical user interface, bit-mapped high-resolution screen, large internal and external memory storage, mouse, and special software.
Originally, the French Micral N microcomputer (1973) had been designed by Gernelle, Lacombe, Beckmann and Benchitrite for the Institut National de la Recherche Agronomique to automate hygrometric measurements.[41]
The MITS Altair, the first commercially successful microprocessor kit, was featured on the cover of Popular Electronics magazine in January 1975.
The Altair and IMSAI were essentially scaled-down minicomputers and were incomplete: connecting a keyboard or teleprinter to them required heavy, expensive "peripherals".
CP/M-80 was the first popular microcomputer operating system to be used by many different hardware vendors, and many software packages were written for it, such as WordStar and dBase II.
Out of hobbyists' informal house meetings grew the Homebrew Computer Club, where members talked about what they had done, exchanged schematics and software, and demonstrated their systems.
By 1977 pre-assembled systems such as the Apple II, Commodore PET, and TRS-80 (later dubbed the "1977 Trinity" by Byte Magazine)[44] began the era of mass-market home computers; much less effort was required to obtain an operating computer, and applications such as games, word processing, and spreadsheets began to proliferate.