Computer architecture

The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine.[3]

While building the computer Z1 in 1936, Konrad Zuse described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e., the stored-program concept.[4][5] Two other early and important examples are John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements, and Alan Turing's more detailed Proposed Electronic Calculator for the Automatic Computing Engine, also from 1945.

The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959.[8] Subsequently, Brooks, a Stretch designer, opened Chapter 2 of a book called Planning a Computer System: Project Stretch by stating, "Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints."[9]

Brooks went on to help develop the IBM System/360 line of computers, in which "architecture" became a noun defining "what the user needs to know".[11]

The earliest computer architectures were designed on paper and then directly built into the final hardware form.

Software tools, such as compilers, translate high-level programming languages into instructions that the processor can understand.
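To make this translation step concrete, here is a toy sketch of what a compiler does at its core: it turns a high-level arithmetic expression into a linear sequence of instructions for a simple stack machine. The instruction names (PUSH, LOAD, ADD, etc.) and the stack-machine design are illustrative inventions, not any real ISA.

```python
import ast

def compile_expr(source):
    """Translate a Python arithmetic expression into stack-machine instructions."""
    ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL"}
    program = []

    def emit(node):
        if isinstance(node, ast.BinOp):
            emit(node.left)          # code for the left operand first
            emit(node.right)         # then the right operand
            program.append((ops[type(node.op)],))  # operator acts on top of stack
        elif isinstance(node, ast.Constant):
            program.append(("PUSH", node.value))
        elif isinstance(node, ast.Name):
            program.append(("LOAD", node.id))
        else:
            raise ValueError(f"unsupported construct: {node!r}")

    emit(ast.parse(source, mode="eval").body)
    return program

def run(program, env):
    """A minimal stack machine that executes the compiled instructions."""
    stack = []
    for instr in program:
        op = instr[0]
        if op == "PUSH":
            stack.append(instr[1])
        elif op == "LOAD":
            stack.append(env[instr[1]])   # read a named variable
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "SUB":
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()
```

For instance, `compile_expr("a + b * 2")` produces the sequence LOAD a, LOAD b, PUSH 2, MUL, ADD, which `run` evaluates to 11 when a = 3 and b = 4. Real compilers add parsing, optimization, and register allocation on top of this basic shape.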

Besides instructions, the ISA defines items in the computer that are available to a program—e.g., data types, registers, addressing modes, and memory.

Modern emulators can measure size, cost, and speed to determine whether a particular ISA is meeting its goals.

Counting machine-language instructions would be misleading, because instructions in different ISAs can do varying amounts of work.
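The usual way to account for this is the "iron law" of processor performance: execution time equals instruction count times average cycles per instruction, divided by clock rate. The sketch below uses invented, hypothetical numbers purely to show why raw instruction counts mislead: the design that executes more instructions can still finish sooner.

```python
def cpu_time(instruction_count, cycles_per_instruction, clock_hz):
    """Iron law of processor performance: time = IC * CPI / clock rate."""
    return instruction_count * cycles_per_instruction / clock_hz

# Hypothetical workloads on two hypothetical 2 GHz designs: the RISC-style
# ISA needs more instructions for the same program, but each instruction
# completes in fewer cycles on average.
risc = cpu_time(instruction_count=1_500_000, cycles_per_instruction=1.2, clock_hz=2e9)
cisc = cpu_time(instruction_count=1_000_000, cycles_per_instruction=2.5, clock_hz=2e9)
```

With these numbers the RISC-style design runs the program in 0.9 ms versus 1.25 ms, despite executing 50% more instructions, which is why designers compare execution time or benchmark scores rather than instruction counts.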

Interrupt latency is the guaranteed maximum response time of the system to an electronic event (such as a disk drive signaling that it has finished moving some data).

For example, one system might handle scientific applications quickly, while another might render video games more smoothly.

Furthermore, designers may target and add special features to their products, through hardware or software, that permit a specific benchmark to execute quickly but do not offer similar advantages to general tasks.

The typical measure of power efficiency in computer architecture is MIPS/W (millions of instructions per second per watt).
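The metric is simple arithmetic: instruction throughput divided by power draw. The sketch below computes it for two entirely hypothetical chips to show why a much slower processor can still be far more power efficient, which is the trade-off embedded designs exploit.

```python
def mips_per_watt(instructions_per_second, watts):
    """Power efficiency: millions of instructions per second per watt."""
    return (instructions_per_second / 1e6) / watts

# Hypothetical figures: the desktop core is 25x faster, but the embedded
# core does far more work per joule.
desktop = mips_per_watt(instructions_per_second=50_000e6, watts=65.0)   # ~769 MIPS/W
embedded = mips_per_watt(instructions_per_second=2_000e6, watts=0.5)    # 4000 MIPS/W
```

Under these assumed numbers the embedded core delivers roughly five times the work per watt, even though its raw throughput is a small fraction of the desktop's.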

Recent processor designs have shown this emphasis as they put more focus on power efficiency rather than cramming as many transistors into a single chip as possible.

[19] In the world of embedded computers, power efficiency has long been an important goal next to throughput and latency.

Over the past few years, clock frequency has increased more slowly than power reduction has improved.

This has been driven by the end of Moore's Law and demand for longer battery life and reductions in size for mobile technology.

Block diagram of a basic computer with uniprocessor CPU. Black lines indicate control flow, whereas red lines indicate data flow. Arrows indicate the direction of flow.