Energy proportional computing

The concept was first proposed in 2007 by Google engineers Luiz André Barroso and Urs Hölzle, who urged computer architects to design servers that consume power in proportion to the work they perform, making them much more energy efficient in the datacenter setting.

A critical issue is high static power,[1][4] which means that the computer consumes significant energy even when it is idle.[1][4]

This can be acceptable for traditional high-performance computing systems and workloads, which try to extract the maximum possible utilization from the machines, operating them in the regime where they are most efficient.

For workloads that have frequent and intermittent bursts of activity, such as web search queries, such high static power prevents the use of deep low-power states without incurring significant latency penalties, which may be unacceptable for the application.
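The cost of high static power can be illustrated with the commonly used linear server power model; the wattage figures below are hypothetical and chosen only for illustration:

```python
# Linear server power model: P(u) = P_idle + (P_peak - P_idle) * u,
# where u is utilization in [0, 1]. All figures are hypothetical.
P_IDLE = 100.0   # watts consumed even when the server does no work
P_PEAK = 200.0   # watts at 100% utilization

def power(u: float) -> float:
    """Power draw at utilization u under the linear model."""
    return P_IDLE + (P_PEAK - P_IDLE) * u

def efficiency(u: float) -> float:
    """Work per joule, normalized so that u = 1.0 gives efficiency 1.0."""
    return (u * P_PEAK) / power(u) if u > 0 else 0.0

# At the low utilizations typical of bursty datacenter workloads,
# most of the energy drawn is static-power overhead:
for u in (0.1, 0.3, 1.0):
    print(f"u={u:.0%}  power={power(u):.0f} W  relative efficiency={efficiency(u):.2f}")
```

An ideally energy-proportional server would have `P_IDLE = 0`, making the relative efficiency 1.0 at every utilization level.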

Owing to many innovations in low-power technology, devices, circuits, microarchitecture, and electronic design automation, today's CPUs are much improved in energy efficiency.

However, most of these innovations contribute to some combination of the two broad types of power management mentioned above, namely idle power-down and active performance scaling.[1][4]

Unlike CPUs, most other computer hardware components lack power management controls, especially controls that enable active performance scaling.
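The two CPU mechanisms can be contrasted with a toy model: idle power-down races through the work and then sleeps, while active performance scaling (e.g. DVFS) stretches the work out at reduced voltage and frequency, exploiting the roughly cubic drop in dynamic power (P ∝ C·V²·f, with V scaling roughly with f). The constants below are hypothetical:

```python
# Toy comparison of the two power-management styles for a fixed amount of
# work (in cycles) that must finish within a deadline. All constants are
# hypothetical and chosen only to illustrate the trade-off.
F_MAX = 2.0e9      # maximum clock frequency (Hz)
P_DYN_MAX = 60.0   # dynamic power at F_MAX (W)
P_STATIC = 20.0    # static power while powered on (W)
P_SLEEP = 2.0      # power in a deep idle state (W)

def race_to_idle_energy(work_cycles: float, deadline: float) -> float:
    """Idle power-down: run at full speed, then drop into a deep idle state."""
    busy = work_cycles / F_MAX
    return busy * (P_DYN_MAX + P_STATIC) + (deadline - busy) * P_SLEEP

def dvfs_energy(work_cycles: float, deadline: float) -> float:
    """Active performance scaling: spread the work over the whole deadline.
    With voltage scaling roughly linearly in frequency, dynamic power
    falls as the cube of the frequency ratio."""
    f = work_cycles / deadline
    ratio = f / F_MAX
    return deadline * (P_DYN_MAX * ratio**3 + P_STATIC)

work, deadline = 1.0e9, 1.0   # half a deadline's worth of full-speed work
print(f"race-to-idle: {race_to_idle_energy(work, deadline):.1f} J")  # 41.0 J
print(f"DVFS:         {dvfs_energy(work, deadline):.1f} J")          # 27.5 J
```

Which strategy wins depends on the relative sizes of static, dynamic, and sleep power; real processors combine both.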

Traditionally, dynamic voltage and frequency scaling of main-memory DRAM has not been possible due to limitations in the DDR JEDEC standards.[10] Thus scaling voltage and frequency, which is commonly done in CPUs, is considered difficult, impractical, or too risky with respect to data corruption to apply to memories.[13]

The same group also proposed redesigning the DDR3 interface to better support energy-proportional server memory without sacrificing peak bandwidth.

The main reason networking elements are not energy proportional is that they are conventionally always on[15] due to the way routing protocols are designed and the unpredictability of message traffic.[1][15]

In recent years, efforts in green networking have targeted energy-efficient Ethernet (including the IEEE 802.3az standard) and many other wired and wireless technologies.
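Energy-Efficient Ethernet (IEEE 802.3az) saves energy by placing the link in a Low Power Idle (LPI) state between transmissions; a rough duty-cycle estimate of the savings, with made-up power figures, looks like:

```python
# Rough duty-cycle estimate of Energy-Efficient Ethernet (IEEE 802.3az)
# savings: the link alternates between active transmission and Low Power
# Idle (LPI). Power figures are hypothetical.
P_ACTIVE = 1.0   # W while transmitting
P_LPI = 0.1      # W in Low Power Idle

def link_power(active_fraction: float) -> float:
    """Average link power for a given fraction of time spent transmitting."""
    return active_fraction * P_ACTIVE + (1.0 - active_fraction) * P_LPI

# A lightly used link approaches the LPI floor rather than drawing
# full power around the clock:
print(f"always-on link: {P_ACTIVE:.2f} W")
print(f"10% duty cycle: {link_power(0.1):.2f} W")
```

The transitions into and out of LPI take time, so, as with deep CPU idle states, the savings come at some latency cost.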

Some authors[15] have proposed that, to make datacenter networks more energy proportional, the routing elements need a greater dynamic power range.

For example, in hard drives, although the data is stored in a non-volatile magnetic state, the platters are typically kept spinning at constant RPM, which requires considerable power.

Even modern solid-state drives (SSDs) made with flash memory have shown signs of energy disproportionality.[20]

Databases are a common type of workload for datacenters, and they have unique requirements that make the use of idle low-power states difficult.

This is because improvements in aggregate energy proportionality can be accomplished largely with software reorganization, requiring minimal changes to the underlying hardware.
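One such software reorganization is workload consolidation: packing load onto as few machines as possible and powering the rest down, so the cluster as a whole behaves more proportionally even though individual servers do not. A minimal sketch, using a common linear per-server power model with hypothetical wattages:

```python
# Workload consolidation sketch: serve a total load (in units of one
# server's capacity) either spread evenly across the whole cluster or
# packed onto the fewest servers that can carry it. Figures hypothetical.
import math

P_IDLE, P_PEAK = 100.0, 200.0   # watts per server (illustrative)

def server_power(u: float) -> float:
    """Linear power model for one server at utilization u in [0, 1]."""
    return P_IDLE + (P_PEAK - P_IDLE) * u

def spread_power(load: float, n_servers: int) -> float:
    """All servers stay on, each lightly loaded."""
    return n_servers * server_power(load / n_servers)

def packed_power(load: float) -> float:
    """Only ceil(load) servers stay on, running near full utilization;
    the rest are powered off entirely."""
    n_on = math.ceil(load)
    return n_on * server_power(load / n_on)

load, cluster = 3.0, 10   # three servers' worth of work on ten servers
print(f"spread across 10: {spread_power(load, cluster):.0f} W")  # 1300 W
print(f"packed onto 3:    {packed_power(load):.0f} W")           # 600 W
```

The trade-off, as with the hardware idle states discussed earlier, is that powered-off capacity cannot absorb sudden load spikes without a wake-up delay.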

For this reason, energy proportionality can be important across a wide range of hardware and software applications, not just in datacenter settings.