[3] Main memory operates at high speed compared with mass storage, which is slower but less expensive per bit and higher in capacity.
Operating systems borrow RAM capacity for caching so long as it is not needed by running software.
Delay lines were constructed from a glass tube filled with mercury and plugged at each end with a quartz crystal; they stored bits of information as sound waves propagating through the mercury, with the quartz crystals acting as transducers to read and write the bits.
Magnetic-core memory was developed by Frederick W. Viehe and An Wang in the late 1940s, and improved by Jay Forrester and Jan A. Rajchman in the early 1950s, before being commercialized with the Whirlwind I computer in 1953.
[9] Semiconductor memory made from discrete devices was first shipped by Texas Instruments to the United States Air Force in 1961.
In the same year, the concept of solid-state memory on an integrated circuit (IC) chip was proposed by applications engineer Bob Norman at Fairchild Semiconductor.
In 1965, J. Wood and R. Ball of the Royal Radar Establishment proposed digital storage systems that use CMOS (complementary MOS) memory cells, in addition to MOSFET power devices for the power supply, switched cross-coupling, switches and delay-line storage.
[9] Toshiba introduced bipolar DRAM memory cells for its Toscal BC-1411 electronic calculator in 1965.
[18][19] While it offered improved performance, bipolar DRAM could not compete with the lower price of the then-dominant magnetic-core memory.
In 1966, Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory.
[17] In 1967, Dennard filed a patent for a single-transistor DRAM memory cell based on MOS technology.
[27][28] In 1967, Dawon Kahng and Simon Sze of Bell Labs proposed that the floating gate of a MOS semiconductor device could be used for the cell of a reprogrammable ROM, which led to Dov Frohman of Intel inventing EPROM (erasable PROM) in 1971.
[29] EEPROM (electrically erasable PROM) was developed by Yasuo Tarui, Yutaka Hayashi and Kiyoko Naga at the Electrotechnical Laboratory in 1972.
[35][36][37] Developments in technology and economies of scale have made possible so-called very large memory (VLM) computers.
SRAM retains its contents as long as the power is connected and may use a simpler interface, but commonly uses six transistors per bit.
Dynamic RAM is more complicated to interface and control, needing regular refresh cycles to prevent losing its contents, but it uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much lower per-bit costs.
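As a rough illustration of the density difference, the sketch below counts the switching devices needed for a given capacity under the simplified assumption of six transistors per SRAM cell versus one transistor and one capacitor per DRAM cell; peripheral circuitry such as decoders and sense amplifiers is ignored, and the capacity figure is chosen arbitrarily for the example.

```python
# Rough, simplified comparison of per-bit device counts for SRAM and DRAM.
# Assumes only the textbook cell structures (6-transistor SRAM cell;
# one-transistor, one-capacitor DRAM cell) and ignores decoders, sense
# amplifiers and other peripheral circuitry.

CAPACITY_BITS = 8 * 2**30  # one gibibyte of storage, expressed in bits

sram_transistors = 6 * CAPACITY_BITS   # six transistors per SRAM cell
dram_transistors = 1 * CAPACITY_BITS   # one access transistor per DRAM cell
dram_capacitors = 1 * CAPACITY_BITS    # plus one storage capacitor per cell

print(f"SRAM: {sram_transistors:.2e} transistors")
print(f"DRAM: {dram_transistors:.2e} transistors and {dram_capacitors:.2e} capacitors")
# For the same transistor budget, the one-transistor cell stores roughly six
# times as many bits, which is why DRAM reaches much higher densities.
```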
The term semi-volatile describes a memory that retains its data for some limited time after power is removed, but ultimately loses it.
[38] For example, STT-RAM can be made non-volatile by building large cells, but doing so raises the cost per bit and power requirements and reduces the write speed.
Using small cells improves cost, power, and speed, but leads to semi-volatile behavior.
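The cell-size trade-off can be made concrete with the commonly used thermal-stability model for magnetic memories, in which retention time grows roughly exponentially with the energy barrier of the storage element, and that barrier in turn scales with the cell's free-layer volume. The sketch below is only an illustrative back-of-the-envelope calculation under those assumptions; the constants are typical textbook values, not figures from this article.

```python
import math

# Back-of-the-envelope retention estimate using the Arrhenius/Neel relaxation
# model: t_retention ~ tau_0 * exp(delta), where delta is the thermal
# stability factor (energy barrier divided by k_B * T). Delta scales roughly
# with the free-layer volume, so larger cells retain data longer. The values
# below are illustrative assumptions, not measured data.

TAU_0_SECONDS = 1e-9        # attempt period, commonly taken as about 1 ns
SECONDS_PER_YEAR = 3.15e7

def retention_seconds(delta: float) -> float:
    """Estimated retention time for a given thermal stability factor."""
    return TAU_0_SECONDS * math.exp(delta)

for delta in (40, 60, 80):  # smaller cell -> smaller delta -> shorter retention
    years = retention_seconds(delta) / SECONDS_PER_YEAR
    print(f"delta = {delta}: ~{years:.2e} years of retention")
```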
In battery-backed memory, if power is off for an extended period of time, the battery may run out, resulting in data loss.
The operating system will place actively used data in RAM, which is much faster than hard disks.
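A minimal sketch of the underlying idea is shown below: keep recently used items in a small, fast tier and evict the least recently used item when that tier is full. Real operating systems use far more sophisticated page caches and replacement policies; the LRUCache class and the block names here are purely illustrative assumptions.

```python
from collections import OrderedDict

class LRUCache:
    """Toy least-recently-used cache standing in for a fast memory tier."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                    # miss: caller must fetch from slow storage
        self.items.move_to_end(key)        # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(capacity=2)
cache.put("block-1", b"...")
cache.put("block-2", b"...")
cache.get("block-1")                        # block-1 is now most recently used
cache.put("block-3", b"...")                # evicts block-2
```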