A framebuffer (frame buffer, or sometimes framestore) is a portion of random-access memory (RAM)[1] containing a bitmap that drives a video display.
The total amount of memory required for the framebuffer depends on the resolution of the output signal, and on the color depth or palette size.
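As a rough illustration, here is a minimal C sketch of the arithmetic, assuming a direct-color mode; the resolution and depth are example values only:

```c
#include <stdio.h>

/* Framebuffer memory requirement: one entry per pixel, sized by color depth.
   For a direct-color display: bytes = width * height * (bits_per_pixel / 8). */
int main(void) {
    unsigned width = 1920, height = 1080, bits_per_pixel = 32;
    unsigned long bytes = (unsigned long)width * height * (bits_per_pixel / 8);
    printf("%u x %u at %u bpp needs %lu bytes (%.1f MiB)\n",
           width, height, bits_per_pixel,
           bytes, bytes / (1024.0 * 1024.0));
    return 0;
}
```

In a palette mode, each per-pixel entry is instead a small index (for example, 8 bits) into a color lookup table, so the per-pixel memory cost drops accordingly.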
Computer researchers had long discussed the theoretical advantages of a framebuffer but were unable to produce a machine with sufficient memory at an economically practicable cost.[8] In 1969, A. Michael Noll of Bell Labs implemented a scanned display with a framebuffer, using magnetic-core memory.
It was capable of producing resolutions of up to 512 by 512 pixels in 8-bit grayscale, and became a boon for graphics researchers who did not have the resources to build their own framebuffer.[13] Each framebuffer was connected to an RGB color output (one for red, one for green and one for blue), with a Digital Equipment Corporation PDP-11/04 minicomputer controlling the three devices as one.
The rapid improvement of integrated-circuit technology made it possible for many of the home computers of the late 1970s to contain low-color-depth framebuffers.
Amiga computers, created in the 1980s, featured special design attention to graphics performance and included a unique Hold-And-Modify (HAM) framebuffer capable of displaying 4096 colors.
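A minimal sketch of how the original 6-bit HAM (HAM6) encoding is commonly documented to work; the control-bit assignments below follow the usual descriptions of the chipset and should be treated as illustrative rather than authoritative:

```c
#include <stdint.h>

/* Sketch of Amiga HAM6 ("Hold-And-Modify") pixel decoding, assuming the
   commonly documented 6-bit encoding: the top 2 control bits select the
   operation and the low 4 bits carry the data. Colors are 12-bit RGB
   (4 bits per component), which yields the 4096-color gamut. */
typedef struct { uint8_t r, g, b; } Rgb4;   /* each component 0..15 */

Rgb4 ham6_decode(uint8_t code, Rgb4 prev, const Rgb4 palette[16]) {
    uint8_t ctrl = (code >> 4) & 0x3;
    uint8_t data = code & 0xF;
    Rgb4 out = prev;                     /* "hold" the previous pixel's color */
    switch (ctrl) {
    case 0: out = palette[data]; break;  /* set from the 16-entry palette */
    case 1: out.b = data; break;         /* "modify" blue only            */
    case 2: out.r = data; break;         /* "modify" red only             */
    case 3: out.g = data; break;         /* "modify" green only           */
    }
    return out;
}
```

Because each pixel can change only one color component relative to its left neighbor, the full 4096-color range is available at a fraction of the memory a true 12-bit framebuffer would need.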
SGI, Sun Microsystems, HP, DEC and IBM all released framebuffers for their workstation computers in this period.
Framebuffers in personal and home computers commonly operated under sets of defined display modes, which reconfigure the hardware to output different resolutions, color depths, memory layouts and refresh rate timings.
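Illustratively, a display mode bundles these parameters together; the struct and mode table below are hypothetical, with invented names, and are not taken from any real driver:

```c
/* Hypothetical illustration of the parameters a display mode ties together.
   Names are invented for this example. */
struct display_mode {
    unsigned width, height;   /* resolution in pixels               */
    unsigned bits_per_pixel;  /* color depth                        */
    unsigned pitch;           /* memory layout: bytes per scanline  */
    unsigned refresh_hz;      /* refresh rate timing                */
};

static const struct display_mode modes[] = {
    {  640,  480,  8,  640, 60 },   /* palette-indexed, VGA-like mode */
    { 1024,  768, 16, 2048, 75 },
    { 1920, 1080, 32, 7680, 60 },
};
```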
In the world of Unix machines and operating systems, such conveniences were usually eschewed in favor of directly manipulating the hardware settings.
This manipulation was far more flexible in that any resolution, color depth and refresh rate could be selected, limited only by the memory available to the framebuffer.
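On Linux, for example, the fbdev interface exposes this kind of direct access. The sketch below assumes a /dev/fb0 device the caller has permission to open and a mode of 32 bits per pixel:

```c
#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Minimal sketch using the Linux fbdev interface: query the current mode,
   map the framebuffer into the process, and plot one pixel. */
int main(void) {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    ioctl(fd, FBIOGET_VSCREENINFO, &var);   /* resolution, depth, timings   */
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);   /* memory layout (line length)  */
    printf("%ux%u, %u bpp\n", var.xres, var.yres, var.bits_per_pixel);

    unsigned char *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* White pixel at (100, 50): address = y * pitch + x * bytes-per-pixel */
    unsigned x = 100, y = 50;
    *(unsigned int *)(fb + y * fix.line_length + x * 4) = 0xFFFFFFFF;

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}
```

Changing the mode itself follows the same pattern: the program fills in a fb_var_screeninfo with the desired resolution, depth and timings and submits it with the FBIOPUT_VSCREENINFO ioctl.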
The video processor on the card forms the screen image and stores it in the framebuffer as a large bitmap in RAM.
In a technique known generally as double buffering or more specifically as page flipping, the framebuffer uses half of its memory to display the current frame; while that half is being displayed, the other half is filled with data for the next frame, and the two halves then swap roles.
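A minimal sketch of the idea, with set_scanout_base() as a hypothetical stand-in for whatever register write or driver call repoints the display at a buffer:

```c
#include <stddef.h>
#include <stdint.h>

/* Page-flipping sketch: the display scans out the "front" half of memory
   while software draws into the "back" half; once per frame (ideally during
   vertical blanking) the two roles are swapped. */
typedef struct {
    uint32_t *front;   /* currently being scanned out */
    uint32_t *back;    /* currently being drawn into  */
    size_t    pixels;  /* size of each half, in pixels */
} PageFlipper;

void flip(PageFlipper *pf, void (*set_scanout_base)(uint32_t *)) {
    uint32_t *tmp = pf->front;
    pf->front = pf->back;
    pf->back = tmp;
    set_scanout_base(pf->front);   /* display now shows the finished frame */
}
```

Because the swap is just a pointer change, the viewer never sees a half-drawn frame, which is the main point of the technique.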
As the demand for better graphics increased, hardware manufacturers created a way, commonly called graphics acceleration, to decrease the amount of CPU time required to fill the framebuffer. Some graphics cards also add a slight blur to the output signal that makes aliasing of the rasterized graphics much less obvious.
With a framebuffer, the electron beam (if the display technology uses one) is commanded to perform a raster scan, the way a television renders a broadcast signal.
The color information for each point thus displayed on the screen is pulled directly from the framebuffer during the scan, creating a set of discrete picture elements, i.e. pixels.
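Conceptually, the scan-out stage behaves like the loop below; emit_to_display() is a hypothetical stand-in for the video output circuitry:

```c
#include <stdint.h>

/* Sketch of what scan-out hardware does conceptually: walk the framebuffer
   in raster order (left to right, top to bottom) and emit each stored color.
   pitch_pixels is the scanline stride, which may exceed the visible width. */
void raster_scan(const uint32_t *framebuffer, unsigned width, unsigned height,
                 unsigned pitch_pixels, void (*emit_to_display)(uint32_t)) {
    for (unsigned y = 0; y < height; y++)        /* one pass per scanline */
        for (unsigned x = 0; x < width; x++)
            emit_to_display(framebuffer[y * pitch_pixels + x]);
}
```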
Likewise, framebuffers differ from the technology used in early text mode displays, where a buffer holds codes for characters, not individual pixels.
The video display device performs the same raster scan as with a framebuffer but generates the pixels of each character in the buffer as it directs the beam.
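A sketch of a legacy VGA color text-mode cell illustrates the difference, assuming the traditional two-byte character-plus-attribute layout; writing to the 0xB8000 region only works in an environment where that address is actually mapped, such as a hobby operating-system kernel:

```c
#include <stdint.h>

/* Legacy VGA color text mode: each cell is two bytes, a character code and
   an attribute byte (foreground/background colors), not per-pixel data.
   On PC hardware the buffer traditionally lives at physical 0xB8000. */
#define TEXT_COLS 80
#define TEXT_ROWS 25

void put_char_at(volatile uint16_t *text_buf, unsigned row, unsigned col,
                 char ch, uint8_t attr) {
    /* low byte: character code; high byte: attribute */
    text_buf[row * TEXT_COLS + col] = ((uint16_t)attr << 8) | (uint8_t)ch;
}
```

A full 80-by-25 text screen thus needs only 4000 bytes, since the pixel patterns for each glyph are generated by the hardware from a character ROM rather than stored per pixel.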