Applications of this type of rendering include digital illustration, graphic design, 2D animation, desktop publishing and the display of user interfaces.[11][15]
Simulated lens flare and bloom are sometimes added to make the image appear subjectively brighter (although the design of real cameras tries to reduce these effects).[17]: 5.3
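For illustration, bloom is often approximated by isolating the brightest pixels, blurring them, and adding the blurred result back onto the image. The following is a minimal single-channel sketch of that idea; the threshold, the box blur, and the function name are illustrative choices rather than any particular renderer's implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative bloom pass over a single-channel image: keep only the bright
// part of each pixel, blur it, and add the blurred result back onto the image.
void addBloom(std::vector<float>& image, int width, int height,
              float threshold = 1.0f, int radius = 2) {
    std::vector<float> bright(image.size(), 0.0f);
    for (std::size_t i = 0; i < image.size(); ++i)
        bright[i] = std::max(0.0f, image[i] - threshold);    // bright-pass filter

    std::vector<float> blurred(image.size(), 0.0f);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -radius; dy <= radius; ++dy) {      // simple box blur
                for (int dx = -radius; dx <= radius; ++dx) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sy < 0 || sx >= width || sy >= height) continue;
                    sum += bright[sy * width + sx];
                    ++count;
                }
            }
            blurred[y * width + x] = sum / count;
        }
    }
    for (std::size_t i = 0; i < image.size(); ++i)
        image[i] += blurred[i];                               // composite bloom over the image
}
```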
Games and other real-time applications may use simpler and less realistic rendering techniques as an artistic or design choice, or to allow higher frame rates on lower-end hardware.[16]: 4.7 [17]: 3.7
Non-photorealistic rendering (NPR) uses techniques like edge detection and posterization to produce 3D images that resemble technical illustrations, cartoons, or other styles of drawing or painting.
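Posterization in this context usually means quantizing smooth shading into a small number of flat bands. Below is a minimal sketch assuming simple Lambertian shading with a single light; the band count and function name are arbitrary, and silhouette lines would typically be added in a separate edge-detection pass over depth or normal buffers.

```cpp
#include <algorithm>
#include <cmath>

// Quantize a Lambertian diffuse term (the dot product of surface normal and
// light direction) into a few flat bands, giving a cartoon-like, posterized
// look instead of a smooth gradient. Assumes bands >= 2.
float toonDiffuse(float nDotL, int bands = 4) {
    float d = std::max(0.0f, nDotL);           // surfaces facing away receive no light
    float level = std::floor(d * bands) / (bands - 1);
    return std::min(level, 1.0f);              // discrete shading level in [0, 1]
}
```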
The PostScript format (which is often credited with the rise of desktop publishing) provides a standardized, interoperable way to describe 2D graphics and page layout.
A realistic scene may require hundreds of items like household objects, vehicles, and trees, and 3D artists often utilize large libraries of models.
When rendering lower-resolution volumetric data without interpolation, the individual cubes or "voxels" may be visible, an effect sometimes used deliberately for game graphics.[16]: 13.3, 13.9 [18]: 1.3
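The blocky appearance comes from nearest-neighbor lookup: every sample falling inside a voxel returns the same value, while interpolating between neighboring voxels hides the cube boundaries. A sketch of both lookups on a dense grid (the VoxelGrid layout is a hypothetical example, and bounds checking is omitted):

```cpp
#include <cmath>
#include <vector>

// A dense voxel grid of scalar densities; this layout is only an example.
struct VoxelGrid {
    int nx, ny, nz;
    std::vector<float> data;                       // nx * ny * nz values
    float at(int x, int y, int z) const { return data[(z * ny + y) * nx + x]; }
};

// Nearest-neighbor lookup: every sample inside a voxel returns the same value,
// which is what makes the individual cubes visible.
float sampleNearest(const VoxelGrid& g, float x, float y, float z) {
    return g.at((int)std::round(x), (int)std::round(y), (int)std::round(z));
}

// Trilinear interpolation: blend the 8 surrounding voxels, hiding cube boundaries.
float sampleTrilinear(const VoxelGrid& g, float x, float y, float z) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    float fx = x - x0, fy = y - y0, fz = z - z0;
    auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
    float c00 = lerp(g.at(x0, y0,     z0),     g.at(x0 + 1, y0,     z0),     fx);
    float c10 = lerp(g.at(x0, y0 + 1, z0),     g.at(x0 + 1, y0 + 1, z0),     fx);
    float c01 = lerp(g.at(x0, y0,     z0 + 1), g.at(x0 + 1, y0,     z0 + 1), fx);
    float c11 = lerp(g.at(x0, y0 + 1, z0 + 1), g.at(x0 + 1, y0 + 1, z0 + 1), fx);
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
}
```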
A more recent, experimental approach is the description of scenes using radiance fields, which define the color, intensity, and direction of incoming light at each point in space.
Neural networks are typically used to generate and evaluate these approximations, sometimes using video frames, or a collection of photographs of a scene taken at different angles, as "training data".
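Conceptually, the field is just a function from a 3D position and viewing direction to an emitted color and a density, with the trained network standing in for that function; an image is then formed by integrating samples of the field along each camera ray. The sketch below shows that interface and the usual front-to-back compositing, with the step count, step size, and type names chosen only for illustration.

```cpp
#include <array>
#include <cmath>
#include <functional>

struct FieldSample {
    std::array<float, 3> color;   // radiance emitted at this point toward the viewer
    float density;                // how strongly this point absorbs/emits light
};

// A radiance field is a function of position and view direction. In practice
// it is a trained neural network; here it is just a placeholder callable.
using RadianceField =
    std::function<FieldSample(const std::array<float, 3>& pos,
                              const std::array<float, 3>& dir)>;

// March along a ray and composite samples front to back (standard volume
// rendering quadrature; the step count and step size are illustrative).
std::array<float, 3> renderRay(const RadianceField& field,
                               std::array<float, 3> origin,
                               std::array<float, 3> dir,
                               int steps = 64, float stepSize = 0.05f) {
    std::array<float, 3> result = {0.0f, 0.0f, 0.0f};
    float transmittance = 1.0f;                     // fraction of light not yet absorbed
    for (int i = 0; i < steps; ++i) {
        std::array<float, 3> p = {origin[0] + dir[0] * stepSize * i,
                                  origin[1] + dir[1] * stepSize * i,
                                  origin[2] + dir[2] * stepSize * i};
        FieldSample s = field(p, dir);
        float alpha = 1.0f - std::exp(-s.density * stepSize);
        for (int c = 0; c < 3; ++c)
            result[c] += transmittance * alpha * s.color[c];
        transmittance *= (1.0f - alpha);
    }
    return result;
}
```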
High-end rendering applications commonly use the OpenEXR file format, which can represent finer gradations of colors and high dynamic range lighting, allowing tone mapping or other adjustments to be applied afterwards without loss of quality.[31][32]
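Tone mapping is one such adjustment: it compresses high-dynamic-range values into the limited range a display can reproduce. A minimal sketch using the simple Reinhard operator followed by gamma encoding (the exposure and gamma values are only illustrative defaults):

```cpp
#include <cmath>

// Map a high-dynamic-range value (possibly much greater than 1) into [0, 1]
// for display, using the Reinhard operator plus gamma encoding.
float tonemap(float hdrValue, float exposure = 1.0f, float gamma = 2.2f) {
    float v = hdrValue * exposure;
    float mapped = v / (1.0f + v);              // Reinhard: compresses highlights smoothly
    return std::pow(mapped, 1.0f / gamma);      // gamma-encode for a typical display
}
```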
Renderers such as Blender and Pixar RenderMan support a large variety of configurable values called Arbitrary Output Variables (AOVs).[31][32]
The algorithms developed over the years follow a loose progression, with more advanced methods becoming practical as computing power and memory capacity increased.
An important special case of 2D rasterization is text rendering, which requires careful anti-aliasing and rounding of coordinates to avoid distorting the letterforms and preserve spacing, density, and sharpness.
The z-buffer requires additional memory (an expensive resource at the time it was invented) but simplifies the rasterization code and permits multiple passes.[48][49][36]: 553–570 [16]: 2.5.2
A drawback of the basic z-buffer algorithm is that each pixel ends up either entirely covered by a single object or filled with the background color, causing jagged edges in the final image.
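At its core, the z-buffer is a per-pixel depth test: a fragment is written only if it is nearer than whatever the pixel already holds, which also means each pixel ends up with a single winning surface unless extra samples are taken. A minimal sketch with single-channel color and depth increasing with distance (the data layout is illustrative, not any particular API):

```cpp
#include <limits>
#include <vector>

// Framebuffer with one color value and one depth value per pixel.
struct Framebuffer {
    int width, height;
    std::vector<float> color;
    std::vector<float> depth;
    Framebuffer(int w, int h)
        : width(w), height(h),
          color(w * h, 0.0f),
          depth(w * h, std::numeric_limits<float>::infinity()) {}
};

// Write a fragment only if it is nearer than what is already stored, so
// objects can be rasterized in any order and still occlude each other correctly.
void writeFragment(Framebuffer& fb, int x, int y, float z, float c) {
    int i = y * fb.width + x;
    if (z < fb.depth[i]) {        // depth test
        fb.depth[i] = z;
        fb.color[i] = c;
    }
}
```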
The shader does not (or cannot) directly access 3D data for the entire scene (this would be very slow, and would result in an algorithm similar to ray tracing), so a variety of techniques have been developed to render effects like shadows and reflections using only texture mapping and multiple passes.
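Shadow mapping is a common example of such a multi-pass technique: a first pass renders depth from the light's point of view into a texture, and the main pass compares each shaded point's distance to the light against the stored depth. The sketch below shows only the comparison step; the ShadowMap structure, the bias value, and the assumption that (u, v) are already valid light-space texture coordinates in [0, 1] are all illustrative.

```cpp
#include <vector>

// Depth image rendered from the light's point of view in an earlier pass.
struct ShadowMap {
    int size;
    std::vector<float> depth;                 // distance from the light, per texel
    float at(int x, int y) const { return depth[y * size + x]; }
};

// During the main pass: is the point at light-space coordinates (u, v), at
// distance 'distToLight' from the light, hidden behind something nearer to it?
bool inShadow(const ShadowMap& sm, float u, float v, float distToLight,
              float bias = 0.005f) {
    int x = (int)(u * (sm.size - 1));         // map [0, 1] coordinates to texels
    int y = (int)(v * (sm.size - 1));
    // If something closer to the light was recorded here, this point is shadowed.
    return distToLight - bias > sm.at(x, y);
}
```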
He also tried rendering the density of illumination by casting random rays from the light source towards the object and plotting the intersection points (similar to the later technique called photon mapping).
K-d trees are a special case of binary space partitioning, which was frequently used in early computer graphics (it can also generate a rasterization order for the painter's algorithm).
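The ordering comes from a simple recursive rule: at each splitting plane, draw everything on the far side of the plane (relative to the camera) first, then any polygons lying on the plane itself, then the near side. A sketch of that traversal, with an illustrative node layout and the polygon data omitted:

```cpp
#include <functional>
#include <vector>

struct Plane { float a, b, c, d; };           // plane a*x + b*y + c*z + d = 0
struct Vec3 { float x, y, z; };
struct Polygon { /* vertex data omitted in this sketch */ };

// One node of a BSP tree: a splitting plane, the polygons lying on it,
// and the two half-spaces it separates.
struct BspNode {
    Plane split;
    std::vector<Polygon> onPlane;
    BspNode* front = nullptr;                 // half-space where a*x + b*y + c*z + d > 0
    BspNode* back = nullptr;
};

// Emit polygons in back-to-front order relative to the camera position,
// which is exactly the order the painter's algorithm needs.
void drawBackToFront(const BspNode* node, const Vec3& camera,
                     const std::function<void(const Polygon&)>& draw) {
    if (!node) return;
    float side = node->split.a * camera.x + node->split.b * camera.y +
                 node->split.c * camera.z + node->split.d;
    const BspNode* nearSide = (side >= 0) ? node->front : node->back;
    const BspNode* farSide  = (side >= 0) ? node->back  : node->front;
    drawBackToFront(farSide, camera, draw);   // everything farther away first
    for (const Polygon& p : node->onPlane) draw(p);
    drawBackToFront(nearSide, camera, draw);  // nearer geometry drawn last, on top
}
```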
However, if this procedure is repeated recursively to simulate realistic indirect lighting, and if more than one sample is taken at each surface point, the tree of rays quickly becomes huge: with ten secondary rays at each intersection and five levels of recursion, for example, a single camera ray can spawn on the order of 10^5 = 100,000 descendant rays.
Radiosity is considered a physically-based method, meaning that it aims to simulate the flow of light in an environment using equations and experimental data from physics; however, it often assumes that all surfaces are opaque and perfectly Lambertian, which reduces realism and limits its applicability.
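For orientation, the classical discrete radiosity equation (notation varies between sources) expresses the radiosity B_i leaving patch i as

B_i = E_i + \rho_i \sum_j F_{ij} B_j

where E_i is the patch's own emission, \rho_i its diffuse reflectivity, and F_{ij} the form factor giving the fraction of light leaving patch i that reaches patch j; describing each patch by a single view-independent value is exactly what the Lambertian assumption permits.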
[39][65][67] In its basic form, path tracing is inefficient (requiring too many samples) for rendering caustics and scenes where light enters indirectly through narrow spaces.
[65][14] This later work was summarized and expanded upon in Eric Veach's 1997 PhD thesis, which helped raise interest in path tracing in the computer graphics community.
The basic concepts are moderately straightforward but intractable to calculate, and a single elegant algorithm or approach has remained elusive for more general-purpose renderers.
A renderer can simulate a wide range of light brightness and color, but current displays – movie screen, computer monitor, etc. – cannot reproduce such a wide range, so some of it must be discarded or compressed.
Advanced DPUs such as Evans & Sutherland's Line Drawing System-1 (and later models produced into the 1980s) incorporated 3D coordinate transformation features to accelerate rendering of wire-frame images.
[52] In 1981, James H. Clark and Marc Hannah designed the Geometry Engine, a VLSI chip for performing some of the steps of the 3D rasterization pipeline, and started the company Silicon Graphics (SGI) to commercialize this technology.
[86][87] Home computers and game consoles in the 1980s contained graphics coprocessors that were capable of scrolling and filling areas of the display, and drawing sprites and lines, though they were not useful for rendering realistic images.
GPUs are general-purpose processors, like CPUs, but they are designed for tasks that can be broken into many small, similar, mostly independent sub-tasks (such as rendering individual pixels) and performed in parallel.[16]: ch3 [91]
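The same decomposition can be sketched on a CPU: each pixel (here, each interleaved row) is an independent piece of work that can be handed to any available worker. This is only a CPU-flavored analogy of the idea, not how GPUs are actually programmed; the shading function and the way the work is divided are placeholders.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Placeholder per-pixel work; a real renderer would run its rasterization
// or ray-tracing logic for this pixel instead.
float shadePixel(int x, int y) { return ((x ^ y) & 1) ? 1.0f : 0.0f; }

// Split the image into interleaved rows and shade each group on its own thread.
// Pixels are independent, so no synchronization is needed while shading.
// The image vector is assumed to already hold width * height entries.
void renderParallel(std::vector<float>& image, int width, int height) {
    unsigned numThreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < numThreads; ++t) {
        workers.emplace_back([&, t]() {
            for (int y = (int)t; y < height; y += (int)numThreads)
                for (int x = 0; x < width; ++x)
                    image[y * width + x] = shadePixel(x, y);
        });
    }
    for (auto& w : workers) w.join();
}
```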
Due to their origins, GPUs typically still provide specialized hardware acceleration for some steps of a traditional 3D rasterization pipeline, including hidden surface removal using a z-buffer, and texture mapping with mipmaps, but these features are no longer always used.
As an example of code that meets this requirement: when rendering a small square of pixels in a simple ray-traced image, all threads will likely be intersecting rays with the same object and performing the same lighting computations.