Reflection mapping

The first technique was sphere mapping, in which a single texture contains the image of the surroundings as reflected on a spherical mirror.

This technique often produces results that are superficially similar to those generated by raytracing, but it is less computationally expensive: the radiance value of the reflection is obtained by computing the angles of incidence and reflection and then performing a texture lookup, rather than by tracing a ray against the scene geometry and computing its radiance, which simplifies the GPU workload.
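As a minimal sketch of this idea (the vector type and function names are illustrative, not from any particular engine or API): given the normalized view direction I and surface normal N, the reflection direction is R = I − 2(N·I)N, and that single direction is all that is needed to index the environment texture.

```cpp
#include <cmath>

// Minimal 3-vector; structure and names are illustrative only.
struct Vec3 {
    float x, y, z;
};

float dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Reflect the (normalized) incident view direction I about the
// (normalized) surface normal N:  R = I - 2 (N . I) N
Vec3 reflect(const Vec3 &I, const Vec3 &N) {
    float d = 2.0f * dot(N, I);
    return { I.x - d * N.x, I.y - d * N.y, I.z - d * N.z };
}

// With reflection mapping, R is used directly as a texture coordinate
// (into a sphere map or cube map). A raytracer would instead intersect
// the ray (hitPoint, R) with the scene and shade the hit recursively,
// which is the more expensive step this technique avoids.
```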

This technique can also be used to make an otherwise flat surface appear textured, for example corrugated metal or brushed aluminium.

The texture image can be created by approximating this ideal setup (for example, by photographing a reflective sphere), by using a fisheye lens, or by prerendering a scene with a spherical mapping.

Because spherical maps are stored as azimuthal projections of the environments they represent, an abrupt point of singularity (a "black hole" effect) is visible in the reflection on the object: texel colors at or near the edge of the map are distorted because the map lacks the resolution to represent those points accurately.
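For concreteness, here is a sketch of the standard sphere-map lookup (the same formula used by OpenGL's sphere-map texture coordinate generation). The divisor m vanishes as the reflection vector approaches (0, 0, −1), the direction pointing straight back at the viewer, which is exactly the singularity described above: the entire rim of the map collapses to that one direction.

```cpp
#include <cmath>

// Map an eye-space reflection vector (rx, ry, rz) to sphere-map
// coordinates (u, v) in [0, 1]^2, using the standard formula.
void sphereMapUV(float rx, float ry, float rz, float &u, float &v) {
    // m is twice the length of (rx, ry, rz + 1). It goes to zero as the
    // reflection vector approaches (0, 0, -1), so every texel on the
    // rim of the map represents that single direction -- the source of
    // the "black hole" artifact.
    float m = 2.0f * std::sqrt(rx * rx + ry * ry + (rz + 1.0f) * (rz + 1.0f));
    u = rx / m + 0.5f;
    v = ry / m + 0.5f;
}
```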

Cube mapping addresses this limitation by storing the environment as six square textures, one per face of a cube. The viewing ray is reflected about the surface normal at the point where it intersects the object; the resulting reflected ray is then passed to the cube map to get the texel which provides the radiance value used in the lighting calculation.
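A sketch of how that lookup works in software (graphics APIs such as OpenGL perform this in hardware when sampling a cube map): the component of the reflected ray with the largest magnitude selects one of the six faces, and the remaining two components, divided by that magnitude, give the 2D coordinates within the face. The face numbering and orientation signs below are one illustrative convention, not a specification.

```cpp
#include <cmath>

// Identify which of the six cube-map faces a direction vector hits and
// compute face-local coordinates. Face numbering and sign conventions
// vary between APIs; this is one illustrative choice
// (0:+X 1:-X 2:+Y 3:-Y 4:+Z 5:-Z).
void cubeMapFaceUV(float rx, float ry, float rz,
                   int &face, float &u, float &v) {
    float ax = std::fabs(rx), ay = std::fabs(ry), az = std::fabs(rz);
    if (ax >= ay && ax >= az) {          // X-major: +X or -X face
        face = (rx > 0.0f) ? 0 : 1;
        u = -rz / ax * ((rx > 0.0f) ? 1.0f : -1.0f);
        v = -ry / ax;
    } else if (ay >= az) {               // Y-major: +Y or -Y face
        face = (ry > 0.0f) ? 2 : 3;
        u = rx / ay;
        v = rz / ay * ((ry > 0.0f) ? 1.0f : -1.0f);
    } else {                             // Z-major: +Z or -Z face
        face = (rz > 0.0f) ? 4 : 5;
        u = rx / az * ((rz > 0.0f) ? 1.0f : -1.0f);
        v = -ry / az;
    }
    // Remap from [-1, 1] to [0, 1] texture coordinates.
    u = 0.5f * (u + 1.0f);
    v = 0.5f * (v + 1.0f);
}
```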

In 1974, Edwin Catmull created an algorithm for "rendering images of bivariate surface patches"[6][7] which worked directly with their mathematical definition.

Further refinements were researched and documented by Bui Tuong Phong in 1975, and later by James Blinn and Martin Newell, who developed environment mapping in 1976; these developments, which refined Catmull's original algorithms, led them to conclude that "these generalizations result in improved techniques for generating patterns and texture".

[Image: An environment texture mapped onto models of spoons, giving the illusion that they reflect the world around them.]

[Image: A diagram depicting an apparent reflection produced by cube-mapped reflection. The map is actually projected onto the surface from the point of view of the observer. Highlights that raytracing would obtain by tracing the ray and determining the angle it makes with the normal can be "fudged" by painting them manually into the texture (or they may already appear there, depending on how the texture map was obtained), from where they are projected onto the mapped object along with the rest of the texture detail.]

[Image: Example of a three-dimensional model using cube-mapped reflection.]