Ray tracing (graphics)

In 3D computer graphics, ray tracing is a technique for modeling light transport for use in a wide variety of rendering algorithms for generating digital images.

Ray tracing is capable of simulating a variety of optical effects,[3] such as reflection, refraction, soft shadows, scattering, depth of field, motion blur, caustics, ambient occlusion and dispersion phenomena (such as chromatic aberration).

Later, in 1971, Goldstein and Nagel of MAGI (Mathematical Applications Group, Inc.)[9] published "3-D Visual Simulation", wherein ray tracing was used to make shaded pictures of solids.

The helicopter was programmed to undergo a series of maneuvers, including turns, take-offs, and landings, until it was eventually shot down and crashed. A CDC 6600 computer was used.

[10] Another early instance of ray casting came in 1976, when Scott Roth created a flip book animation in Bob Sproull's computer graphics course at Caltech.

Roth's computer program noted an edge point at a pixel location if the ray intersected a bounded plane different from that of its neighbors.

Roth extended the framework, introduced the term ray casting in the context of computer graphics and solid modeling, and in 1982 published his work while at GM Research Labs.

It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it.
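The per-pixel procedure above can be sketched in a few lines. The following is a minimal, illustrative example (the scene, a single hard-coded sphere, and all names are assumptions for the sketch): each pixel is mapped onto a virtual screen one unit in front of the eye, a primary ray is cast through it, and the pixel is colored by whether the ray hits anything.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None if the ray misses."""
    # Solve |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so the quadratic's a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width, height):
    """Cast one primary ray per pixel from the eye through a virtual screen."""
    eye = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0  # hypothetical scene
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel center to a point on a screen plane at z = -1.
            u = (x + 0.5) / width * 2.0 - 1.0
            v = 1.0 - (y + 0.5) / height * 2.0
            length = math.sqrt(u * u + v * v + 1.0)
            direction = (u / length, v / length, -1.0 / length)
            hit = ray_sphere_hit(eye, direction, sphere_center, sphere_radius)
            row.append(1.0 if hit else 0.0)  # white where the sphere is visible
        image.append(row)
    return image
```

A full renderer would replace the final hit test with shading: at the hit point, secondary rays are spawned toward lights and along reflection and refraction directions.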

If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color).
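The direction of the transmitted ray follows Snell's law. A small sketch of the standard computation (the function name and vector convention are assumptions; directions and normals are unit vectors, with the normal pointing against the incident ray):

```python
import math

def refract(direction, normal, n1, n2):
    """Refract a unit incident direction at a surface with unit normal.

    Returns the refracted unit direction, or None on total internal
    reflection. n1 and n2 are the refractive indices on the incident
    and transmitted sides of the surface.
    """
    cos_i = -sum(d * n for d, n in zip(direction, normal))
    ratio = n1 / n2
    # Snell's law: n1 * sin(i) = n2 * sin(t)
    sin2_t = ratio * ratio * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray exists
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(ratio * d + (ratio * cos_i - cos_t) * n
                 for d, n in zip(direction, normal))
```

When `refract` returns None the renderer typically falls back to pure reflection, which is how total internal reflection appears in ray-traced glass.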

One important advantage ray casting offered over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and spheres.

[21] A serious disadvantage of ray tracing is performance (though it can in theory be faster than traditional scanline rendering depending on scene complexity vs. number of pixels on-screen).

Path tracing is an algorithm for evaluating the rendering equation and thus gives a higher-fidelity simulation of real-world lighting.
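The rendering equation that path tracing estimates can be written in its standard hemispherical form (Kajiya, 1986): the light leaving a point is its emission plus all incoming light weighted by the surface's reflectance.

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\,\mathrm{d}\omega_i
```

Here $L_o$ is outgoing radiance, $L_e$ emitted radiance, $f_r$ the bidirectional reflectance distribution function (BRDF), $L_i$ incoming radiance, and $\mathbf{n}$ the surface normal; path tracing evaluates the integral by Monte Carlo sampling of random ray paths.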

Caustics are bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface.

An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon.

[26] To the right is an image showing a simple example of a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm.

[27] Enclosing groups of objects in sets of bounding volume hierarchies (BVH) decreases the amount of computations required for ray tracing.
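The building block of a BVH traversal is a cheap ray-versus-bounding-box test: if a ray misses a node's box, every object inside can be skipped. A minimal sketch of the common "slab" test (the function name and argument layout are assumptions for illustration):

```python
def ray_aabb_intersect(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray origin + t*dir hit the axis-aligned box for t >= 0?

    inv_dir holds the per-axis reciprocals of the ray direction,
    precomputed once per ray so each box test costs only multiplies
    and comparisons. Assumes all direction components are nonzero.
    """
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        # Entry and exit distances for this axis's pair of slab planes.
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        if t1 > t2:
            t1, t2 = t2, t1
        t_near = max(t_near, t1)
        t_far = min(t_far, t2)
        if t_near > t_far:
            return False  # the per-axis intervals do not overlap: miss
    return True
```

During traversal this test is applied at each BVH node, descending only into children whose boxes are hit, which reduces the per-ray cost from linear in the number of objects to roughly logarithmic.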

Kay & Kajiya give a list of desired properties for hierarchical bounding volumes. The first implementation of an interactive ray tracer was the LINKS-1 Computer Graphics System, built in 1982 at Osaka University's School of Engineering by professors Ohmura Kouichi, Shirakawa Isao and Kawata Toru, with 50 students.

The LINKS-1 system was developed to realize an image rendering methodology in which each pixel could be parallel processed independently using ray tracing.

[29] The next interactive ray tracer, and the first known to have been labeled "real-time" was credited at the 2005 SIGGRAPH computer graphics conference as being the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system.

[30] This performance was attained by means of the highly optimized yet platform independent LIBRT ray tracing engine in BRL-CAD and by using solid implicit CSG geometry on several shared memory parallel machines over a commodity network.

[31] Since then, there have been considerable efforts and research towards implementing ray tracing at real-time speeds for a variety of purposes on stand-alone desktop configurations.

These purposes include interactive 3-D graphics applications such as demoscene productions, computer and video games, and image rendering.

[33] The Open RT project included a highly optimized software core for ray tracing along with an OpenGL-like API in order to offer an alternative to the current rasterization based approach for interactive 3-D graphics.

The idea that video games could ray trace their graphics in real time received media attention in the late 2000s.

[34] Intel, a patron of Saarland University, became impressed enough that it hired Pohl and embarked on a research program dedicated to ray-traced graphics, which it saw as justifying an increase in the number of its processors' cores.

The API exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive intersections, shaders, shadowing, etc.

This flexibility enables bidirectional path tracing, Metropolis light transport, and many other rendering algorithms that cannot be implemented with tail recursion.

In 2014, a demo of the PlayStation 4 video game The Tomorrow Children, developed by Q-Games and Japan Studio, demonstrated new lighting techniques developed by Q-Games, notably cascaded voxel cone ray tracing, which simulates lighting in real time and provides more realistic reflections than screen-space reflections.

Apple reports up to a 4x performance increase over previous software-based ray tracing on the phone[63] and up to 2.5x faster comparing M3 to M1 chips.

[64] The hardware implementation includes acceleration structure traversal and dedicated ray-box intersections, and the API supports RayQuery (Inline Ray Tracing) as well as RayPipeline features.

This recursive ray tracing of reflective colored spheres on a white surface demonstrates the effects of shallow depth of field, "area" light sources, and diffuse interreflection. (c. 2008)
"Draughtsman Making a Perspective Drawing of a Reclining Woman" by Albrecht Dürer, possibly from 1532, shows a man using a grid layout to create an image. The German Renaissance artist is credited with first describing the technique.
Dürer woodcut of Jacob de Keyser's invention. With de Keyser's device, the artist's viewpoint was fixed by an eye hook inserted in the wall. This was joined by a silk string to a gun-sight style instrument, with a pointed vertical element at the front and a peephole at the back. The artist aimed at the object and traced its outline on the glass, keeping the eyepiece aligned with the string to maintain the correct angle of vision.
Flip book created in 1976 at Caltech
The ray-tracing algorithm builds an image by extending rays into a scene and bouncing them off surfaces and towards sources of light to approximate the color value of pixels.
Illustration of the ray-tracing algorithm for one pixel (up to the first bounce)
Visualization of SDF ray marching algorithm
Ray tracing can create photorealistic images.
In addition to the high degree of realism, ray tracing can simulate the effects of a camera due to depth of field and aperture shape (in this case a hexagon).
The number of reflections, or bounces, a "ray" can make, and how it is affected each time it encounters a surface, is controlled by settings in the software. In this image, each ray was allowed to reflect up to 16 times. Multiple "reflections of reflections" can thus be seen in these spheres. (Image created with Cobalt.)
The number of refractions a "ray" can make, and how it is affected each time it encounters a surface that permits the transmission of light, is controlled by settings in the software. Here, each ray was set to refract or reflect (the "depth") up to 9 times. Fresnel reflections were used and caustics are visible. (Image created with V-Ray.)
Image showing recursively generated rays from the "eye" (and through an image plane) to a light source after encountering two diffuse surfaces
Quake Wars: Ray Traced