In 3D computer graphics, ray tracing is a technique for modeling light transport, used in a wide variety of rendering algorithms to generate digital images.
Ray tracing is capable of simulating a variety of optical effects,[3] such as reflection, refraction, soft shadows, scattering, depth of field, motion blur, caustics, ambient occlusion and dispersion phenomena (such as chromatic aberration).
Later, in 1971, Goldstein and Nagel of MAGI (Mathematical Applications Group, Inc.)[9] published "3-D Visual Simulation", wherein ray tracing was used to make shaded pictures of solids.
“The helicopter was programmed to undergo a series of maneuvers including turns, take-offs, and landings, etc., until it eventually is shot down and crashed.” A CDC 6600 computer was used.[10]
Another early instance of ray casting came in 1976, when Scott Roth created a flip book animation in Bob Sproull's computer graphics course at Caltech.
Roth's computer program noted an edge point at a pixel location if the ray intersected a bounded plane different from that of its neighbors.
Roth extended the framework, introduced the term ray casting in the context of computer graphics and solid modeling, and in 1982 published his work while at GM Research Labs.
It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it.
If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color).
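A minimal sketch of this loop, in Python (the Sphere, trace, and render names are illustrative, a toy example rather than any particular renderer's code): one eye ray is fired through each pixel, and reflective surfaces recursively spawn a mirrored ray; refraction would follow the same recursive pattern, bending the new ray with Snell's law and attenuating the absorbed part of the spectrum:

import math

MAX_DEPTH = 3  # recursion limit for secondary (reflected) rays

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def normalize(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

class Sphere:
    def __init__(self, center, radius, color, reflectivity=0.0):
        self.center, self.radius = center, radius
        self.color, self.reflectivity = color, reflectivity

    def intersect(self, origin, direction):
        # Solve |origin + t*direction - center|^2 = radius^2 for the
        # nearest positive t; direction is assumed to be unit length.
        oc = sub(origin, self.center)
        b = 2.0 * dot(oc, direction)
        c = dot(oc, oc) - self.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 1e-6 else None  # small epsilon avoids self-hits

def trace(origin, direction, scene, depth=0):
    # Find the nearest object hit by the ray.
    nearest, obj = math.inf, None
    for s in scene:
        t = s.intersect(origin, direction)
        if t is not None and t < nearest:
            nearest, obj = t, s
    if obj is None:
        return (0.0, 0.0, 0.0)  # ray escaped the scene: background color
    point = add(origin, scale(direction, nearest))
    normal = normalize(sub(point, obj.center))
    color = obj.color
    if obj.reflectivity > 0.0 and depth < MAX_DEPTH:
        # Mirror direction d - 2(d.n)n, then recurse for the reflection.
        refl = normalize(sub(direction, scale(normal, 2.0 * dot(direction, normal))))
        bounced = trace(point, refl, scene, depth + 1)
        color = tuple((1.0 - obj.reflectivity) * c + obj.reflectivity * r
                      for c, r in zip(color, bounced))
    return color

def render(scene, width=64, height=48):
    # One eye ray per pixel, through a virtual screen on the z = 1 plane.
    eye = (0.0, 0.0, 0.0)
    return [[trace(eye, normalize(((x + 0.5) / width * 2.0 - 1.0,
                                   1.0 - (y + 0.5) / height * 2.0,
                                   1.0)), scene)
             for x in range(width)] for y in range(height)]

image = render([Sphere((0.0, 0.0, 4.0), 1.0, (1.0, 0.2, 0.2), reflectivity=0.3),
                Sphere((1.5, 0.0, 5.0), 1.0, (0.2, 0.2, 1.0))])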
One important advantage ray casting offered over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and spheres.[21]
A serious disadvantage of ray tracing is performance (though it can in theory be faster than traditional scanline rendering depending on scene complexity vs. the number of pixels on-screen: with an acceleration structure, the cost of each ray grows only roughly logarithmically with the number of primitives, whereas scanline rendering must process every primitive).
Path tracing is an algorithm for evaluating the rendering equation and thus gives a higher-fidelity simulation of real-world lighting.
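For context, the quantity a path tracer estimates is the outgoing radiance defined by the rendering equation (Kajiya, 1986), which in its standard form reads

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i

where L_o is the outgoing radiance at point x in direction \omega_o, L_e is emitted radiance, f_r is the bidirectional reflectance distribution function (BRDF), L_i is incoming radiance, and n is the surface normal; path tracing estimates the integral by Monte Carlo sampling of the incoming directions \omega_i.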
Caustics are bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface.
An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon.[26]
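A minimal self-contained sketch of that idea, in Python (all names are hypothetical, and a production renderer would accumulate these hits into a photon map or similar structure): rays leave a point light, bounce off a mirrored sphere, and are tallied where they land on a floor plane; densely hit cells are where a caustic forms:

import math, random

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def normalize(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

def hit_sphere(origin, direction, center, radius):
    # Nearest positive hit distance, or None (direction is unit length).
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def light_trace(light, center, radius, floor_y=0.0, samples=100_000):
    # Fire rays from the light in random directions; where a mirror bounce
    # off the sphere lands on the floor, accumulate energy per grid cell.
    energy = {}
    for _ in range(samples):
        d = normalize(tuple(random.gauss(0.0, 1.0) for _ in range(3)))
        t = hit_sphere(light, d, center, radius)
        if t is None:
            continue
        p = add(light, scale(d, t))
        n = normalize(sub(p, center))
        r = sub(d, scale(n, 2.0 * dot(d, n)))  # mirror reflection of d about n
        if r[1] >= 0.0:
            continue  # reflected ray never descends to the floor plane
        s = (floor_y - p[1]) / r[1]
        q = add(p, scale(r, s))
        cell = (round(q[0], 1), round(q[2], 1))  # quantized floor position
        energy[cell] = energy.get(cell, 0) + 1
    return energy

hot_spots = light_trace(light=(0.0, 5.0, 0.0), center=(0.0, 2.0, 0.0), radius=1.0)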
The accompanying figure shows a simple example of a path of rays recursively generated from the camera (or eye) to the light source by the recursive algorithm described earlier.[27]
Enclosing groups of objects in bounding volume hierarchies (BVHs) decreases the amount of computation required for ray tracing.
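A minimal sketch of the pruning this buys, in Python (the slab test and Node layout here are illustrative, not from any particular ray tracer): a ray that misses a node's bounding box skips that node's entire subtree with a single cheap test:

import math

def hit_aabb(origin, direction, lo, hi):
    # Slab test: intersect the ray with the three pairs of axis-aligned
    # planes bounding the box [lo, hi]; the entry/exit intervals must overlap.
    tmin, tmax = 0.0, math.inf
    for axis in range(3):
        if direction[axis] == 0.0:
            if not (lo[axis] <= origin[axis] <= hi[axis]):
                return False  # ray parallel to these slabs and outside them
            continue
        t0 = (lo[axis] - origin[axis]) / direction[axis]
        t1 = (hi[axis] - origin[axis]) / direction[axis]
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
        if tmax < tmin:
            return False
    return True

class Node:
    def __init__(self, lo, hi, children=(), objects=()):
        self.lo, self.hi = lo, hi    # box enclosing everything below this node
        self.children = children     # inner nodes hold child Nodes
        self.objects = objects       # leaves hold actual primitives

def candidates(node, origin, direction):
    # Yield only primitives whose enclosing boxes the ray touches;
    # a miss at any node prunes its whole subtree.
    if not hit_aabb(origin, direction, node.lo, node.hi):
        return
    yield from node.objects
    for child in node.children:
        yield from candidates(child, origin, direction)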
Kay & Kajiya give a list of desired properties for hierarchical bounding volumes: The first implementation of an interactive ray tracer was the LINKS-1 Computer Graphics System built in 1982 at Osaka University's School of Engineering, by professors Ohmura Kouichi, Shirakawa Isao and Kawata Toru with 50 students.
The LINKS-1 system was developed to realize an image rendering methodology in which each pixel could be processed independently in parallel using ray tracing.[29]
The next interactive ray tracer, and the first known to have been labeled "real-time", was credited at the 2005 SIGGRAPH computer graphics conference as being the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system.[30]
This performance was attained by means of the highly optimized yet platform-independent LIBRT ray tracing engine in BRL-CAD and by using solid implicit CSG geometry on several shared-memory parallel machines over a commodity network.[31]
Since then, there has been considerable effort and research toward implementing ray tracing at real-time speeds for a variety of purposes on stand-alone desktop configurations.
These purposes include interactive 3-D graphics applications such as demoscene productions, computer and video games, and image rendering.[33]
The OpenRT project included a highly optimized software core for ray tracing along with an OpenGL-like API in order to offer an alternative to the prevailing rasterization-based approach for interactive 3-D graphics.
The idea that video games could ray trace their graphics in real time received media attention in the late 2000s.[34]
Intel, a patron of Saarland University, became impressed enough that it hired Pohl and embarked on a research program dedicated to ray-traced graphics, which it saw as justifying increasing the number of its processors' cores.
The API exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive intersections, shaders, shadowing, etc.
This flexibility enables bidirectional path tracing, Metropolis light transport, and many other rendering algorithms that cannot be implemented with tail recursion.
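As a rough illustration of such a pipeline (a sketch only, in Python; the Pipeline, Hit, ray_gen, closest_hit, and miss names are hypothetical and not the vendor's actual API), each programmable stage can be modeled as a callback invoked by a fixed traversal loop:

from dataclasses import dataclass
from typing import Callable, Optional, Tuple

Vec = Tuple[float, float, float]

@dataclass
class Hit:
    t: float        # distance along the ray to the intersection
    color: Vec      # whatever the intersection stage recorded for shading

@dataclass
class Pipeline:
    ray_gen: Callable[[int, int], Tuple[Vec, Vec]]   # pixel -> (origin, direction)
    intersect: Callable[[Vec, Vec], Optional[Hit]]   # ray -> nearest hit or None
    closest_hit: Callable[[Hit], Vec]                # shade a confirmed hit
    miss: Callable[[Vec, Vec], Vec]                  # shade a ray that escapes

    def launch(self, width: int, height: int):
        # The fixed loop: generate a ray per pixel, traverse the scene,
        # then dispatch to the user-supplied hit or miss program.
        image = []
        for y in range(height):
            row = []
            for x in range(width):
                origin, direction = self.ray_gen(x, y)
                hit = self.intersect(origin, direction)
                row.append(self.closest_hit(hit) if hit
                           else self.miss(origin, direction))
            image.append(row)
        return image

Swapping in a different ray_gen callback yields a custom camera model, and a different closest_hit yields a custom shader, without modifying the traversal loop itself.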
In 2014, a demo of the PlayStation 4 video game The Tomorrow Children, developed by Q-Games and Japan Studio, demonstrated new lighting techniques developed by Q-Games, notably cascaded voxel cone ray tracing, which simulates lighting in real time and produces more realistic reflections than screen-space reflections.
Apple reports up to a 4x performance increase over previous software-based ray tracing on the phone[63] and up to a 2.5x speedup comparing the M3 to the M1 chip.[64]
The hardware implementation includes acceleration structure traversal and dedicated ray-box intersections, and the API supports RayQuery (Inline Ray Tracing) as well as RayPipeline features.