One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.
However, quickly rendering detailed 3D objects is a daunting task for traditional von Neumann architecture-based systems.
Using these techniques and advanced hardware, computers can now render images quickly enough to create the illusion of motion while simultaneously accepting user input.
This means that the user can respond to rendered images in real time, producing an interactive experience.
In offline (non-real-time) rendering, by contrast, millions or billions of rays are traced from the camera into the scene for detailed rendering; this expensive operation can take hours or days to produce a single frame.
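As a rough illustration of that approach, the sketch below casts one primary ray per pixel from a pinhole camera and tests it against a single hard-coded sphere; the camera model, the scene, and the ASCII output are simplifying assumptions for this example, not any particular renderer's implementation.

```cpp
// Minimal sketch of camera-to-scene ray casting: one primary ray per pixel,
// intersected against one sphere. Production ray tracers trace millions or
// billions of rays per frame (shadows, reflections, global illumination).
#include <cmath>
#include <cstdio>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Returns true if a ray (origin o, direction d) hits a sphere (center c, radius r).
bool hitSphere(const Vec3& o, const Vec3& d, const Vec3& c, double r) {
    Vec3 oc = o - c;
    double a = d.dot(d);
    double b = 2.0 * oc.dot(d);
    double k = oc.dot(oc) - r * r;
    return b * b - 4.0 * a * k >= 0.0;   // discriminant test: does the ray intersect?
}

int main() {
    const int width = 32, height = 16;   // tiny image, printed as ASCII
    Vec3 eye{0, 0, 0};                   // pinhole camera at the origin
    Vec3 sphereCenter{0, 0, -3};
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Map the pixel onto an image plane at z = -1 and form a ray direction.
            double u = (x + 0.5) / width * 2.0 - 1.0;
            double v = 1.0 - (y + 0.5) / height * 2.0;
            Vec3 dir{u, v, -1.0};
            std::putchar(hitSphere(eye, dir, sphereCenter, 1.0) ? '#' : '.');
        }
        std::putchar('\n');
    }
}
```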
Real-time graphics optimizes image quality subject to time and hardware constraints.
When computer graphics are used in films, the director has complete control over what has to be drawn on each frame, which can sometimes involve lengthy decision-making.
In real-time computer graphics, the user typically operates an input device to influence what is about to be drawn on the display.
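A minimal sketch of that idea is a loop that polls input, advances the simulation, and draws a frame; the input, update, and render functions below are placeholders standing in for calls into a real windowing and graphics API (such as SDL, OpenGL, or Vulkan), not an actual engine interface.

```cpp
// Sketch of a real-time loop: poll input, advance the simulation, draw a frame.
#include <chrono>
#include <cstdio>

struct InputState { bool quitRequested = false; };

// Stand-in: a real application would read keyboard/mouse/controller state here.
InputState pollInput(int frame) { return InputState{frame >= 5}; }

// Simulation step: user input influences what is about to be drawn.
void update(double dtSeconds, const InputState&) {
    std::printf("update, dt = %.4f s\n", dtSeconds);
}

// Stand-in for submitting draw calls; a 60 Hz target leaves about 16.7 ms per frame.
void render() { std::printf("render frame\n"); }

int main() {
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();
    for (int frame = 0; ; ++frame) {
        InputState input = pollInput(frame);
        if (input.quitRequested) break;
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - previous).count();
        previous = now;
        update(dt, input);
        render();
    }
}
```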
Another important factor in real-time computer graphics is the combination of physics simulation and animation.
Real-time previewing with graphics software, especially when adjusting lighting effects, can increase work speed.[3] Some parameter adjustments in fractal-generating software may be made while viewing changes to the image in real time.
The architecture of the real-time rendering pipeline can be divided into conceptual stages: application, geometry and rasterization.
The application stage may perform processing such as collision detection, speed-up techniques, animation, and force feedback, in addition to handling user input.
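The sketch below shows one way those conceptual stages could be organized per frame; the types and function names are illustrative placeholders for this example, not a standard pipeline API.

```cpp
// Conceptual per-frame flow of the real-time rendering pipeline:
// application stage (CPU) -> geometry stage -> rasterization stage.
#include <vector>

struct Triangle    { /* three vertices in object space */ };
struct Scene       { std::vector<Triangle> triangles; };
struct Framebuffer { /* pixel storage */ };

// Application stage: input handling, collision detection, animation,
// culling and other speed-up techniques, typically on the CPU.
Scene applicationStage() { return Scene{}; }

// Geometry stage: per-vertex work such as transforms, projection, and clipping.
std::vector<Triangle> geometryStage(const Scene& scene) { return scene.triangles; }

// Rasterization stage: convert the surviving primitives into pixels.
Framebuffer rasterizationStage(const std::vector<Triangle>&) { return Framebuffer{}; }

Framebuffer renderFrame() {
    Scene scene = applicationStage();
    std::vector<Triangle> primitives = geometryStage(scene);
    return rasterizationStage(primitives);
}

int main() {
    Framebuffer fb = renderFrame();
    (void)fb;
}
```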
Clipping is the process of removing primitives that are outside of the view box in order to facilitate the rasterizer stage.
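A minimal sketch of the idea follows, assuming OpenGL-style clip-space coordinates in which a vertex lies inside the view volume when -w ≤ x, y, z ≤ w; a full clipper also splits partially visible primitives (for example with the Sutherland-Hodgman algorithm) rather than only accepting or rejecting whole triangles.

```cpp
// Trivial accept/reject test against the view volume in clip space.
#include <array>
#include <cstdio>

struct Vec4 { float x, y, z, w; };

// A vertex is inside the canonical view volume when -w <= x, y, z <= w.
bool insideViewVolume(const Vec4& v) {
    return -v.w <= v.x && v.x <= v.w &&
           -v.w <= v.y && v.y <= v.w &&
           -v.w <= v.z && v.z <= v.w;
}

// Returns true when the whole triangle lies outside the view volume on one
// side and can be discarded before rasterization.
bool triviallyRejected(const std::array<Vec4, 3>& tri) {
    auto allOutside = [&](auto pred) {
        return pred(tri[0]) && pred(tri[1]) && pred(tri[2]);
    };
    return allOutside([](const Vec4& v) { return v.x < -v.w; }) ||
           allOutside([](const Vec4& v) { return v.x >  v.w; }) ||
           allOutside([](const Vec4& v) { return v.y < -v.w; }) ||
           allOutside([](const Vec4& v) { return v.y >  v.w; }) ||
           allOutside([](const Vec4& v) { return v.z < -v.w; }) ||
           allOutside([](const Vec4& v) { return v.z >  v.w; });
}

int main() {
    // A triangle entirely to the right of the view volume (x > w for all vertices).
    std::array<Vec4, 3> offscreenRight = {{{2, 0, 0, 1}, {3, 0, 0, 1}, {2, 1, 0, 1}}};
    std::printf("rejected: %s\n", triviallyRejected(offscreenRight) ? "yes" : "no");
}
```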