Computer-generated imagery

Other early films that incorporated CGI include Star Wars: Episode IV (1977),[2] Tron (1982), Star Trek II: The Wrath of Khan (1982),[2] Golgo 13: The Professional (1983),[3] The Last Starfighter (1984),[4] Young Sherlock Holmes (1985), The Abyss (1989), Terminator 2: Judgment Day (1991), Jurassic Park (1993) and Toy Story (1995).

Link's Digital Image Generator had an architecture designed to provide a visual system that corresponded realistically with the pilot's view.

Combined with the need to pair virtual synthesis with military-level training requirements, CGI technologies applied in flight simulation were often years ahead of what was available in commercial computing or even in high-budget film.

The evolution of CGI led to the emergence of virtual cinematography in the 1990s, in which the view of the simulated camera is not constrained by the laws of physics.

A simple way to generate fractal surfaces is to use an extension of the triangular mesh method, relying on the construction of some special case of a de Rham curve, e.g., midpoint displacement.
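
As a rough illustration, the minimal sketch below applies one-dimensional midpoint displacement: it repeatedly subdivides a segment and offsets each new midpoint by a random amount whose magnitude halves at every level, the same scheme that extends to triangular or square meshes for fractal terrain. The function name and parameters are illustrative, not taken from any particular library.

```python
import random

def midpoint_displacement(left, right, roughness=1.0, depth=8):
    """Build a fractal height profile between two endpoint heights.

    Each pass inserts a midpoint between every pair of neighbouring
    samples, displaced by a random offset whose magnitude halves at
    each level of recursion (smaller bumps at finer scales).
    """
    heights = [left, right]
    scale = roughness
    for _ in range(depth):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2.0 + random.uniform(-scale, scale)
            refined.extend([a, mid])
        refined.append(heights[-1])
        heights = refined
        scale /= 2.0  # halve the perturbation at each finer subdivision
    return heights

profile = midpoint_displacement(0.0, 0.0)
print(len(profile))  # 2**8 + 1 = 257 height samples
```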

In addition to their use in film, advertising, and other modes of public display, computer-generated images of clothing are now routinely used by top fashion design firms.

As the user interacts with the system (e.g., by using joystick controls to change their position within the virtual world), the raw data is fed through the pipeline to create a new rendered image, often making real-time computational efficiency a key consideration in such applications.
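
As a rough sketch of such an interactive pipeline, the loop below polls the input device, updates the viewpoint, and renders a fresh image while trying to stay within a 60-frames-per-second budget; poll_input and render_frame are hypothetical placeholders standing in for a real input API and rendering back end.

```python
import time

def poll_input():
    # Hypothetical placeholder: read the joystick and return movement deltas.
    return {"dx": 0.0, "dy": 0.0}

def render_frame(camera):
    # Hypothetical placeholder: feed the scene and camera through the
    # rendering pipeline to produce a new image.
    pass

camera = [0.0, 0.0]
FRAME_BUDGET = 1.0 / 60.0  # target one new image roughly every 16.7 ms

for _ in range(600):  # about ten seconds of interaction at 60 Hz
    start = time.perf_counter()
    controls = poll_input()        # raw user data enters the pipeline
    camera[0] += controls["dx"]    # move the viewpoint within the virtual world
    camera[1] += controls["dy"]
    render_frame(camera)           # produce the rendered image for this frame
    elapsed = time.perf_counter() - start
    time.sleep(max(0.0, FRAME_BUDGET - elapsed))  # keep the loop real-time
```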

Computer-generated animations are more controllable than other, more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, and they allow the creation of images that would not be feasible using any other technology.

CGI can also allow a single graphic artist to produce such content without the use of actors, expensive set pieces, or props.

Text-to-image models began to be developed in the mid-2010s, during the early stages of the AI boom, as a result of advances in deep neural networks.

In 2022, the output of state-of-the-art text-to-image models—such as OpenAI's DALL-E 2, Google Brain's Imagen, Stability AI's Stable Diffusion, and Midjourney—began to be considered to approach the quality of real photographs and human-drawn art.
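
As a hedged illustration of how such a model can be invoked in practice, the sketch below assumes the Hugging Face diffusers library and a downloadable Stable Diffusion checkpoint; the model identifier is an assumption and may differ, and a CUDA-capable GPU is assumed for the half-precision weights.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion checkpoint (identifier is an assumption;
# substitute whichever checkpoint is available to you).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # float16 weights assume a CUDA GPU is available

# Generate an image from a text prompt and save it to disk.
image = pipe("an astronaut riding a horse, by Hiroshige").images[0]
image.save("astronaut_horse.png")
```

The pipeline object bundles the text encoder, the diffusion model, and the image decoder, so a single call maps the prompt to a finished image.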

These avatars are usually depicted as textual, two-dimensional, or three-dimensional graphical representations, although other forms are possible[31] (auditory[32] and touch sensations, for example).

However, a 1997 study showed that people are poor intuitive physicists and are easily influenced by computer-generated images.

Coupled with 3D graphics symbols and mapped to a common virtual geospatial model, these animated weather visualizations constitute the first true application of CGI to television.

Sports and entertainment venues are provided with see-through and overlay content through tracked camera feeds for enhanced viewing by the audience.

CGI is also used in association with football and other sporting events to show commercial advertisements overlaid onto the view of the playing area.

Other examples include hockey puck tracking, annotations of racing car performance,[35] and snooker ball trajectories.

Replicating the human body convincingly is a weakness of normal computer-generated imagery, which, due to the body's complex anatomy, can often fail to reproduce it perfectly.

Artists can use motion capture to record footage of a human performing an action and then replicate it with computer-generated imagery so that it looks natural.

Because computer-generated imagery reflects only the outside, or skin, of the object being rendered, it fails to capture the minute interactions between interlocking muscle groups used in fine motor skills such as speaking.

The constant motion of the face as it forms sounds with shaped lips and tongue movements, along with the facial expressions that accompany speech, is difficult to replicate by hand.

Morphogenetic Creations, a computer-generated digital art exhibition by Andy Lomas at Watermans Arts Centre, west London, in 2016
A computer-generated image of a house at sunset, made in Blender
A CT pulmonary angiogram image generated by a computer from a collection of X-rays
Computer-generated wet fur created in Autodesk Maya
Machinima films are, by nature, CGI films.
An image conditioned on the prompt "an astronaut riding a horse, by Hiroshige", generated by Stable Diffusion 3.5, part of a family of large-scale text-to-image models first released in 2022
Metallic balls created in Blender