History of computer animation

The consulting firm Nordisk ADB, a software provider for the Royal Swedish Road and Water Construction Agency, realized that it had all the coordinates needed to draw a perspective view, as seen from the driver's seat, of a motorway running from Stockholm towards Nacka.

Edward Zajac produced one of the first computer-generated films at Bell Labs in 1963, titled A Two Gyro Gravity Gradient Attitude Control System, which demonstrated that a satellite could be stabilized to always have a side facing the Earth as it orbited.

[14][15] Fetter's work included the 1964 development of ergonomic descriptions of the human body that are both accurate and adaptable to different environments, and this resulted in the first 3-D animated wire-frame figures.

Ivan Sutherland worked at the Lincoln Laboratory at MIT (Massachusetts Institute of Technology) in 1962, where he developed a program called Sketchpad I, which allowed the user to interact directly with the image on the screen.

[19] In the words of Robert Rivlin in his 1986 book The Algorithmic Image: Graphic Visions of the Computer Age, "almost every influential person in the modern computer-graphics community either passed through the University of Utah or came into contact with it in some way".

[21] One of the first successful approaches to this was published at the 1967 Fall Joint Computer Conference by Chris Wylie, David Evans, and Gordon Romney, and demonstrated shaded 3D objects such as cubes and tetrahedra.

[33] The capabilities of the "KEYFRAME" program were demonstrated in a short film, Not Just Reality, which featured walk cycles, lip syncing, facial expressions, and further movement of a shaded humanoid 3D character.

[35] Most of the employees were active or former students, and included Jim Clark, who started Silicon Graphics in 1981; Ed Catmull, who later co-founded Pixar; and John Warnock, who founded Adobe Systems in 1982.

The work at OSU revolved around animation languages, complex modeling environments, user-centric interfaces, human and creature motion descriptions, and other areas of interest to the discipline.

This film comprised drawings animated by gradually changing from one image to the next, a technique known as "interpolating" (also known as "inbetweening" or "morphing"), which also featured in a number of earlier art examples during the 1960s.
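
The idea can be illustrated with a minimal sketch (hypothetical code, not the software used for the film): each inbetween frame is produced by linearly blending the corresponding points of two key drawings, with the blend fraction t running from 0 to 1.

    # Hypothetical illustration of "inbetweening" by linear interpolation;
    # the key poses are given as matching lists of (x, y) points.
    def inbetween(pose_a, pose_b, t):
        """Return the intermediate pose at fraction t (0.0 = pose_a, 1.0 = pose_b)."""
        return [(ax + t * (bx - ax), ay + t * (by - ay))
                for (ax, ay), (bx, by) in zip(pose_a, pose_b)]

    key_start = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0)]   # first key drawing
    key_end   = [(0.0, 2.0), (2.0, 3.0), (4.0, 1.0)]   # second key drawing
    # The two keys plus three evenly spaced inbetween frames.
    frames = [inbetween(key_start, key_end, i / 4) for i in range(5)]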

[53][54] The package was broadly based on conventional "cel" (celluloid) techniques, but with a wide range of tools including camera and graphics effects, interpolation ("inbetweening"/"morphing"), use of skeleton figures and grid overlays.

[57] John Whitney, Jr., and Gary Demos at Information International, Inc. digitally processed motion-picture photography to appear pixelized, in order to portray the Gunslinger android's point of view.

The Academy Award-winning 1975 short animated film Great, about the life of the Victorian engineer Isambard Kingdom Brunel, contains a brief sequence of a rotating wireframe model of Brunel's final project, the iron steam ship SS Great Eastern.

The third film to use this technology was Star Wars (1977), written and directed by George Lucas, with wireframe imagery in the scenes with the Death Star plans, the targeting computers in the X-wing fighters, and the Millennium Falcon spacecraft.

In 1979, the science-fiction horror film Alien, directed by Ridley Scott, also used wire-frame model graphics, in this case to render the navigation monitors in the spaceship.

[63] Although Lawrence Livermore Labs in California is mainly known as a centre for high-level research in science, it continued producing significant advances in computer animation throughout this period.

In 1979, George Lucas recruited the top talent from NYIT, including Catmull, Smith, and Guggenheim, to start his division, which later spun off as Pixar, founded in 1986 with funding from Apple Inc. co-founder Steve Jobs.

[81] Carpenter was subsequently hired by Lucasfilm's computer graphics group (later Pixar) to create the fractal planet in the Genesis Effect sequence of Star Trek II: The Wrath of Khan, released in June 1982.

Later in the 1980s, Blinn developed CGI animations for an Annenberg/CPB TV series, The Mechanical Universe, which consisted of over 500 scenes for 52 half-hour programs describing physics and mathematics concepts for college students.

Early forms of motion control go back to John Whitney's 1968 work on 2001: A Space Odyssey, and to the effects for the 1977 film Star Wars Episode IV: A New Hope, produced by George Lucas's newly created California company Industrial Light & Magic (ILM).

[101] Later developments at Sun Microsystems included computer servers and workstations built on its own RISC-based processor architecture, and a suite of software products such as the Solaris operating system and the Java platform.

Tron (1982) is celebrated as a milestone in the industry, though less than twenty minutes of this animation were actually used, mainly the scenes that show digital "terrain" or include vehicles such as Light Cycles, tanks, and ships.

To create the CGI scenes, Disney turned to the four leading computer graphics firms of the day: Information International, Inc., Robert Abel and Associates (both in California), MAGI, and Digital Effects (both in New York).

[111] In 1984, Tron was followed by The Last Starfighter, a Universal Pictures / Lorimar production directed by Nick Castle, which was one of cinema's earliest films to use extensive CGI to depict its many starships, environments, and battle scenes.

Inbetweening with solid-filled colors appeared in the early '70s (e.g., Alan Kitching's Antics at Atlas Lab, 1973,[55] and Peter Foldes' La Faim at the NFBC, 1974[50]), but these were still entirely vector-based.

[65] The first cinema feature film to use this technique was Star Trek IV: The Voyage Home (1986), directed by Leonard Nimoy, with visual effects by George Lucas's company Industrial Light & Magic (ILM).

The system also allowed easier combination of hand-drawn art with 3-D CGI material, notably in the "waltz sequence", where Belle and Beast dance through a computer-generated ballroom as the camera "dollies" around them in simulated 3-D space.

In 1993, J. Michael Straczynski's Babylon 5 became the first major television series to use CGI as the primary method for its visual effects (rather than using hand-built models), followed later the same year by Rockne S. O'Bannon's SeaQuest DSV.

[151] Motion-capture, or "Mo-cap", records the movement of external objects or people, and has applications for medicine, sports, robotics, and the military, as well as for animation in film, TV and games.

[184] In 2002, Peter Jackson's The Lord of the Rings: The Two Towers was the first feature film to use a real-time motion-capture system, which allowed the actions of actor Andy Serkis to be fed directly into the 3-D CGI model of Gollum as they were performed.

Also, a human actor could not have been used for the final showdown in The Matrix Revolutions: Agent Smith's cheekbone is punched in by Neo, leaving the digital look-alike naturally unhurt.

An image of a cube generated at the University of Utah in 1967.
A color image of a church generated by the Watkins algorithm at the University of Utah in 1970.