In video games, camera systems are used to show the action at the best possible angle; more generally, they are used in 3D virtual worlds whenever a third-person view is required.
To implement camera systems, video game developers use techniques such as constraint solvers, artificial intelligence scripts, or autonomous agents.
In video games, "third-person" refers to a graphical perspective rendered from a fixed distance behind and slightly above the player character.
[2] One advantage of this camera system is that it allows the game designers to use the language of film, creating mood through camerawork and selection of shots.
This type of camera system was common in early 3D games such as Crash Bandicoot and Tomb Raider, since it is simple to implement.
[9] The Legend of Zelda: The Wind Waker was more successful in this respect; IGN called its camera system "so smart that it rarely needs manual correction".
In this approach, the constraint solver is given a requested shot composition such as "show this character and ensure that he covers at least 30 percent of the screen space".
Once a suitable shot is found, the solver outputs the coordinates and rotation of the camera, which the graphics engine's renderer can then use to display the view.
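A highly simplified sketch of that pipeline is given below, assuming a brute-force sampler: candidate camera poses are scored with an approximate screen-coverage measure, and the best pose meeting the 30 percent constraint is returned. All function names, parameters, and numbers here are illustrative assumptions, not a real solver:

    import math
    import random

    def screen_coverage(char_radius, distance, fov_deg=60.0):
        """Approximate the fraction of the screen a sphere of radius
        char_radius covers when viewed from a given distance."""
        half_view = distance * math.tan(math.radians(fov_deg) / 2)
        if half_view <= 0:
            return 0.0
        # Ratio of the projected sphere's extent to the visible view extent, squared.
        return min(1.0, (char_radius / half_view) ** 2)

    def solve_shot(char_pos, char_radius=1.0, min_coverage=0.30,
                   samples=200, seed=None):
        """Randomly sample camera poses around the character and keep
        the best one satisfying the coverage constraint."""
        rng = random.Random(seed)
        best = None
        for _ in range(samples):
            dist = rng.uniform(1.0, 10.0)
            yaw = rng.uniform(0.0, 2 * math.pi)
            pos = (char_pos[0] + dist * math.cos(yaw),
                   char_pos[1] + 1.5,  # arbitrary eye height
                   char_pos[2] + dist * math.sin(yaw))
            cov = screen_coverage(char_radius, dist)
            if cov >= min_coverage and (best is None or cov > best[1]):
                best = ((pos, yaw), cov)
        return best  # ((position, yaw), coverage) for the renderer, or None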
[16] Subsequent research demonstrated how a script-based system could automatically switch cameras to view conversations between avatars in a real-time chat application.
Thus a "happy" camera will "cut more frequently, spend more time in close-up shots, move with a bouncy, swooping motion, and brightly illuminate the scene".
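As a toy illustration only, such a mapping from emotional state to camera style might be represented as follows; every name and value here is invented for the example:

    # Hypothetical mapping from camera "mood" to stylistic parameters.
    CAMERA_MOODS = {
        "happy": {
            "cut_interval_s": 2.0,    # cut more frequently
            "shot_distance_m": 1.5,   # favour close-up shots
            "bounce_amplitude": 0.3,  # bouncy, swooping motion
            "scene_brightness": 1.2,  # brightly illuminated scene
        },
        "somber": {
            "cut_interval_s": 8.0,    # long, lingering takes
            "shot_distance_m": 6.0,   # wider framing
            "bounce_amplitude": 0.0,  # steady movement
            "scene_brightness": 0.7,  # dimmer lighting
        },
    }

    def style_for(mood):
        """Look up the camera style for an emotional state."""
        return CAMERA_MOODS.get(mood, CAMERA_MOODS["somber"])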
[19] In 2010, Microsoft released the Kinect, a 3D scanner/webcam hybrid peripheral device that provides full-body detection of Xbox 360 players and hands-free control of the user interfaces of video games and other software on the console.
This was later modified by Oliver Kreylos[20] of the University of California, Davis, who showed in a series of YouTube videos how he combined the Kinect with a PC-based virtual camera.
[21] Because the Kinect is capable of detecting a full range of depth within a captured scene (through computer stereo vision and structured light), Kreylos demonstrated that the Kinect and the virtual camera could allow free-viewpoint navigation of that depth range. However, the Kinect could only capture video of the scene as seen from its front, so the result contained fields of black, empty space wherever the camera had been unable to capture video within the field of depth.
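The underlying idea can be sketched as follows: each depth pixel is unprojected into a 3D point, producing a point cloud that a virtual camera can view from any angle, with unmeasured pixels left empty (these appear as the black holes seen from novel viewpoints). The pinhole intrinsics below are illustrative defaults, not Kreylos's actual code:

    import numpy as np

    def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
        """Unproject a depth image (in metres) into a 3D point cloud
        using a pinhole camera model; pixels with no depth become NaN
        and stay empty when rendered from a new viewpoint."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.astype(np.float64)
        z[z <= 0] = np.nan              # unmeasured pixels stay empty
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)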
[28] In 1992, Michael McKenna of MIT's Media Lab demonstrated the earliest documented virtual camera rig when he fixed a Polhemus magnetic motion sensor and a 3.2-inch portable LCD TV to a wooden ruler.
[29] The Walkthrough Project at the University of North Carolina at Chapel Hill produced a number of physical input devices for virtual camera view control, including dual three-axis joysticks and a billiard-ball-shaped prop known as the UNC Eyeball, which featured an embedded six-degree-of-freedom motion tracker and a digital button.