Whereas early techniques used images from multiple cameras to calculate 3D positions,[9] the purpose of motion capture is often to record only the movements of the actor, not their visual appearance.
A computer processes the data and displays the actor's movements, providing the desired camera positions relative to objects in the set.
The most common applications are video games, movies, and movement analysis; the technology is also used in research, such as robotics development at Purdue University.
In outdoor spaces, centimeter-level accuracy can be achieved by combining a Global Navigation Satellite System (GNSS) with real-time kinematic (RTK) positioning.
Regulations on airspace usage limit the feasibility of outdoor experiments with Unmanned Aerial Systems (UAS).
PURT is dedicated to UAS research and provides a tracking volume of 600,000 cubic feet, using 60 motion capture cameras.
Warner Bros. had acquired motion capture technology from arcade video game company Acclaim Entertainment for use in the film's production.
The Polar Express used motion capture to allow Tom Hanks to perform as several distinct digital characters (for which he also provided the voices).
The 2007 adaptation of the saga Beowulf animated digital characters whose appearances were based in part on the actors who provided their motions and voices.
In 2007, Disney acquired Zemeckis' ImageMovers Digital (which produced motion capture films), but closed it in 2011 after the box-office failure of Mars Needs Moms.
Television series produced entirely with motion capture animation include Laflaque in Canada, Sprookjesboom and Café de Wereld in the Netherlands, and Headcases in the UK.
Motion capture techniques allow clinicians to evaluate human motion across several biomechanical factors, often streaming this information live into analytical software.
Cameron was so proud of his results that he invited Steven Spielberg and George Lucas on set to view the system in action.
Acoustic, inertial, LED, magnetic or reflective markers, or combinations of any of these, are tracked, optimally at a sampling rate at least twice the frequency of the desired motion (for example, a movement with components up to 10 Hz calls for sampling at 20 Hz or more).[34]
Optical systems utilize data captured from image sensors to triangulate the 3D position of a subject between two or more cameras calibrated to provide overlapping projections.
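As a rough illustration of the triangulation step, the following sketch recovers a 3D marker position from matched 2D detections in two calibrated cameras using OpenCV's cv2.triangulatePoints; the projection matrices and pixel coordinates are placeholder values rather than output from any real capture system.

```python
import numpy as np
import cv2

# Placeholder 3x4 projection matrices (intrinsics times extrinsics) for two
# calibrated cameras; a real system obtains these from camera calibration.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera 2 offset on X

# Matched 2D detections of one marker in each image, shape (2, N).
pts1 = np.array([[320.0], [240.0]])
pts2 = np.array([[310.0], [240.0]])

# Triangulate; OpenCV returns homogeneous coordinates of shape (4, N).
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()  # convert to Euclidean 3D
print("Estimated marker position:", X)
```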
The markers are usually attached directly to the skin (as in biomechanics), or fastened with Velcro to a performer wearing a full-body spandex or lycra suit designed specifically for motion capture.
Emerging techniques and research in computer vision are leading to the rapid development of the markerless approach to motion capture.
Special computer algorithms are designed to allow the system to analyze multiple streams of optical input and identify human forms, breaking them down into constituent parts for tracking.
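To give a concrete feel for the markerless approach, the sketch below uses Google's MediaPipe Pose, one freely available pose estimator among several that would serve equally well, to extract body landmarks from ordinary video without any markers; the input filename is hypothetical.

```python
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("performance.mp4")  # hypothetical input clip

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # 33 normalized body landmarks (nose, shoulders, elbows, ...) per frame.
        nose = results.pose_landmarks.landmark[0]
        print(f"nose: x={nose.x:.3f} y={nose.y:.3f}")

cap.release()
pose.close()
```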
ESC Entertainment, a subsidiary of Warner Bros. Pictures created especially to enable virtual cinematography, including photorealistic digital look-alikes, for filming The Matrix Reloaded and The Matrix Revolutions, used a technique called Universal Capture that utilized a seven-camera setup and tracked the optical flow of all pixels over all the 2D planes of the cameras for motion, gesture, and facial expression capture, leading to photorealistic results.
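For a sense of the per-pixel flow tracking described above, here is a minimal sketch using OpenCV's Farnebäck dense optical flow between two consecutive frames; this is a generic, publicly documented algorithm rather than the proprietary Universal Capture pipeline, and the frame filenames are placeholders.

```python
import cv2

# Two consecutive frames from a single camera (placeholder filenames).
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
nxt = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense Farneback flow: one (dx, dy) displacement vector per pixel.
# Positional arguments after the images: output array, pyramid scale, levels,
# window size, iterations, polynomial neighborhood, polynomial sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)

dx, dy = flow[..., 0], flow[..., 1]
print("mean horizontal motion (pixels):", dx.mean())
```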
The image obtained from NASA's long-range tracking system on the space shuttle Challenger's fatal launch provided crucial evidence about the cause of the accident.
Optical tracking systems are also used to identify known spacecraft and space debris, despite the disadvantage, compared to radar, that the objects must reflect or emit sufficient light.
One example of such software is OpticTracker, which controls computerized telescopes to track moving objects at great distances, such as planes and satellites.
No external cameras, emitters or markers are needed for relative motions, although they are required to give the absolute position of the user if desired.
The popularity of inertial systems is rising among game developers,[10] mainly because of the quick and easy setup resulting in a fast pipeline.[43]
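To make concrete how an inertial system recovers relative motion without external hardware, the following sketch integrates gyroscope angular rates into an orientation quaternion using plain NumPy; the sample rate and rotation rate are illustrative values, and the slow accumulation of integration error is precisely why absolute position still requires an outside reference.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """One Euler step of q_dot = 0.5 * q * (0, omega); omega in rad/s."""
    q = q + 0.5 * quat_mul(q, np.array([0.0, *omega])) * dt
    return q / np.linalg.norm(q)  # renormalize to stay a unit quaternion

q = np.array([1.0, 0.0, 0.0, 0.0])   # identity orientation
for _ in range(100):                  # one second of samples at 100 Hz
    q = integrate_gyro(q, (0.0, 0.0, np.pi / 2), 0.01)
print(q)  # approx. 90-degree rotation about Z after one second
```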
The relative intensity of the voltage or current in the three coils allows magnetic systems to calculate both range and orientation by meticulously mapping the tracking volume.
The two main techniques are stationary systems, in which an array of cameras captures facial expressions from multiple angles and software such as the stereo mesh solver from OpenCV creates a 3D surface mesh, and systems that use light arrays to calculate surface normals from the variance in brightness as the light source, the camera position, or both are changed.
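The second technique, recovering surface normals from brightness variation, is classical photometric stereo; the sketch below is an illustrative NumPy version that assumes a Lambertian surface, known light directions, and randomly generated stand-in intensities.

```python
import numpy as np

# Unit light directions for three images of the same static face (assumed known).
L = np.array([
    [0.0, 0.0, 1.0],     # frontal light
    [0.5, 0.0, 0.866],   # light from the right
    [0.0, 0.5, 0.866],   # light from above
])

# Observed intensity per pixel in each image, shape (3, H*W); stand-in data.
H, W = 4, 4
I = np.random.rand(3, H * W)

# Lambertian model: I = L @ (albedo * n). Solve per pixel by least squares.
G, *_ = np.linalg.lstsq(L, I, rcond=None)   # shape (3, H*W)
albedo = np.linalg.norm(G, axis=0)
normals = (G / albedo).T.reshape(H, W, 3)   # unit surface normal at each pixel
print(normals[0, 0])
```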
Recent work focuses on increasing frame rates and performing optical flow so that the motions can be retargeted to other computer-generated faces, rather than just making a 3D mesh of the actor and their expressions.
Multipath and reradiation of the signal are likely to cause additional problems, but these technologies will be ideal for tracking larger volumes with reasonable accuracy, since the required resolution at 100-meter distances is not likely to be as high.[44]
An alternative approach was developed in which the actor is given an unlimited walking area through the use of a rotating sphere, similar to a hamster ball, which contains internal sensors that record the angular movements, removing the need for external cameras and other equipment.
Even though this technology could lead to much lower costs for motion capture, the basic sphere is only capable of recording a single continuous direction.