Consumer-facing formats are numerous and the required motion capture techniques lean on computer graphics, photogrammetry, and other computation-based methods.
Through growing advances in the fields of computer graphics, optics, and data processing, this fiction has slowly evolved into reality.
The ultimate goal is to imitate reality in minute detail while giving creatives the power to build worlds atop this foundation to match their vision.
Visual effects in movies and video games paved the way for advances in photogrammetry, scanning devices, and the computational backend to handle the data received from these new intensive methods.
Point clouds are discrete samples of three-dimensional space, each pairing a position with a color; together they form a high-fidelity representation of the real world at the cost of an enormous amount of data.
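The data volume involved can be made concrete with a minimal sketch of the per-point layout described above (the names and frame sizes here are illustrative assumptions, not a specific capture format):

```python
# A minimal sketch of a point-cloud sample: a 3D position plus an RGB
# color. All names and sizes here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Point:
    x: float  # position in meters
    y: float
    z: float
    r: int    # color, 0-255 per channel
    g: int
    b: int

# One point needs at least 3 * 4 bytes of float position plus 3 bytes
# of color when packed tightly.
BYTES_PER_POINT = 3 * 4 + 3  # 15 bytes

# A modest 100,000-point frame streamed at 30 frames per second:
points_per_frame = 100_000
frames_per_second = 30
bytes_per_second = BYTES_PER_POINT * points_per_frame * frames_per_second
print(bytes_per_second)  # 45,000,000 bytes (~45 MB) of raw data per second
```

Even this small hypothetical scene produces tens of megabytes of raw samples every second, which is why the computational backend mentioned above matters as much as the capture hardware.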
In 2010 Microsoft brought the Kinect to market, a consumer product that used structured light in the infrared spectrum to generate depth data from its camera.
Artists and hobbyists started to make tools and projects around the affordable device, sparking a growing interest in volumetric capture as a creative medium.
While this remains a very interesting setup for the high-end market, the affordable price of a single Kinect device led more experimental artists and independent directors to become active in the volumetric capture field.
EF EVE™ supports an unlimited number of Azure Kinect sensors on a single PC, enabling full volumetric capture with an easy setup.
Depthkit is a software suite that allows the capture of geometry data with a single depth sensor such as the Azure Kinect,[3] as well as high-quality color detail from an attached witness camera.
These synchronized camera streams are then processed frame by frame to generate a set of points or geometry that can be played back in real time, resulting in a full volumetric performance capture that can be composited into any environment.
As volumetric video developed into a commercially applicable approach to environment and performance capture, the ability to move about the results with six degrees of freedom and true stereoscopy necessitated a new type of display device.
The photographic nature of the captures, combined with this immersion and the resulting interactivity, brings the medium one giant step closer to the holy grail of true virtual reality.
Volumetric video is currently being used to deliver virtual concerts via the Scenez application on Meta Quest and Apple Vision Pro devices.
Fields can be captured inside-out, in camera, or outside-in from renderings of 3D geometry, representing a huge amount of information ready to be manipulated.
One common approach generates a more traditional 3D triangle mesh, similar to the geometry used for computer games and visual effects.
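The mesh representation described here can be sketched in its simplest indexed form, the layout real-time engines typically consume (the quad and all names below are illustrative assumptions):

```python
# A minimal sketch of an indexed triangle mesh: a shared vertex list
# plus triples of vertex indices forming faces. The unit quad split
# into two triangles is an illustrative example.

vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]

triangles = [
    (0, 1, 2),  # lower-right triangle
    (0, 2, 3),  # upper-left triangle
]

# Indexing lets the two faces share vertices instead of duplicating
# them, which keeps per-frame meshes compact for playback.
assert all(i < len(vertices) for tri in triangles for i in tri)
print(len(vertices), len(triangles))  # 4 vertices, 2 triangles
```

Unlike the raw point samples discussed earlier, this connected surface can be lit, textured, and animated with the same tooling used in game pipelines.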
To extend beyond the physical world, CG techniques can be deployed to further enhance the captured data, employing artists to build onto and into the static mesh as necessary.
The playback is usually handled by a real-time engine and resembles a traditional game pipeline in implementation, allowing interactive lighting changes and creative and archivable ways of compositing static and animated meshes together.
After capturing and generating the data, editing and compositing is done within a realtime engine, connecting recorded actions to tell the intended story.
This breakthrough in the world of sensory trickery will spark an evolution in the way we consume media, and while technologies for other senses, such as smell and proprioception, are still in the research and development stage, one day in the not-so-distant future we will travel convincingly to new locales, both real and imagined.
Once a capture is created and saved, it can be re-used, and even re-purposed, ad nauseam for circumstances beyond its initially envisioned scope.
One area of concern with the growing field of volumetric capture is shrinking demand for traditional skill sets such as modeling, lighting, and animation.
The onus will be on the artisan to keep up with the tools and workflows that best suit their skill set, but the prudent will find that the production pipeline of the future offers many opportunities to streamline labor-intensive work, freeing up investment for bigger creative challenges.
Being able to visit a museum without physically being there reaches a broader audience and also enables institutions to show their entire inventory rather than the subset currently on display.