Spatial computing

Spatial computing devices further use computer vision to attempt to understand real-world scenes such as rooms, streets, or stores: to read labels, recognize objects, create 3D maps, and more.

In the early 1990s, as the field of virtual reality was beginning to be commercialized beyond academic and military labs, a startup called Worldesign in Seattle used the term spatial computing[5] to describe the interaction between individual people and 3D spaces, operating more at the human end of the scale than earlier GIS examples may have contemplated.

Robert Jacobson, CEO of Worldesign, attributes the origins of the term to experiments at the Human Interface Technology Lab at the University of Washington, under the direction of Thomas A. Furness III.

In 2010, MIT Media Lab alumnus John Underkoffler gave a TED talk[8] that included a live demo of the multi-screen, multi-user spatial computing systems being developed by Oblong Industries, which sought to bring to life the futuristic interfaces Underkoffler had conceptualized for the films Minority Report and Iron Man.

In computing, the word "spatial" has also been used to refer to the unrelated concept of moving data between processing elements that are arranged in a physical space.

The Apple Vision Pro includes several features such as Spatial Audio, two 4K micro-OLED displays, the Apple R1 chip, and eye tracking, and was released in the United States on February 2, 2024.

Apple Vision Pro is a spatial computing product developed by Apple