Skinput

The technology was developed by Chris Harrison, Desney Tan, and Dan Morris at Microsoft Research's Computational User Experiences Group.[2]

Skinput's bio-acoustic sensing allows the body to be annexed as an input surface without the need for the skin to be invasively instrumented with sensors, tracking markers, or other items.

The prototype contains ten small cantilevered piezo elements configured to be highly resonant, sensitive to frequencies between 25 and 78 Hz.[4]

This configuration acts like a mechanical fast Fourier transform and provides extreme out-of-band noise suppression, allowing the system to function even while the user is in motion.
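A rough software analogue of this arrangement is a bank of narrow band-pass filters, one per sensing element, each tuned to a different center frequency in the 25 to 78 Hz band; energy outside a filter's band is strongly attenuated, which is the out-of-band noise suppression described above. The sketch below is illustrative only: the element center frequencies, sample rate, and filter Q are assumptions, not the published hardware values.

    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 1000                      # sample rate in Hz (assumed)
    N_ELEMENTS = 10                # ten cantilevered piezo elements

    # Spread ten hypothetical resonant frequencies across the 25-78 Hz band.
    center_freqs = np.linspace(25, 78, N_ELEMENTS)

    def resonator_bank(signal, fs=FS, q=8.0):
        """Approximate the mechanically resonant elements as narrow
        band-pass filters: each output channel responds mainly to
        energy near its own center frequency (a 'mechanical FFT')."""
        channels = []
        for fc in center_freqs:
            bw = fc / q                          # narrow bandwidth ~ high Q
            low, high = fc - bw / 2, fc + bw / 2
            sos = butter(2, [low, high], btype="bandpass", fs=fs, output="sos")
            channels.append(sosfilt(sos, signal))
        return np.stack(channels)                # shape: (10, n_samples)

    # Example: a decaying 40 Hz "tap" mostly excites the elements tuned
    # near 40 Hz, while the other channels stay comparatively quiet.
    t = np.arange(0, 1.0, 1 / FS)
    tap_like = np.sin(2 * np.pi * 40 * t) * np.exp(-5 * t)
    outputs = resonator_bank(tap_like)
    print(np.round(outputs.std(axis=1), 4))      # per-channel response strength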

Skinput made its first public appearance at Microsoft's TechFest 2010, where the recognition model was trained live on stage during the presentation and then used in an interactive walkthrough of a simple mobile application with four modes: a music player, an email inbox, Tetris, and voice mail.

Image caption: The Skinput system rendering a series of buttons on the arm. Users can press the buttons directly, with their fingers, much like a touch screen.
Image caption: Ten channels of acoustic data generated by three finger taps on the forearm, followed by three taps on the wrist. The exponential average of the channels is shown in red; segmented input windows are highlighted in green. Note how different sensing elements are actuated by the two locations.
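The segmentation described in the caption above can be mimicked in software: compute a smoothed activity envelope across all channels (the role of the red exponential average) and open an input window wherever it crosses a threshold (the green segments). A minimal sketch follows, assuming made-up values for the smoothing factor, threshold, and minimum event length; none of these come from the published system.

    import numpy as np

    def segment_taps(channels, alpha=0.05, threshold=0.2, min_len=50):
        """Segment tap events from multi-channel acoustic data.

        channels : (n_channels, n_samples) array of sensor signals.
        Returns a list of (start, end) sample indices, one per detected tap.
        The exponential average plays the role of the red trace in the
        figure; thresholded regions correspond to the green windows.
        """
        energy = np.abs(channels).mean(axis=0)   # combined channel activity
        ema = np.zeros_like(energy)              # exponential moving average
        for i in range(1, len(energy)):
            ema[i] = alpha * energy[i] + (1 - alpha) * ema[i - 1]

        active = ema > threshold
        segments, start = [], None
        for i, on in enumerate(active):
            if on and start is None:
                start = i                        # window opens
            elif not on and start is not None:
                if i - start >= min_len:         # ignore very short blips
                    segments.append((start, i))
                start = None
        if start is not None:
            segments.append((start, len(active)))
        return segments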