Digital Image Processing with Sound

DIPS can be used to detect a performer's motions and to track the positions of fine details such as the face, mouth, and eyes.
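Within DIPS, this kind of tracking is done by the library's objects inside a Max patch; purely as a DIPS-independent illustration of the task, the sketch below uses OpenCV in Python to detect and mark faces in a live camera stream. The camera index, cascade file, and window handling are assumptions made for the example, not part of DIPS.

    # Illustrative face tracking with OpenCV (not DIPS): detect faces in a
    # live camera stream and draw a box around each one.
    import cv2

    # One of OpenCV's bundled Haar cascades for frontal faces (assumed available).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # assumed default camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Each detection is an (x, y, w, h) bounding box in pixel coordinates.
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()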

It can also be used to measure the distance between objects and a Kinect sensor, and it offers tools for real-time image processing of incoming video streams and stored movie files.
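In DIPS the Kinect depth data is handled by the library's objects inside Max; as a sketch of only the arithmetic involved, the example below assumes a depth frame already delivered in millimeters (as OpenNI-style Kinect drivers report it) and converts a single pixel reading to meters. The frame here is synthetic test data, not sensor output, and the function name is hypothetical.

    # Sketch (not DIPS): distance of an object from the sensor, given a
    # depth frame whose values are assumed to be in millimeters.
    import numpy as np

    def distance_at(depth_mm, x, y):
        """Distance in meters at pixel (x, y); NaN where the sensor has no reading."""
        d = depth_mm[y, x]
        return float("nan") if d == 0 else d / 1000.0

    # Synthetic 480x640 depth frame: everything 1.5 m away, one missing reading.
    frame = np.full((480, 640), 1500, dtype=np.uint16)
    frame[0, 0] = 0  # 0 commonly marks "no depth reading" in Kinect drivers

    print(distance_at(frame, 320, 240))  # 1.5
    print(distance_at(frame, 0, 0))      # nan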

DIPS consists of a library of more than 300 Max external objects and abstractions, a comprehensive set of sample patches, and a detailed tutorial.

Initially developed by Shu Matsuda in 1997, DIPS began as plug-in software for Max/FTS running on SGI Octane and O2 computers.

Current active group members are Shu Matsuda, Yota Morimoto, Takuto Fukuda, and Keitaro Takahashi.[1]