Those involved in MIR may have a background in academic musicology, psychoacoustics, psychology, signal processing, informatics, machine learning, optical music recognition, computational intelligence, or some combination of these.
Several recommender systems for music already exist, but surprisingly few are based on MIR techniques; most instead rely on similarity between users or on laborious manual data compilation.
Automatic music transcription is the process of converting an audio recording into symbolic notation, such as a score or a MIDI file.
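One small building block of transcription is the mapping from a detected fundamental frequency to a symbolic pitch. A minimal sketch, assuming standard equal temperament with A4 (MIDI note 69) tuned to 440 Hz; the function name is illustrative, not from any particular library:

```python
import math

def freq_to_midi(freq_hz: float) -> int:
    """Map a fundamental frequency in Hz to the nearest MIDI note number,
    assuming equal temperament with A4 (MIDI 69) tuned to 440 Hz."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

# A4 at 440 Hz maps to MIDI note 69; middle C (~261.63 Hz) maps to 60.
print(freq_to_midi(440.0))   # → 69
print(freq_to_midi(261.63))  # → 60
```

A full transcription system must also detect note onsets, durations, and (in polyphonic audio) multiple simultaneous pitches, which is considerably harder than this single-pitch mapping.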
Analysis often requires some summarising,[2] and for music (as with many other forms of data) this is achieved by feature extraction, especially when the audio content itself is analysed and machine learning is to be applied.
The purpose is to reduce the sheer quantity of data down to a manageable set of values so that learning can be performed within a reasonable time-frame.
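To illustrate the scale of this reduction, the sketch below collapses a one-second waveform (tens of thousands of samples) into four summary statistics over two common frame-wise features, RMS energy and spectral centroid. This is a minimal NumPy-only example; the function name and frame parameters are illustrative choices, and real MIR systems typically use richer feature sets such as MFCCs:

```python
import numpy as np

def extract_features(signal, sr, frame_len=2048, hop=512):
    """Reduce a raw waveform to a small summary: the mean and standard
    deviation of frame-wise RMS energy and spectral centroid."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    window = np.hanning(frame_len)
    rms, centroid = [], []
    for frame in frames:
        rms.append(np.sqrt(np.mean(frame ** 2)))
        mag = np.abs(np.fft.rfft(frame * window))
        centroid.append(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
    return {"rms_mean": float(np.mean(rms)),
            "rms_std": float(np.std(rms)),
            "centroid_mean": float(np.mean(centroid)),
            "centroid_std": float(np.std(centroid))}

# One second of a 1 kHz sine: 22050 samples reduced to four numbers.
sr = 22050
t = np.arange(sr) / sr
features = extract_features(np.sin(2 * np.pi * 1000 * t), sr)
print(features)  # centroid_mean near 1000 Hz for a 1 kHz pure tone
```

A classifier can then be trained on such compact feature vectors rather than on raw audio, which is what makes learning tractable in practice.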
Other features may be employed to represent the key, chords, harmonies, melody, main pitch, tempo in beats per minute, or rhythm of the piece.
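Tempo in beats per minute is one such feature that can be estimated directly from audio. A crude sketch, assuming a simple autocorrelation approach on a coarse energy envelope; the function name, frame size, and tempo range are illustrative, and production systems (e.g. beat trackers) are far more robust:

```python
import numpy as np

def estimate_bpm(signal, sr, frame=512, bpm_lo=80, bpm_hi=200):
    """Crude tempo estimate: autocorrelate a coarse energy envelope
    and pick the strongest lag within the given tempo range."""
    n = len(signal) // frame
    # Coarse envelope: RMS energy per non-overlapping frame.
    env = np.sqrt(np.mean(signal[:n * frame].reshape(n, frame) ** 2, axis=1))
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[n - 1:]   # lags 0 .. n-1
    env_sr = sr / frame                  # envelope frames per second
    lo = int(env_sr * 60 / bpm_hi)       # shortest lag considered
    hi = int(env_sr * 60 / bpm_lo)       # longest lag considered
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * env_sr / lag

# Synthetic click track: one short burst every 0.5 s, i.e. 120 BPM.
sr = 22050
signal = np.zeros(4 * sr)
for k in range(8):
    onset = int(k * 0.5 * sr)
    signal[onset:onset + 200] = 1.0
print(estimate_bpm(signal, sr))  # roughly 120; coarse framing limits precision
```

The restricted tempo range sidesteps the classic "octave error", where an estimator locks onto half or double the true tempo.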