Music technology (electronic and digital)

These tools vary widely and include computers, electronic effects units, software, and digital audio equipment.

In the late 19th century, Thaddeus Cahill introduced the Telharmonium, which is commonly considered the first electromechanical musical instrument.

In the mid-20th century, sampling emerged, with artists like Pierre Schaeffer and Karlheinz Stockhausen manipulating recorded sounds on tape to create entirely new compositions.

[6] Due to the increasing role of interdisciplinary work in music technology, individuals developing new music technologies may also have backgrounds or training in electrical engineering, computer programming, computer hardware design, acoustics, record production, or other fields.

Electronic keyboard labs are used for cost-effective beginner group piano instruction in high schools, colleges, and universities.

Some digital pianos provide interactive lessons and games using the built-in features of the instrument to teach music fundamentals.

Classic digital synthesizers include the Fairlight CMI, PPG Wave, Nord Modular and Korg M1.

[10] At Bell Laboratories, Max Mathews conducted research to improve the telecommunications quality of long-distance phone calls.

Owing to long distances and low bandwidth, the audio quality of phone calls across the United States was poor.

[11] The first generation of professional commercially available computer music instruments, or workstations as some companies later called them, were sophisticated, elaborate systems that were very expensive when they first appeared.

A demonstration at the convention showed two previously incompatible analog synthesizers, the Sequential Circuits Prophet-600 and the Roland Jupiter-6, communicating with each other, enabling a player to play one keyboard while getting the output from both of them.

The advent of MIDI spurred a rapid expansion of the sales and production of electronic instruments and music software.

This has created a large consumer market for software such as MIDI-equipped electronic keyboards, MIDI sequencers and digital audio workstations.
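The interoperability MIDI provides rests on a simple byte-level protocol: each channel-voice message is a status byte (message type plus channel) followed by one or two data bytes. The sketch below builds a standard three-byte Note On message; the function name and validation are illustrative, not part of any particular MIDI library.

```python
def note_on(channel, note, velocity):
    """Build a MIDI Note On message as three bytes.

    channel: 0-15, note: 0-127, velocity: 0-127.
    """
    if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("out-of-range MIDI value")
    status = 0x90 | channel  # 0x9n = Note On, where n is the channel number
    return bytes([status, note, velocity])

# Middle C (note 60) on channel 0 at velocity 100:
msg = note_on(0, 60, 100)  # bytes 0x90, 0x3C, 0x64
```

Because every conforming instrument parses these same bytes, a single controller can drive hardware from different manufacturers, which is exactly what the 1983 Prophet-600/Jupiter-6 demonstration showed.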

In the 1930s, an engineer named Homer Dudley invented the VODER (Voice Operating Demonstrator), an electro-mechanical device which generated a sawtooth wave and white noise.
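The VODER's two sound sources correspond to the voiced "buzz" and unvoiced "hiss" of speech. A minimal digital sketch of those two excitation signals, with an assumed sample rate chosen for illustration:

```python
import math
import random

SAMPLE_RATE = 8000  # Hz; a modest, speech-band rate chosen for this sketch

def sawtooth(freq, n_samples, sample_rate=SAMPLE_RATE):
    """Generate a sawtooth wave in [-1, 1] (the 'buzz' for voiced sounds)."""
    return [2.0 * ((i * freq / sample_rate) % 1.0) - 1.0
            for i in range(n_samples)]

def white_noise(n_samples, seed=0):
    """Generate uniform white noise in [-1, 1] (the 'hiss' for unvoiced sounds)."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]

buzz = sawtooth(110.0, 100)  # voiced excitation at 110 Hz
hiss = white_noise(100)      # unvoiced excitation
```

In the original machine an operator shaped these sources through banks of filters to form speech; here they are left unfiltered, as the filtering stage is beyond this sketch.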

In the late 1960s and early 1970s, bands and solo artists began using the vocoder to blend speech with notes played on a synthesizer.

[15] Meanwhile, at Bell Laboratories, Max Mathews worked with researchers Kelly and Lochbaum to develop a model of the vocal tract to study how its properties contributed to speech generation.

[16] At IRCAM in France, researchers developed software called CHANT (French for "sing"), the first version of which ran between 1979 and 1983.

[19] In the 2010s, singing synthesis technology took advantage of recent advances in artificial intelligence, such as deep learning and machine learning, to better reproduce the nuances of the human voice.

[25] In the late 1970s and 1980s, Japanese manufacturers, including Roland and Korg, assumed pivotal roles in the transformation of the musical landscape.

Over time, Japanese companies continued to innovate, producing increasingly sophisticated and user-friendly drum machines, such as the Roland TR-8 and Korg Volca Beats.

These instruments continue to influence contemporary music production and remain integral to the electronic music landscape worldwide.

Sly and the Family Stone's 1971 album There's a Riot Goin' On helped to popularize the sound of early drum machines, along with Timmy Thomas' 1972 R&B hit "Why Can't We Live Together" and George McCrae's 1974 disco hit "Rock Your Baby", which used early Roland rhythm machines.

In the 1980s, when the technology was still in its infancy, digital samplers cost tens of thousands of dollars and they were only used by the top recording studios and musicians.

Before affordable sampling technology was readily available, DJs would use a technique pioneered by Grandmaster Flash to manually repeat certain parts in a song by juggling between two separate turntables.

In turn, this turntablism technique originates from Jamaican dub music in the 1960s and was introduced to American hip hop in the 1970s.

Current developments in computer hardware and specialized software continue to expand MIDI applications.

Software developers write new, more powerful programs for sequencing, recording, notating, and mastering music.

Digital audio workstation software such as Pro Tools and Logic has gained popularity in recent years amid the vast array of contemporary music technology.

Such programs allow the user to record acoustic sounds with a microphone or capture performances from a software instrument, which may then be layered, organized along a timeline, and edited on a computer's flat-panel display.
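Internally, layering tracks along a timeline amounts to summing their samples at the right offsets. The sketch below shows that idea under stated assumptions: tracks are plain lists of samples in [-1, 1], offsets are in samples, and the hard-clipping choice is illustrative, not how any particular DAW sums its mix bus.

```python
def mix(tracks, length):
    """Sum tracks placed on a timeline into one mixdown.

    tracks: list of (start_offset, samples) pairs; length: output length
    in samples. Overlapping samples are summed, then hard-clipped to
    [-1, 1] so the result stays in range.
    """
    out = [0.0] * length
    for start, samples in tracks:
        for i, s in enumerate(samples):
            if 0 <= start + i < length:
                out[start + i] += s
    return [max(-1.0, min(1.0, s)) for s in out]

# Two short "tracks", the second starting one sample later:
mixdown = mix([(0, [0.5, 0.5, 0.5]), (1, [0.6, 0.6])], length=4)
# → [0.5, 1.0, 1.0, 0.0]  (0.5 + 0.6 exceeds 1, so it clips to 1.0)
```

Real DAWs mix in floating point with headroom and apply limiting rather than hard clipping, but the offset-and-sum structure is the same.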

Other examples of generative music technology include the use of sensors connected to a computer, together with artificial intelligence, to generate music based on captured data such as environmental factors, the movements of dancers, or physical input from a digital device such as a mouse or game controller.

Software applications offering capabilities for generative and interactive music include SuperCollider, MaxMSP/Jitter, and Processing.

Music production using a digital audio workstation (DAW) with multi-monitor set-up
Early Minimoog synthesizer by R. A. Moog Inc. from 1970
Yamaha RY30 drum machine
Several rack-mounted synthesizers that share a single controller
MIDI allows multiple instruments to be played from a single controller (often a keyboard, as pictured here), which makes stage setups much more portable. This system fits into a single rack case, but prior to the advent of MIDI, it would have required four separate, heavy full-size keyboard instruments, plus outboard mixing and effects units.