Other animals, such as birds and reptiles, also use these cues, though they may use them differently, and some have localization cues that are absent from the human auditory system, such as the effects of ear movements.
In mammals, sound waves vibrate the tympanic membrane (eardrum), causing the three bones of the middle ear to vibrate, which in turn transmits the energy through the oval window into the cochlea. There it is converted into a neural signal by the hair cells of the organ of Corti, which synapse onto spiral ganglion fibers that travel through the cochlear nerve into the brain.
According to Jeffress,[1] this computation relies on delay lines: neurons in the superior olive that receive input from each ear through axons of differing lengths.
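Functionally, the Jeffress arrangement behaves like a bank of coincidence detectors, one per internal delay, which amounts to cross-correlating the two ear signals. A minimal sketch of this reading, assuming NumPy (the function name and toy stimulus are illustrative, not from the source):

```python
import numpy as np

def jeffress_itd_estimate(left, right, fs, max_itd_s=7e-4):
    """Estimate the ITD as the internal delay at which the two ear
    signals coincide best (i.e., their correlation is maximal)."""
    max_lag = int(max_itd_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    # Each lag plays the role of one delay-line neuron in the model.
    coincidence = [np.dot(np.roll(left, lag), right) for lag in lags]
    return lags[int(np.argmax(coincidence))] / fs

# Toy check: a 500 Hz tone arriving 300 microseconds earlier at the left ear.
fs = 48_000
t = np.arange(0, 0.05, 1 / fs)
left = np.sin(2 * np.pi * 500 * t)
right = np.sin(2 * np.pi * 500 * (t - 3e-4))
print(jeffress_itd_estimate(left, right, fs))  # ~2.9e-4 s (about 300 microseconds)
```

The internal delay that wins the coincidence race is the model's estimate of the source's interaural time difference.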
However, a number of recent physiological observations made in the midbrain and brainstem of small mammals have cast considerable doubt on the validity of Jeffress's original ideas.
[5] Lower frequencies, with longer wavelengths, diffract around the head, forcing the auditory system to rely mainly on phase cues from the source.
The nervous system combines all early reflections into a single perceptual whole, allowing the brain to process multiple different sounds at once.
[7] To determine the lateral input direction (left, front, right), the auditory system analyzes the following information in the ear signals:
In 1907, Lord Rayleigh used tuning forks to generate single-frequency excitation and studied lateral sound localization on a model of the human head without auricles.
For frequencies below 800 Hz, the dimensions of the head (ear distance 21.5 cm, corresponding to an interaural time delay of 626 μs) are smaller than half the wavelength of the sound waves.
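The quoted 626 μs follows directly from the ear distance and the speed of sound. A small numerical sketch, assuming c ≈ 343 m/s and the common far-field approximation ITD ≈ d · sin θ / c (both assumptions, not given in the text):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed value for air at about 20 degrees Celsius
EAR_DISTANCE = 0.215    # m, the ear distance quoted above

def itd_seconds(azimuth_deg):
    """Far-field approximation: ITD = d * sin(theta) / c."""
    return EAR_DISTANCE * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

print(itd_seconds(90) * 1e6)   # ~626.8 us: maximum ITD, source directly to one side
print(itd_seconds(30) * 1e6)   # ~313.4 us
print(itd_seconds(150) * 1e6)  # identical to 30 degrees: a front/back pair
```

The identical values at 30° and 150° preview the front–back ambiguity described next.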
For example, if two acoustic sources are placed symmetrically toward the front and back of the right side of a human head, they will generate equal ITDs and ILDs, an ambiguity known as the cone of confusion.
These resonances imprint direction-specific patterns onto the frequency responses of the ears, which the auditory system can evaluate for sound localization.
Identical ITDs and ILDs can be produced by sounds at eye level or at any elevation, as long as the lateral direction is constant.
The auditory system can increase the signal-to-noise ratio by up to 15 dB, which means that interfering sound is perceived to be attenuated to half (or less) of its actual loudness.
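That equivalence rests on the common psychoacoustic rule of thumb that a change of about 10 dB halves or doubles perceived loudness; under that assumption (an interpretation, not stated explicitly here), a 15 dB improvement leaves the interfering sound at roughly a third of its loudness:

```python
def loudness_ratio(delta_db):
    """Rule-of-thumb mapping: every 10 dB of attenuation
    roughly halves perceived loudness."""
    return 2 ** (-delta_db / 10)

print(loudness_ratio(10))  # 0.5    -> half as loud
print(loudness_ratio(15))  # ~0.354 -> roughly a third as loud
```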
[26] It uses dummy-head manikins, such as KEMAR, to record ear signals, or uses DSP methods to simulate the transmission process from sources to ears.
[26] These methods use HRTFs to simulate the acoustic signals received at the ears from different directions over common two-channel stereo reproduction.
This is because, when the listening zone is relatively large, reproduction through HRTFs may produce inverted acoustic images at symmetric positions.
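Concretely, the synthesis step described above amounts to convolving a mono source with the head-related impulse responses (HRIRs, the time-domain counterparts of HRTFs) measured for the desired direction. A minimal sketch assuming NumPy and SciPy; the HRIR arrays are hypothetical placeholders for measured data:

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with one direction's left/right HRIRs,
    producing a two-channel binaural signal for headphone playback."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # shape: (n_samples, 2)

# Usage (hypothetical data): binaural = render_binaural(source, hrir_l, hrir_r)
# Played over headphones, `source` should be perceived at the HRIRs' direction.
```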
If the ears are located on the sides of the head, interaural level differences appear at higher frequencies and can be evaluated for localization tasks.
In animals, however, rough elevation information is gained simply by tilting the head, provided that the sound lasts long enough to complete the movement.
The tiny parasitic fly Ormia ochracea has become a model organism in sound localization experiments because of its unique ear.
The tympanic membranes of opposite ears are directly connected mechanically, allowing resolution of sub-microsecond time differences[28][29] and requiring a new neural coding strategy.
[30] Ho[31] showed that the coupled-eardrum system in frogs can produce increased interaural vibration disparities even when only small differences in arrival time and sound level were available at the animal's head.
Localization adaptations include pronounced asymmetry of the skull, nasal sacs, and specialized lipid structures in the forehead and jaws, as well as acoustically isolated middle and inner ears.
Discovered in 2000, the gene Prestin encodes a protein located in the hair cells of the inner ear that enables their rapid contraction and expansion.
This mechanism operates like the horn of an antique phonograph, amplifying sound waves within the cochlea and raising the overall sensitivity of hearing.
In 2014, Liu and colleagues examined the evolutionary adaptations of Prestin, revealing its critical role in the ultrasonic hearing range essential for animal sonar, specifically echolocation.
This adaptation is instrumental for dolphins navigating turbid waters and for bats hunting in nocturnal darkness.
Comparing Prestin function in sonar-guided bats and bottlenose dolphins with that in nonsonar mammals sheds light on how this process works.
This research underscores the adaptability and evolutionary significance of Prestin, offering insight into the genetic foundations of sound localization in bats and dolphins, particularly in echolocation.
[34][35][page needed] Scientific consideration of binaural hearing began before the phenomenon was so named, with speculations published in 1792 by William Charles Wells (1757–1817) based on his research into binocular vision.
[36] Ernst Heinrich Weber (1795–1878), August Seebeck (1805–1849), and William Charles Wells also attempted to compare and contrast what would become known as binaural hearing with the principles of binocular integration generally.