Selective auditory attention

As a result, the information given by the teacher is encoded and stored in the student's long-term memory, while the stimuli from the rowdy classroom are completely ignored, as if they were not present in the first place.

Early research on selective auditory attention can be traced back to 1953, when Colin Cherry introduced the "cocktail party problem".[4]

In Cherry's experiment, which mimicked the problem faced by air traffic controllers, participants had to listen to two messages played simultaneously from a single loudspeaker and repeat what they heard.[6]

Though the technique was introduced by Colin Cherry, Donald Broadbent is often regarded as the first to apply dichotic listening tests systematically in his research.[7]

Words with a low threshold and a high level of meaning or importance, such as one's own name or "watch out", redirect one's attention to where it is urgently required.
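This threshold mechanism is often pictured as an attenuation filter: the unattended channel is weakened rather than blocked, and a word still reaches awareness when its attenuated strength exceeds its recognition threshold. The toy sketch below illustrates that idea; the word list and all numeric values are invented for illustration and carry no empirical weight.

```python
# Toy sketch of a threshold-based attenuation filter.
# The unattended channel is attenuated, not blocked; a word still
# "breaks through" when its attenuated strength exceeds its threshold.
# All numbers and the word list are illustrative, not empirical.

ATTENUATION = 0.3          # unattended channel is weakened, not silenced

# Lower threshold = easier to notice (e.g., one's own name, "watch out").
THRESHOLDS = {
    "alice": 0.1,          # listener's own name: very low threshold
    "watch out": 0.15,     # danger signal: very low threshold
    "fire": 0.2,
}
DEFAULT_THRESHOLD = 0.6    # ordinary words need near-full signal strength


def breaks_through(word: str, signal_strength: float, attended: bool) -> bool:
    """Return True if the word reaches awareness."""
    strength = signal_strength if attended else signal_strength * ATTENUATION
    return strength >= THRESHOLDS.get(word, DEFAULT_THRESHOLD)


if __name__ == "__main__":
    # Same signal strength on the unattended channel:
    print(breaks_through("alice", 0.8, attended=False))   # True: low threshold
    print(breaks_through("pencil", 0.8, attended=False))  # False: ordinary word
```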

Examining selective auditory attention is known to be easier in children and adults than in infants, owing to infants' limited ability to use and understand verbal commands.

As they age, children show an increased ability to detect and select auditory stimuli, so older children outperform their younger counterparts.[17]

This suggests that selective auditory attention is an age-dependent ability that improves as the automatic processing of information develops.

In recent years, neuroimaging tools such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) have been very successful at imaging neural operations with high spatial resolution.[22]

In 2010, Ida Zündorf, Hans-Otto Karnath and Jörg Lewald carried out a study investigating the advantage males show in localizing auditory information.

Zündorf et al. suggested that there may be sex differences in the attentional processes that help locate a target sound in a multiple-source auditory field.

While men and women show some differences in selective auditory attention, both struggle when presented with the challenge of multitasking, especially when the tasks to be attempted concurrently are very similar in nature (Dittrich and Stahl, 2012, p. 626).

The core of the technology is a neural network optimized to process and analyze audio signals in real time (within one-hundredth of a second) on resource-limited headsets.[38]
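The cited sources describe the model only at a high level, so the sketch below is just a rough illustration of the real-time constraint: a small stand-in network processes successive audio frames, and each frame must finish within the roughly one-hundredth-of-a-second budget. The sample rate, frame size, and layer widths are assumptions chosen for illustration, not the system's actual parameters.

```python
# Minimal sketch of frame-by-frame ("streaming") audio processing under a
# real-time budget. The two-layer network is a toy stand-in for the actual
# model; sample rate, frame size, and layer widths are assumed values.
import time
import numpy as np

SAMPLE_RATE = 16_000                 # assumed sample rate (Hz)
FRAME = 128                          # samples per hop: 8 ms at 16 kHz
BUDGET_S = 0.01                      # ~one-hundredth of a second per frame

rng = np.random.default_rng(0)
W1 = rng.standard_normal((FRAME, 256)) * 0.01   # toy weights
W2 = rng.standard_normal((256, FRAME)) * 0.01


def process_frame(frame: np.ndarray) -> np.ndarray:
    """Toy enhancement network: two dense layers with a ReLU."""
    hidden = np.maximum(W1.T @ frame, 0.0)
    return W2.T @ hidden


stream = rng.standard_normal(SAMPLE_RATE)        # one second of fake audio
for start in range(0, len(stream) - FRAME + 1, FRAME):
    t0 = time.perf_counter()
    out = process_frame(stream[start:start + FRAME])
    elapsed = time.perf_counter() - t0
    assert elapsed < BUDGET_S, f"missed real-time budget: {elapsed*1e3:.2f} ms"
```

On typical hardware this toy model meets the budget easily; the real engineering challenge is meeting it with a far larger model on an embedded headset processor.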

The neural networks are embedded in noise-canceling headsets equipped with multiple microphones, resulting in a system capable of generating a sound bubble with a programmable radius ranging from 1 to 2 meters.[38][39]
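One way to picture the final "bubble" step, assuming upstream processing has already separated the microphone mixture into per-source signals and estimated each source's distance from the wearer (both of which the neural network handles in the real system), is a simple gate on distance. The distances and radius below are invented for illustration.

```python
# Sketch of the "sound bubble" gating step. Source separation and distance
# estimation are assumed to have been done upstream (by the neural network
# in the real system); here they are simply given as inputs.
import numpy as np


def apply_sound_bubble(sources: list[np.ndarray],
                       distances_m: list[float],
                       radius_m: float) -> np.ndarray:
    """Keep sources inside the programmable radius; suppress the rest."""
    kept = [s for s, d in zip(sources, distances_m) if d <= radius_m]
    if not kept:
        return np.zeros_like(sources[0])
    return np.sum(kept, axis=0)


# Illustrative example: three separated sources at different distances.
rng = np.random.default_rng(1)
sources = [rng.standard_normal(16_000) for _ in range(3)]
distances = [0.8, 1.5, 3.2]            # metres (invented values)

inside_only = apply_sound_bubble(sources, distances, radius_m=2.0)
# Sources at 0.8 m and 1.5 m pass through; the talker at 3.2 m is suppressed.
```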