In the processing stage, connections with basic emotional knowledge are stored separately in a memory network specific to associations.
Emotional meanings of speech are implicitly and automatically registered after the circumstances, importance and other surrounding details of an event have been analyzed.
[4] Vocal expressions of anger and sadness are perceived most easily, fear and happiness are only moderately well-perceived, and disgust has low perceptibility.
This channel of language conveys emotions felt by the speaker and gives listeners a better idea of the speaker's intended meaning.
Syntactic information is processed primarily in the frontal regions and a small part of the temporal lobe of the brain, while semantic information is processed primarily in the temporal regions, with a smaller portion of the frontal lobes also involved.
Neuroimaging studies using functional magnetic resonance imaging (fMRI) provide further support for this hemispheric lateralization and temporo-frontal activation.
In addition, people with right-hemisphere damage have been found to be impaired at identifying the emotion conveyed in intoned sentences.
[10] Emotional states such as happiness, sadness, anger, and disgust can be determined solely based on the acoustic structure of a non-linguistic speech act.
There is some research that supports the notion that these non-linguistic acts are universal, eliciting the same assumptions even from speakers of different languages.
As Laukka et al. state: "Speech requires highly precise and coordinated movement of the articulators (e.g., lips, tongue, and larynx) in order to transmit linguistic information, whereas non-linguistic vocalizations are not constrained by linguistic codes and thus do not require such precise articulations. This entails that non-linguistic vocalizations can exhibit larger ranges for many acoustic features than prosodic expressions."
The study showed that listeners could identify a wide range of positive and negative emotions above chance.
The ability to decipher this information was found to apply across cultures and to be independent of an adult's level of experience with infants.
For example, "In a study of relationship of spectral and prosodic signs, it was established that the dependence of pitch and duration differed in men and women uttering the sentences in affirmative and inquisitive intonation.
In an fMRI study, men showed stronger activation across more cortical areas than women when processing the meaning or manner of an emotional phrase.
This result was interpreted to mean that men need to make conscious inferences about the acts and intentions of the speaker, while women may do this subconsciously.