Phonetics

Language production consists of several interdependent processes which transform a non-linguistic message into a spoken or signed linguistic signal.

To perceive speech, the continuous acoustic signal must be converted into discrete linguistic units such as phonemes, morphemes and words.

To correctly identify and categorize sounds, listeners prioritize certain aspects of the signal that can reliably distinguish between linguistic categories.

[4] His grammar formed the basis of modern linguistics and described several important phonetic principles, including voicing.

[9] As part of their training in practical phonetics, phoneticians were expected to learn to produce these cardinal vowels to anchor their perception and transcription of these phones during fieldwork.

In this stage of language production, the mental representations of the words are assigned their phonological content as a sequence of phonemes to be produced.

Because of the close connection between the position of the tongue and the resulting sound, the place of articulation is an important concept in many subdisciplines of phonetics.

The coronal places of articulation represent the areas of the mouth where the tongue contacts or makes a constriction, and include dental, alveolar, and post-alveolar locations.

In this way, retroflex articulations can occur in several different locations on the roof of the mouth including alveolar, post-alveolar, and palatal regions.

If the underside of the tongue tip makes contact with the roof of the mouth, it is sub-apical, though apical post-alveolar sounds are also described as retroflex.

[28] Because of individual anatomical variation, the precise articulation of palato-alveolar stops (and coronals in general) can vary widely within a speech community.

[53] Models that assume movements are planned in extrinsic space run into an inverse problem of explaining the muscle and joint locations which produce the observed path or acoustic signal.

Concerns about the inverse problem may be exaggerated, however, as speech is a highly learned skill using neurological structures which evolved for the purpose.

[55] The equilibrium-point model proposes a resolution to the inverse problem by arguing that movement targets are represented as the positions of the muscle pairs acting on a joint.

The minimal unit is a gesture that represents a group of "functionally equivalent articulatory movement patterns that are actively controlled with reference to a given speech-relevant goal (e.g., a bilabial closure)".
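In the associated task-dynamic framework, a gesture of this kind is commonly modeled as a critically damped mass-spring system that drives a tract variable (such as lip aperture) toward its target. The following is a minimal Python sketch of that idea, with illustrative rather than published parameter values:

```python
# Toy simulation of one articulatory gesture as a critically damped
# mass-spring system: the tract variable x (here, lip aperture in mm)
# is driven from its rest value toward the gesture's target.
# Stiffness, duration, and units are illustrative assumptions.

def simulate_gesture(x0, target, k=200.0, dt=0.001, steps=300):
    """Integrate x'' = -k (x - target) - b x' with critical damping."""
    b = 2 * k ** 0.5          # critical damping: approach with no overshoot
    x, v = x0, 0.0
    path = [x]
    for _ in range(steps):
        a = -k * (x - target) - b * v
        v += a * dt
        x += v * dt
        path.append(x)
    return path

# Bilabial closure: lip aperture moves from 10 mm toward 0 mm.
trajectory = simulate_gesture(x0=10.0, target=0.0)
```

Because the system is critically damped, the tract variable approaches the closure target smoothly and monotonically, which is why this dynamic is a popular stand-in for "functionally equivalent" movements toward a shared goal.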

[62] The normal phonation pattern used in typical speech is modal voice, where the vocal folds are held close together with moderate tension.

[76] Sibilants are a special type of fricative where the turbulent airstream is directed towards the teeth,[78] creating a high-pitched hissing sound.

[92] Above 50 percent of vital capacity, the respiratory muscles are used to "check" the elastic forces of the thorax to maintain a stable pressure differential.

Because metabolic needs are relatively stable, the total volume of air moved in most cases of speech remains about the same as quiet tidal breathing.

[102] To do this, listeners rapidly accommodate to new speakers and will shift their boundaries between categories to match the acoustic distinctions their conversational partner is making.
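Such a category boundary is often modeled as a logistic function over an acoustic cue such as voice onset time (VOT); adapting to a new talker then amounts to shifting the function's midpoint. A hedged Python sketch, in which the 20 ms and 30 ms boundary values are illustrative assumptions:

```python
import math

def p_voiceless(vot_ms, boundary_ms, slope=0.5):
    """Probability of categorizing a stop as voiceless (e.g., /t/ vs. /d/),
    modeled as a logistic function of voice onset time (VOT) in ms.
    The slope value is an illustrative assumption."""
    return 1 / (1 + math.exp(-slope * (vot_ms - boundary_ms)))

# A 25 ms VOT token is ambiguous; its category depends on where the
# listener currently places the /d/-/t/ boundary for this talker.
before = p_voiceless(25, boundary_ms=20)  # boundary at 20 ms: leans voiceless
after = p_voiceless(25, boundary_ms=30)   # boundary shifted to 30 ms: leans voiced
```

The same acoustic token flips category once the boundary shifts, which is the behavior described above: the stimulus is unchanged, but the listener's mapping from signal to category has moved.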

For example, the auditory impression of volume, measured in decibels (dB), does not linearly match the difference in sound pressure.
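The relation is logarithmic: sound pressure level in decibels is 20·log10(p/p0), with the standard reference pressure p0 = 20 µPa, so multiplying the pressure by ten adds 20 dB rather than multiplying the level. A quick check in Python:

```python
import math

REF_PRESSURE_PA = 20e-6  # standard reference pressure, 20 micropascals

def spl_db(pressure_pa):
    """Sound pressure level in dB relative to 20 µPa."""
    return 20 * math.log10(pressure_pa / REF_PRESSURE_PA)

# A tenfold increase in sound pressure adds 20 dB; the scale is
# logarithmic, so the level does not grow in proportion to pressure.
print(spl_db(0.02))  # 60.0 dB
print(spl_db(0.2))   # 80.0 dB
```

Going from 0.02 Pa to 0.2 Pa multiplies the pressure by ten but raises the level only from 60 dB to 80 dB, which is why equal steps in dB correspond to very unequal steps in physical sound pressure.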

[115] The mismatch between acoustic analyses and what the listener hears is especially noticeable in speech sounds that have a lot of high-frequency energy, such as certain fricatives.

Because the acoustics are a consequence of the articulation, both methods of description are sufficient to distinguish sounds, with the choice of system depending on the phonetic feature being investigated.

The respiratory organs used to create and modify airflow are divided into three regions: the vocal tract (supralaryngeal), the larynx, and the subglottal system.

[120][121] The standardized nature of the IPA enables its users to transcribe accurately and consistently the phones of different languages, dialects, and idiolects.
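Because IPA transcriptions are ordinary Unicode text, they can also be processed programmatically. The sketch below, a minimal illustration rather than a full IPA parser, splits a transcription into phone-sized units by attaching combining diacritics to their base symbols; spacing characters such as the stress mark ˈ remain separate units:

```python
import unicodedata

def segment_ipa(transcription):
    """Split an IPA string into units, keeping combining diacritics
    (e.g., the dental diacritic in [t̪]) attached to their base symbol."""
    units = []
    for ch in unicodedata.normalize("NFD", transcription):
        if unicodedata.combining(ch) and units:
            units[-1] += ch  # attach the diacritic to the preceding symbol
        else:
            units.append(ch)
    return units

# A common broad transcription of "phonetics", used only as an example.
print(segment_ipa("fəˈnɛtɪks"))
```

Treating diacritics this way keeps a base symbol and its modifier together as one phone-sized unit, mirroring how the IPA composes a base letter with diacritics to transcribe a single phone.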

Because peripheral vision is not as focused as the center of the visual field, signs articulated near the face allow for more subtle differences in finger movement and location to be perceived.

Due to universal neurological limitations, two-handed signs generally have the same kind of articulation in both hands; this is referred to as the Symmetry Condition.

[Figure: A top-down view of the larynx]
[Figure: A waveform (top), spectrogram (middle), and transcription (bottom) of a woman saying "Wikipedia", displayed using the Praat software for linguistic analysis]
[Figure: How sounds make their way from the source to the brain]
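A spectrogram like the one in the Praat figure is computed with a short-time Fourier transform: the waveform is cut into short overlapping windows, and each window's magnitude spectrum becomes one column of the image. A minimal NumPy sketch of the underlying idea (Praat's own analysis settings differ; the 25 ms window and 10 ms hop below are common but assumed values):

```python
import numpy as np

def spectrogram(signal, sample_rate, win_ms=25, hop_ms=10):
    """Magnitude spectrogram via a short-time Fourier transform:
    rows are frequency bins, columns are successive windows in time."""
    win = int(sample_rate * win_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    window = np.hanning(win)  # taper each frame to reduce spectral leakage
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

# Sanity check: 0.5 s of a 440 Hz tone should concentrate its energy
# in the frequency bin nearest 440 Hz.
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t), sr)
peak_bin = int(spec.mean(axis=1).argmax())
peak_hz = peak_bin * sr / int(sr * 0.025)  # bin index -> frequency in Hz
```

For speech, the same computation turns the waveform into the familiar time-by-frequency image in which formants and fricative noise become visible.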