Information from the lips and face supports aural comprehension,[3] and most fluent listeners of a language are sensitive to seen speech actions (see McGurk effect).[7] Many factors affect the visibility of a speaking face, including illumination, movement of the head or camera, the frame rate of the moving image, and the distance from the viewer (see e.g.[8]).
Head movement that accompanies normal speech can also improve lip-reading, independently of oral actions.[11] The 'phoneme equivalence class' measure takes into account the statistical structure of the lexicon and can also accommodate individual differences in lip-reading ability.[12][13]
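To make the idea concrete, the sketch below computes lipreading equivalence classes over a toy lexicon: words whose phoneme transcriptions collapse to the same viseme string are visually indistinguishable and fall into one class. The viseme mapping and word list here are hypothetical simplifications, not a validated set; published measures use experimentally derived viseme groupings and full phonemic dictionaries.

```python
from collections import defaultdict

# Hypothetical viseme classes: phonemes that look alike on the lips
# map to the same symbol. Real studies use validated mappings.
VISEME = {
    "p": "B", "b": "B", "m": "B",   # bilabials look identical on the lips
    "f": "F", "v": "F",             # labiodentals form a second class
    "t": "T", "d": "T", "n": "T",   # tongue-tip sounds are hard to separate
    "ae": "A",                      # one illustrative vowel class
}

def viseme_string(phonemes):
    """Collapse a phoneme transcription into its viseme transcription."""
    return "".join(VISEME.get(p, "?") for p in phonemes)

# Toy lexicon: word -> phoneme transcription (illustrative only).
lexicon = {
    "pat":  ["p", "ae", "t"],
    "bat":  ["b", "ae", "t"],
    "mat":  ["m", "ae", "t"],
    "fat":  ["f", "ae", "t"],
    "vat":  ["v", "ae", "t"],
    "gnat": ["n", "ae", "t"],
}

# Words with the same viseme string are visually indistinguishable;
# each group is one 'equivalence class' for the lipreader.
classes = defaultdict(list)
for word, phones in lexicon.items():
    classes[viseme_string(phones)].append(word)

for vis, words in classes.items():
    print(vis, words)
# BAT ['pat', 'bat', 'mat'] -- /p b m/ share a viseme
# FAT ['fat', 'vat']        -- /f v/ share a viseme
# TAT ['gnat']              -- a singleton class
```

A word that falls into a large class (such as pat/bat/mat) is harder to identify by sight alone, and an individual lipreader's finer or coarser visual discrimination can be modelled by changing the mapping.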
In line with this, excellent lipreading is often associated with more broad-based cognitive skills, including general language proficiency, executive function and working memory.[14][15]
Seeing the mouth plays a role in the very young infant's early sensitivity to speech, and prepares them to become speakers at 1–2 years.
In order to imitate, a baby must learn to shape their lips in accordance with the sounds they hear; seeing the speaker may help them to do this.[20][21] These studies, and many more, point to a role for vision in the development of sensitivity to (auditory) speech in the first half-year of life.
Until around six months of age, most hearing infants are sensitive to a wide range of speech gestures, including ones that can be seen on the mouth, which may or may not later be part of the phonology of their native language. In the second six months of life, however, the hearing infant shows perceptual narrowing for the phonetic structure of their own language, and may lose the early sensitivity to mouth patterns that are not useful.
The speech sounds /v/ and /b/, which are visemically distinctive in English but not in Castilian Spanish, are accurately distinguished by both Spanish-exposed and English-exposed babies up to the age of around six months.
However, hearing a non-native language can shift the child's attention towards visual as well as auditory engagement, by way of lipreading and listening, in order to process, understand and produce speech.[32] Seeing the speaker helps at all levels of speech processing, from phonetic feature discrimination to the interpretation of pragmatic utterances.[40] Specific Language Impairment: Children with SLI are also reported to show reduced lipreading sensitivity,[41] as are people with dyslexia.
Researchers now focus on which aspects of language and communication may be best delivered by what means and in which contexts, given the hearing status of the child and their family, and their educational plans.
In deaf people who have a cochlear implant, pre-implant lip-reading skill can predict post-implant (auditory or audiovisual) speech processing.[52] In particular, reliable phoneme-grapheme mapping may be more difficult for deaf children, who need to be skilled speech-readers in order to master this necessary step in literacy acquisition.
Cued speech is said to be easier for hearing parents to learn than a sign language, and studies, primarily from Belgium, show that a deaf child exposed to cued speech in infancy can make more efficient progress in learning a spoken language than through lipreading alone.[57] A similar approach, involving the use of handshapes accompanying seen speech, is Visual Phonics, which is used by some educators to support the learning of written and spoken language.
Lipreading classes have been shown to be of benefit in UK studies commissioned by the charity Action on Hearing Loss (2012).[60] Attendees are taught the lipreaders' alphabet: groups of sounds that look alike on the lips (visemes), such as p, b and m, or f and v. The aim is to get the gist of what is said, so as to have the confidence to join in conversation and avoid the damaging social isolation that often accompanies hearing loss.
Lipreading tests have been used with relatively small groups in experimental settings, or as clinical indicators with individual patients and clients.[66] Demonstration models, using machine-learning algorithms, have had some success in lipreading speech elements, such as specific words, from video,[67] and in identifying hard-to-lipread phonemes from visemically similar seen mouth actions.
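As a rough illustration of how such demonstration models are commonly structured, the sketch below defines a minimal word-level lipreading network in PyTorch: a small 3D-convolutional front-end over a clip of mouth-region frames, followed by a recurrent layer and a word classifier. The layer sizes, clip dimensions and ten-word vocabulary are illustrative assumptions, not details of any particular published system.

```python
import torch
import torch.nn as nn

class LipReader(nn.Module):
    """Toy word-level lipreader: video clip in, word logits out.

    Input shape: (batch, 1, frames, height, width) -- grayscale
    mouth-region crops. All sizes here are illustrative.
    """
    def __init__(self, num_words=10):
        super().__init__()
        # Spatio-temporal front-end: 3D convolutions capture lip
        # motion across neighbouring frames as well as lip shape.
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # pool away space, keep time
        )
        # Recurrent layer integrates per-frame features over time.
        self.gru = nn.GRU(input_size=64, hidden_size=128, batch_first=True)
        self.classifier = nn.Linear(128, num_words)

    def forward(self, clip):
        feats = self.frontend(clip)            # (B, 64, T, 1, 1)
        feats = feats.squeeze(-1).squeeze(-1)  # (B, 64, T)
        feats = feats.transpose(1, 2)          # (B, T, 64)
        _, hidden = self.gru(feats)            # hidden: (1, B, 128)
        return self.classifier(hidden[-1])     # (B, num_words)

# Forward pass on a random 25-frame, 64x64 clip (batch of 2).
model = LipReader(num_words=10)
logits = model(torch.randn(2, 1, 25, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```

The 3D convolutions see lip motion as well as lip shape, which is one way such models can pick up the dynamic cues that help separate visemically similar mouth actions.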