Lip reading
Lip reading, also known as lipreading or speechreading, is a technique of understanding speech by visually interpreting the movements of the lips, face and tongue when normal sound is not available, relying also on information provided by the context, knowledge of the language, and any residual hearing. Although lip reading is used most extensively by deaf and hard-of-hearing people, most people with normal hearing also process visual information from the moving mouth, generally at a subconscious level.
Process
In everyday conversation, people with normal vision, hearing and social skills subconsciously use information from the lips and face to aid aural comprehension, and most fluent speakers of a language are able to speechread to some extent (see McGurk effect). This is because each speech sound (phoneme) has a particular facial and mouth position (viseme), and people can to some extent deduce which phoneme has been produced from visual cues, even if the sound is unavailable or degraded (e.g. by background noise).
Lipreading while listening to spoken language provides the redundant audiovisual cues needed to learn language in the first place. Lewkowicz found that babies between 4 and 8 months of age pay special attention to mouth movements when listening to both native and nonnative speech. By about 12 months of age, infants have gathered enough audiovisual experience that they no longer need to look at the mouth when encountering their native language; hearing a nonnative language, however, prompts them to shift attention back to the speaker's mouth, combining lipreading and listening in order to process, understand and produce speech.[1]
Research has shown that, as expected, deaf adults are better at lipreading than hearing adults, owing to their greater practice and heavier reliance on lip reading to understand speech. However, when the same research team conducted a similar study with children, it found that deaf and hearing children have similar lip reading skills. Only after 14 years of age do skill levels between deaf and hearing children begin to differ significantly, indicating that lipreading skill in early life is independent of auditory capability. This may reflect a decline in lip reading ability with age among hearing individuals, or an increase in lip reading efficiency with age among deaf individuals.[2]
Lipreading has been shown to activate not only the visual cortex of the brain but also the auditory cortex, in the same way as when actual speech is heard. Rather than having clear-cut regions dedicated to different senses, the brain works in a multisensory fashion, making a coordinated effort to consider and combine all the types of speech information it receives, regardless of modality. Because hearing captures more articulatory detail than sight or touch, the brain relies on speech sounds to compensate for what the other senses miss.[3]
Speechreading is limited, however, in that many phonemes share the same viseme and thus are impossible to distinguish from visual information alone. Sounds whose place of articulation is deep inside the mouth or throat are not detectable, such as glottal consonants and most gestures of the tongue. Voiced and unvoiced pairs look identical, such as [p] and [b], [k] and [g], [t] and [d], [f] and [v], and [s] and [z]; likewise for nasalisation (e.g. [m] vs. [b]). It has been estimated that only 30% to 40% of sounds in the English language are distinguishable from sight alone.
Thus, for example, the phrase "where there's life, there's hope" looks identical to "where's the lavender soap" in most English dialects. Author Henry Kisor titled his book What's That Pig Outdoors?: A Memoir of Deafness in reference to misreading the question "What's that big loud noise?", and used the example in the book to discuss the shortcomings of speechreading.[4]
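This ambiguity can be illustrated with a small sketch. The Python example below uses a simplified, hypothetical phoneme-to-viseme mapping (real viseme inventories vary between researchers; the class names and groupings here are assumptions made only for illustration) to show how words that differ only in voicing or nasality at the lips, such as "pat", "bat" and "mat", collapse into the same visual sequence.

```python
# A minimal sketch, not a standard mapping: group some English phonemes into
# viseme classes to show why many sounds are visually indistinguishable.
# The labels and groupings below are simplified assumptions for illustration.

VISEME_CLASSES = {
    # Bilabials: lips pressed together look the same whether voiced,
    # voiceless or nasal.
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    # Labiodentals: upper teeth on the lower lip.
    "f": "labiodental", "v": "labiodental",
    # Alveolars: tongue tip behind the teeth, barely visible from outside.
    "t": "alveolar", "d": "alveolar", "n": "alveolar",
    "s": "alveolar", "z": "alveolar",
    # Velars and glottals: articulated too far back in the mouth to see.
    "k": "back", "g": "back", "h": "back",
}

def to_visemes(phonemes):
    """Collapse a phoneme sequence into the viseme classes a lipreader sees."""
    return [VISEME_CLASSES.get(p, p) for p in phonemes]

# "pat", "bat" and "mat" differ only at the lips in voicing/nasality,
# so their viseme sequences come out identical:
print(to_visemes(["p", "a", "t"]))  # ['bilabial', 'a', 'alveolar']
print(to_visemes(["b", "a", "t"]))  # ['bilabial', 'a', 'alveolar']
print(to_visemes(["m", "a", "t"]))  # ['bilabial', 'a', 'alveolar']
```

Whole phrases built from such confusable sounds, like the examples above, therefore produce nearly identical lip movements.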
As a result, a speechreader must depend heavily on cues from the environment and the context of the communication, and on knowledge of what is likely to be said. It is much easier to speechread customary phrases such as greetings, or a connected discourse on a familiar topic, than utterances that appear in isolation and without supporting information, such as the name of a person never met before.
Difficult scenarios in which to speechread include:
- Lack of a clear view of the speaker's lips. This includes:
  - obstructions such as moustaches or hands in front of the mouth
  - the speaker's head turned aside or away
  - a dark environment
  - a bright back-lighting source, such as a window behind the speaker, which darkens the face
- Group discussions, especially when multiple people are talking in quick succession; the challenge here is knowing where to look.
- Use of an unusual tone or rhythm of speech by the speaker.
Tips for lip reading
Lip reading, also known as speechreading, is difficult because only about 30% of speech is visible; the other 70% must be inferred from context clues. There are, however, some things that can be done to make the process easier. Learning to lip read is like learning to read a book: a novice lip reader concentrates on each sound and may miss the meaning. Lip reading is more effective when the message is taken in as a whole rather than sound by sound.
- Make sure you can see the speaker’s face clearly.
- Hold the conversation in a quiet environment with good lighting and few visual distractions.
- Make sure the light is behind you, not behind the person you are trying to lip read.
- Gently remind people that you need to see their face when they forget and look down or away from you.
- Ask for the topic of the conversation, if you are not sure.
- If the speaker exaggerates their mouth movements or talks too loudly, ask them to speak normally.
- Remind speakers to move their hands or other objects away from their face.
- If you still don’t understand after a repetition, ask the speaker to rephrase.[5]
Lipreading is a skill that is easier to develop in those who have experience with spoken language. In one study by Tonya R. Bergeson, adults who became deaf progressively were able to read lips much better than those who became deaf suddenly.[6]
Learning to lip read
Lip reading can be taught, but infants naturally begin to lip read between the ages of 6 and 12 months. In order to imitate, a baby must learn to shape their lips in accordance with the sounds they are hearing.[7] Even newborns have been shown to imitate adult mouth movements such as sticking out the tongue or opening the mouth, which could be a precursor to further imitation and lip reading abilities.[8] Infants as young as 4 months can connect visual and auditory information, which helps when learning to lip read. For example, one study showed that infants tend to look longer at a visual stimulus that corresponds to an auditory stimulus they hear from a recording.[9]
Newer studies suggest that aspects of lip reading may indicate signs of autism. Research from Florida Atlantic University compared groups of infants (ages four to 12 months) with a group of adults in a test of lip reading abilities. The study examined the shift babies make between watching the eyes and the mouth of people speaking at different developmental stages. At four months of age, infants typically focus their attention on the eyes. Between six and eight months, during the "babbling" stage of language acquisition, they shift their focus to the mouth of the speaker. They continue lip reading until about 10 months of age, at which point they switch their attention back to the eyes. The researchers suggest that the second stage relates to the emergence of speech and a growing ability to understand "social cues, shared meanings, beliefs and desires", according to professor of psychology David J. Lewkowicz.[10] When hearing a language different from their native one, babies revert their attention to the mouth regardless of which stage of language acquisition they have reached, and continue to lip read until about 12 months of age. Although more research is needed to support the claim, the data suggest that "the infants who continue to focus most of their attention on the mouth past 12 months of age are probably not developing the age-appropriate perceptual and cognitive skills and thus may be at risk for disorders like autism".
While lip reading is a natural ability that develops in babies at a young age, people can be taught to lip read and to become better lip readers. There are even trainers and teachers who can aid people learning to lip read and help them focus on certain context cues. Here are several ways lip reading can be taught or improved:[11]
- training your eyes to help your ears
- watching the movements of the mouth, teeth and tongue
- reading the expression on the face
- noticing body language and gestures
- using residual hearing
- anticipation
Use of speechreading by deaf people
Speechreaders who have grown up deaf may never have heard the spoken language and are unlikely to be fluent users of it, which makes speechreading much more difficult. They must also learn the individual visemes by conscious training in an educational setting. In addition, speechreading takes a lot of focus, and can be extremely tiring. For these and other reasons, many deaf people prefer to use other means of communication with non-signers, such as mime and gesture, writing, and sign language interpreters.
To quote from Dorothy Clegg's 1953 book The Listening Eye,[12] "When you are deaf you live inside a well-corked glass bottle. You see the entrancing outside world, but it does not reach you. After learning to lip read, you are still inside the bottle, but the cork has come out and the outside world slowly but surely comes in to you." This view—that speechreading, though difficult, can be successful—is relatively controversial within the deaf world; for an incomplete history of this debate, see manualism and oralism.
When talking with a deaf person who uses speechreading, exaggerated mouthing of words is not considered to be helpful and may in fact obscure useful clues. However, it is possible to learn to emphasize useful clues; this is known as "lip speaking".
Speechreading may be combined with cued speech—movements of the hands that visually represent otherwise invisible details of pronunciation. One argument in favor of cued speech is that it helps develop lip-reading skills that remain useful even when cues are absent, i.e., when communicating with people who are neither deaf nor hard of hearing.
Cued speech helps to resolve speechreading ambiguities, and combining lipreading with cued speech brings greater clarity and accuracy to the understanding of spoken sentences. Dr. R. Orin Cornett, the inventor of cued speech, was known for his work at Gallaudet University in Washington, D.C., before his death in 2002. In one study he tested 18 profoundly deaf children, each with at least four years of cued speech instruction, on their understanding of sentences presented in different ways (e.g. cued speech alone, lipreading alone, and cued speech combined with lipreading).[13] His research showed that comprehension of language could be improved to as much as 95% when lipreading was combined with cued speech, a significant increase over the roughly 30% of words understood by lipreading alone; likewise, a person who was listening as well as lipreading and receiving cues showed increased understanding of the sentences. Thus, a deaf person can considerably augment their understanding of spoken sentences by relying on lipreading and cued speech together.
See also
- Audio-visual speech recognition
- Forensic speechreading
- Motor theory of speech perception
- Mouthing
- Read My Lips (disambiguation)
- Reading (process)
- Silent speech interface
- Ventriloquism
- Visual capture
- Cued Speech
References
Notes
1. Lewkowicz, David J.; Hansen-Tift, Amy M. (2011). "Infants deploy selective attention to the mouth of a talking face when learning speech". Proceedings of the National Academy of Sciences of the United States of America 109: 1431–1436. doi:10.1073/pnas.1114783109.
2. "Read my lips - Advances in speechreading research with deaf children". ESRC Deafness Cognition and Language Research Centre.
3. Calvert, Gemma A.; Edward T. Bullmore; Michael J. Brammer; Ruth Campbell; Steven C. R. Williams; Philip K. McGuire; Peter W. R. Woodruff; Susan D. Iversen; Anthony S. David (25 April 1997). "Activation of Auditory Cortex During Silent Lipreading". Science 276 (5312): 593–596. doi:10.1126/science.276.5312.593. Retrieved 23 September 2013.
4. Kisor, Henry (2010). What's That Pig Outdoors?: A Memoir of Deafness. University of Illinois Press.
5. "e-Michigan Deaf and Hard of Hearing". Speechreading. Retrieved 18 September 2013.
6. Bergeson, TR; Pisoni, DB; Reese, L; Kirk, KI (2003). "Audiovisual Speech Perception in Adult Cochlear Implant Users: Effects of Sudden vs. Progressive Hearing Loss". Daytona Beach, Florida: MidWinter Meeting of the Association for Research in Otolaryngology.
7. "Babies Learn To Talk By Reading Lips, New Research Suggests". The Huffington Post. January 16, 2012.
8. Meltzoff, AN; Moore, MK (June 1983). "Newborn infants imitate adult facial gestures". Child Development 54 (3): 702–709. doi:10.1111/j.1467-8624.1983.tb00496.x. JSTOR 1130058.
9. Spelke, Elizabeth (October 1976). "Infants' intermodal perception of events". Cognitive Psychology 8 (4): 553–560. doi:10.1016/0010-0285(76)90018-9.
10. http://www.nbcnews.com/health/babies-learn-speak-lip-reading-could-offer-autism-clues-1C6436520
11. "Lipreading". Hearing Link.
12. Clegg, Dorothy (1953). The Listening Eye: A Simple Introduction to the Art of Lip-reading. Methuen & Company.
13. http://www.cuedspeech.org.uk/cued-speech-history
External links
- CSAIL: Articulatory Feature Based Visual Speech Recognition - a project to develop a visual speech recognition system that models visual speech in terms of the underlying articulatory processes.
- An MRI video of a person speaking, showing tongue movements not visible to a lip reader.