Spatial hearing loss
Spatial hearing loss, also known as spatial processing deficit, is an inability to use spatial cues (that is, where a sound originates in space) to understand speech in the presence of background noise (Cameron & Dillon, 2008).[1]
Overview
People with spatial hearing loss have difficulty processing speech that arrives from one direction while simultaneously filtering out noise arriving from other directions. Spatial hearing loss is not caused by peripheral hearing loss and is thought to arise in the auditory pathways of the brain. Research has shown spatial hearing loss to be a leading cause of central auditory processing disorder (CAPD) in children, who commonly present with difficulty understanding speech in the classroom (Cameron & Dillon, 2008).[1] Spatial hearing loss is found in most people over 70 years of age and is independent of other types of age-related hearing loss.[2] As with presbycusis, spatial hearing ability varies with age: through childhood and into adulthood it improves (a spatial hearing gain, making it easier to hear speech in noise), while from middle age onwards it declines (a spatial hearing loss, making it harder again to hear speech in noise).
Listeners without spatial hearing loss can use the differences between the signals arriving at the two ears to perceive noise as originating from a different location than the speech they are listening to. The central auditory system of normal listeners creates an auditory scene from the phase (timing) and level differences between the two ear signals.[3] Auditory streams are thought to be located in space first, after which one stream is selected. A gain mechanism can then be employed that enhances the attended speech stream and suppresses the speech stream(s) being ignored (Kerlin et al., 2010).[4] Those with spatial hearing loss are unable to separate the auditory streams and so cannot employ the gain mechanism that is essential for making sense of speech in noise.
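To make the binaural timing cue concrete, the minimal sketch below (an illustration only, not taken from the cited studies) uses Woodworth's spherical-head approximation with an assumed average head radius to show how the interaural time difference grows as a sound source moves away from the midline. A masker perceived off to one side therefore produces timing differences at the two ears that a frontal speech target does not, which is the raw material the gain mechanism described above operates on.

```python
# Illustrative sketch only: the interaural time difference (ITD) is one of the
# binaural cues referred to above. Woodworth's spherical-head approximation
# gives ITD as a function of source azimuth; head radius and speed of sound
# are assumed typical values, not values from the cited studies.
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius (m)
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def itd_seconds(azimuth_deg: float) -> float:
    """ITD for a source at the given azimuth (0 deg = straight ahead,
    90 deg = directly to one side), using the Woodworth approximation."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)

for az in (0, 30, 60, 90):
    print(f"azimuth {az:3d} deg -> ITD {itd_seconds(az) * 1e6:6.0f} microseconds")
```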
Role of the corpus callosum
Many neuroscience studies have facilitated the development and refinement of a speech processing model. This model shows cooperation between the two hemispheres of the brain, with asymmetric interhemispheric and intrahemispheric connectivity consistent with the left hemisphere's specialization for phonological processing.[5] The right hemisphere is more specialized for sound localization,[6] while the representation of auditory space in the brain requires the integration of information from both hemispheres.[7]
The corpus callosum (CC) is the major route of communication between the two hemispheres. At maturity it is a large mass of white matter, consisting of bundles of fibres linking the white matter of the two cerebral hemispheres. Its caudal portion and splenium contain fibres that originate from the primary and secondary auditory cortices and from other auditory responsive areas.[8] Transcallosal interhemispheric transfer of auditory information plays a significant role in spatial hearing functions that depend on binaural cues.[9] Various studies have shown that, despite normal audiograms, children with known auditory interhemispheric transfer deficits have particular difficulty localizing sound and understanding speech in noise.[10]
The human CC is relatively slow to mature, continuing to increase in size until the fourth decade of life, after which it slowly begins to shrink.[11] LiSN-S speech reception threshold (SRT) scores show that the ability to understand speech in noisy environments develops with age, beginning to become adult-like by 18 years and starting to decline between 40 and 50 years of age.[12]
Diagnosis
Spatial hearing loss can be diagnosed using the Listening in Spatialized Noise – Sentences test (LiSN-S),[13] which was designed to assess the ability of children with central auditory processing disorder (CAPD) to understand speech in background noise. The LiSN-S allows audiologists to measure how well a person uses spatial and pitch information to understand speech in noise. Inability to use spatial information has been found to be a leading cause of CAPD in children and is referred to as spatial hearing loss or spatial processing disorder (Cameron & Dillon, 2008).[1]
Test participants repeat a series of target sentences presented simultaneously with competing speech. The listener's speech reception threshold (SRT) for the target sentences is calculated using an adaptive procedure. The target sentences are perceived as coming from directly in front of the listener, whereas the perceived spatial location of the distracters varies (either directly in front of, or to either side of, the listener). The vocal identity of the distracters also varies (either the same as, or different from, the speaker of the target sentences) (Cameron & Dillon, 2009).[13]
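The LiSN-S uses its own adaptive procedure; as a rough illustration of how an adaptive track can converge on an SRT, the sketch below implements a generic one-up/one-down staircase against a simulated listener. The step size, starting signal-to-noise ratio, and the simulated listener's "true" SRT are all invented for the example and are not LiSN-S parameters.

```python
# Minimal sketch of a generic adaptive staircase for estimating a speech
# reception threshold (SRT). This is NOT the actual LiSN-S procedure; the
# step size, starting SNR and simulated listener below are illustrative.
import random
import statistics

def simulated_listener(snr_db: float, true_srt_db: float = -12.0) -> bool:
    """Pretend listener: more likely to repeat the sentence correctly
    the further the presented SNR is above their (assumed) true SRT."""
    p_correct = 1.0 / (1.0 + 10 ** (-(snr_db - true_srt_db) / 4.0))
    return random.random() < p_correct

def estimate_srt(n_trials: int = 30, start_snr_db: float = 0.0,
                 step_db: float = 2.0) -> float:
    snr = start_snr_db
    history = []                      # SNR presented on each trial
    for _ in range(n_trials):
        history.append(snr)
        if simulated_listener(snr):
            snr -= step_db            # correct -> make the task harder
        else:
            snr += step_db            # incorrect -> make it easier
    # Average the later trials, once the track has converged near threshold.
    return statistics.mean(history[n_trials // 2:])

print(f"Estimated SRT: {estimate_srt():.1f} dB SNR")
```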
Performance on the LiSN-S is evaluated by comparing listeners' performance across four listening conditions, generating two SRT measures and three "advantage" measures. The advantage measures represent the benefit, in dB, gained when talker cues, spatial cues, or both talker and spatial cues are available to the listener. The use of advantage measures minimizes the influence of higher-order skills on test performance (Cameron & Dillon, 2008),[1] which serves to control for the inevitable differences between individuals in functions such as language or memory.
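As a hypothetical arithmetic illustration of how such advantage measures could be derived (the condition names and SRT values below are invented for the example, not normative LiSN-S data), each advantage is simply the difference in dB between the SRT obtained with neither cue available and the SRT obtained once the relevant cue(s) are added:

```python
# Hypothetical illustration of deriving advantage measures from SRTs.
# The four conditions and SRT values are invented examples; lower (more
# negative) SRTs indicate better performance, so a positive difference
# represents a benefit from the added cue(s).
srt = {
    "same_voice_0deg":        -3.0,  # no talker cue, no spatial cue
    "different_voice_0deg":   -8.0,  # talker cue only
    "same_voice_90deg":      -12.0,  # spatial cue only
    "different_voice_90deg": -15.0,  # both cues
}

talker_advantage  = srt["same_voice_0deg"] - srt["different_voice_0deg"]   # 5 dB
spatial_advantage = srt["same_voice_0deg"] - srt["same_voice_90deg"]       # 9 dB
total_advantage   = srt["same_voice_0deg"] - srt["different_voice_90deg"]  # 12 dB

print(talker_advantage, spatial_advantage, total_advantage)
```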
Dichotic listening tests can be used to measure the interhemispheric transfer of auditory information. A typical test presents two different auditory stimuli (usually speech) simultaneously, one to each ear, through a set of headphones, and participants are asked to attend to one or (in a divided-attention test) both of the messages.[14] Dichotic listening performance improves (and the right-ear advantage decreases) as the CC develops, peaking before the third decade of life. From middle age onwards the CC reduces in size and dichotic listening performance declines, primarily in the left ear.
Research
Research has shown that PC-based spatial hearing training software can help some of the children identified as failing to develop their spatial hearing skills (Cameron & Dillon, 2011).[15] Further research is needed to discover whether a similar approach could help those over 60 recover their lost spatial hearing. Related research into white-matter plasticity (see, for example, Lövdén et al.)[16] suggests some recovery may be possible.
Musical training has been associated with better understanding of speech in noise across age groups, and musical experience appears to protect against age-related degradation in neural timing (Parbery-Clark et al., 2012).[17] Further research is needed to explore the ability of music to promote neural resilience across the lifespan.
Bilateral digital hearing aids do not preserve localization cues (see, for example, Van den Bogaert et al., 2006).[18] This means that audiologists fitting hearing aids to patients with a mild to moderate age-related loss risk negatively affecting their spatial hearing ability. For patients who feel that difficulty understanding speech in background noise is their primary hearing problem, hearing aids may simply make the problem worse: their spatial hearing gain will be reduced by roughly 10 dB. Although further research is needed, a growing number of studies have shown that open-fit hearing aids are better able to preserve localization cues (see, for example, Alworth, 2011).[19]
See also
- Audism, discrimination against Deaf and hard-of-hearing people
- Cocktail party effect
- Deafblind
- Hearing aid
- National Association for the Deaf (NAD)
- Spatial hearing
References
- ↑ 1.0 1.1 1.2 1.3 Cameron, S & Dillon, H (2008). The Listening in Spatialized Noise – Sentences Test: Comparison to prototype LISN test and results from children with either a suspected (central) auditory processing disorder or a confirmed language disorder. Journal of the American Academy of Audiology, 19(5).
- ↑ D. Robert Frisina, Robert D. Frisina, Speech recognition in noise and presbycusis: relations to possible neural mechanisms, Hearing Research, Volume 106, Issues 1-2, April 1997.
- ↑ Adelbert W. Bronkhorst, The Cocktail Party Phenomenon: A Review of Research on Speech Intelligibility in Multiple-Talker Conditions, Acta Acustica united with Acustica (January 2000), pp. 117-128.
- ↑ Kerlin J, Shahin A and Miller L, Attentional Gain Control of Ongoing Cortical Speech Representations in a “Cocktail Party”, Journal of Neuroscience, 2010, 30(2).
- ↑ Bidirectional connectivity between hemispheres occurs at multiple levels in language processing, but depends on sex. Bitan et al. Journal of Neuroscience, 2010, 30(35)
- ↑ Hemispheric competence for auditory spatial representation. Spierer et al. Brain 2009,132
- ↑ Mechanisms of Sound Localization in Mammals. Grothe et al. Physiol Rev 2010, 90.
- ↑ Age-related regional variations of the corpus callosum identified by diffusion tensor tractography. Lebel C, Caverhill-Godkewitsch S, Beaulieu C., Neuroimage. 2010 Aug 1;52(1):20-31.
- ↑ Sound lateralization in subjects with callosotomy, callosal agenesis, or hemispherectomy. Hausmann M, Corballis MC, Fabri M, Paggi A, Lewald J., Brain Res Cogn Brain Res. 2005 Oct;25(2):537-46.
- ↑ Auditory interhemispheric transfer deficits, hearing difficulties, and brain magnetic resonance imaging abnormalities in children with congenital aniridia due to PAX6 mutations. Bamiou DE et al., Arch Pediatr Adolesc Med. 2007 May;161(5).
- ↑ Microstructural changes and atrophy in brain white matter tracts with aging. Sala S, Agosta F, Pagani E, Copetti M, Comi G, Filippi M. Neurobiology of Aging, 2012 Mar;33(3):488-498.
- ↑ The effects of hearing impairment and aging on spatial processing. Glyde H, Cameron S, Dillon H, Hickson L, Seeto M. Ear & Hearing. 34(1):15-28, January/February 2013.
- ↑ 13.0 13.1 "LiSN-S, Cameron & Dillon, 2009". Nal.gov.au. 2011-05-02. Retrieved 2011-07-02.
- ↑ Perspectives on dichotic listening and the corpus callosum, Musiek FE, Weihing J., Brain Cogn. 2011 Jul;76(2):225-32.
- ↑ Development and Evaluation of the LiSN & Learn Auditory Training Software for Deficit-Specific Remediation of Binaural Processing Deficits in Children: Preliminary Findings. Cameron S, Dillon H., J Am Acad Audiol. 2011 Nov;22(10).
- ↑ Experience-dependent plasticity of white-matter microstructure extends into old age. Lövdén et al., Neuropsychologia. 2010 Nov;48(13).
- ↑ Musical experience offsets age-related delays in neural timing, Parbery-Clark et al., Neurobiol Aging 33(7), July 2012
- ↑ Horizontal localization with bilateral hearing aids: Without is better than with, Van den Bogaert et al., J. Acoust. Soc. Am. 119 (1), January 2006.
- ↑ Effect of Occlusion, Directionality and Age on Horizontal Localization, Alworth L., Doctoral Dissertation, 2011
External links
- http://www.nal.gov.au
- Binaural Hearing Aids https://www.youtube.com/watch?v=c8h2LBPjRSk
- Hearing in Noisy Places - The Inside Story https://www.youtube.com/watch?v=kuKMjvRjQFs
- The Interaural Time Difference (ITD) - How it determines the direction of sound http://www.youtube.com/watch?v=CuYNFv2Oc08