Augmentative and alternative communication

An AAC user indicates a series of numbers on an eye gaze communication board in order to convey a word.

Augmentative and alternative communication (AAC) is an umbrella term that encompasses the communication methods used to supplement or replace speech or writing for those with impairments in the production or comprehension of spoken or written language. AAC is used by those with a wide range of speech and language impairments, including congenital impairments such as cerebral palsy, intellectual impairment and autism, and acquired conditions such as amyotrophic lateral sclerosis and Parkinson's disease. AAC can be a permanent addition to a person's communication or a temporary aid.

Modern use of AAC began in the 1950s with systems for those who had lost the ability to speak following surgical procedures. During the 1960s and 1970s, spurred by an increasing commitment in the West towards the inclusion of disabled individuals in mainstream society and developing the skills required for independence, the use of manual sign language and then graphic symbol communication grew greatly. It was not until the 1980s that AAC began to emerge as a field in its own right. Rapid progress in technology, including microcomputers and speech synthesis, has paved the way for communication devices with speech output and multiple options for access to communication for those with physical disabilities.

AAC systems are diverse: unaided communication uses no equipment and includes signing and body language, while aided approaches use external tools.[1] Aided communication methods can range from paper and pencil to communication books or boards to devices that produce voice output (speech generating devices, or SGDs) and/or written output. The symbols used in AAC include gestures, photographs, pictures, line drawings, letters and words, which can be used alone or in combination. Body parts, pointers, adapted mice, or eye tracking can be used to select target symbols directly, and switch access scanning is often used for indirect selection. Message generation is generally much slower than spoken communication, and as a result rate enhancement techniques may be used to reduce the number of selections required. These techniques include "prediction", in which the user is offered guesses of the word/phrase being composed, and "encoding", in which longer messages are retrieved using a prestored code.

The evaluation of a user's abilities and requirements for AAC will include the individual's motor, visual, cognitive, language and communication strengths and weaknesses. The evaluation requires the input of family members, particularly for early intervention. Respecting ethnicity and family beliefs is key to a family-centered and culturally competent approach. Studies show that AAC use does not impede the development of speech, and may result in a modest increase in speech production. Users who have grown up with AAC report satisfying relationships and life activities; however, they may have poor literacy and are unlikely to be in employment.

The Convention on the Rights of Persons with Disabilities includes augmentative and alternative communication among the forms of communication it recognizes, alongside display of text, large print, tactile communication, plain language, accessible multimedia, and accessible information and communications technology.[2]

Scope

Augmentative and alternative communication is used by individuals to compensate for severe speech-language impairments in the expression or comprehension of spoken or written language.[3][4] People making use of AAC include individuals with a variety of congenital conditions such as cerebral palsy, autism and intellectual disability, and acquired conditions such as amyotrophic lateral sclerosis, traumatic brain injury and aphasia.[5] Prevalence data vary depending on the country and age/disabilities surveyed, but typically between 0.1% and 1.5% of the population are considered to have such severe speech-language impairments that they have difficulty making themselves understood, and thus could benefit from AAC.[5][6] An estimated 0.05% of children and young people require high technology AAC.[7] Well-known AAC users include physicist Stephen Hawking, broadcaster Roger Ebert and poet Christopher Nolan.[8][9] Award-winning films such as My Left Foot and The Diving Bell and the Butterfly, based on books by AAC users Christy Brown and Jean-Dominique Bauby respectively, have brought the lives of those who use AAC to a wider audience.[10][11][12]

The field was originally called "Augmentative Communication"; the term served to indicate that such communication systems were to supplement natural speech rather than to replace it. The addition of "alternative" followed later, when it became clear that for some individuals non-speech systems were their only means of communication.[13] AAC users typically utilize a variety of aided and unaided communication strategies depending on the communication partners and the context.[14]

Forms of AAC

Unaided AAC

Sign for "interpreter" in Quebec Sign Language

Unaided AAC systems are those that do not require an external tool, and include facial expression, vocalizations, gestures, and sign languages and systems.[15][16] Informal vocalizations and gestures such as body language and facial expressions are part of natural communication, and such signals may be used by those with profound disabilities.[17] More formalized gestural codes exist that lack a base in a naturally occurring language. For example, the Amer-Ind code is based on Plains Indian Sign Language, and has been used with children with severe-profound disabilities, and adults with a variety of diagnoses including dementia, aphasia and dysarthria.[18] The benefits of gestures and pantomime are that they are always available to the user, usually understood by an educated listener, and are efficient means of communicating.[19]

In contrast, sign languages have a linguistic base and permit the expression of an unlimited number of messages.[18] Approaches to signing can be divided into two major categories: those that encode an existing language, and those that are languages in their own right.[20] Signing Exact English may be considered the most widely used example of the former and American Sign Language a common example of the latter.[20] Signing is used alone or in conjunction with speech to support communication with individuals with a variety of disorders.[21] The specific hand shapes and movements of sign and gesture require an individual to have adequate fine motor and motor planning skills.[15][22] Sign languages require more fine-motor coordination and are less transparent in meaning than gestural codes such as Amer-Ind; this lower transparency limits the number of people able to understand the person's communication without training.[23][24][25]

Aided AAC

This communication board, showing a food category, is a low-tech AAC aid.
This speech generating device, showing available categories in a grid layout, is a high-tech AAC aid.

An AAC aid is any "device, either electronic or non-electronic, that is used to transmit or receive messages";[5] such aids range from communication books to speech generating devices.[15] Since the skills, areas of difficulty and communication needs of AAC users vary greatly, an equally diverse range of communication aids and devices is required.[26]

Low-tech

Low-tech communication aids are defined as those that do not need batteries, electricity or electronics. These are often very simple communication boards or books, from which the user selects letters, words, phrases, pictures, and/or symbols to communicate a message.[27] Depending on physical abilities and limitations, users may indicate the appropriate message with a body part, light pointer, eye-gaze direction, or a head/mouth stick. Alternatively, they may indicate yes or no while a listener scans through possible options.[28]

High-tech

High-tech AAC aids permit the storage and retrieval of electronic messages, with most allowing the user to communicate using speech output.[29] Such devices are known as speech generating devices (SGD) or voice output communication aids (VOCA).[30] A device's speech output may be digitized and/or synthesized: digitized systems play recorded words or phrases and are generally more intelligible, while synthesized speech uses text-to-speech software that can be harder to understand but that permits the user to spell words and speak novel messages.[30][31]

High-tech systems may be dedicated devices developed solely for AAC, or non-dedicated devices such as computers that run additional software to allow them to function as AAC devices.[29][32] They may be static or dynamic in form. Static communication devices have symbols in fixed positions on paper overlays, which are changed manually. To increase the vocabulary available, some static devices have multiple levels, with different words appearing on different levels.[33] On dynamic AAC devices, the user can change the symbols available using page links to navigate to appropriate pages of vocabulary and messages.[34][35]

High-tech devices vary in the amount of information that they can store, as well as their size, weight and thus their portability.[36] Access methods depend on the abilities of the user, and may include direct selection of symbols on the screen or keyboard with a body part, pointer, adapted mouse or joystick, or indirect selection using switches and scanning.[32][37] Devices with voice output offer their users the advantage of more communicative power, including the ability to initiate conversation with communication partners who are at a distance.[38] However, they typically require programming[38] and tend to be unreliable. For this reason, low-tech systems are often recommended as a backup in case of device failure.[32][39][40]

Symbols

This Blissymbol represents the word "govern". The inverted "v" indicates that the symbol is a verb.

Symbols used on high and low-tech AAC systems include graphic, auditory, gestural and textural symbols to represent objects, actions and concepts.[5][41] For users with literacy skills, both low and high-tech devices may use alphabet-based symbols including individual letters, whole words, or parts thereof.[42] With low-tech devices, the communication partner must interpret the symbols chosen whereas a high-tech device can speak the created message aloud.[42] Several large graphic symbol sets have been developed; these include Blissymbols, which possess linguistic characteristics such as grammatical indicators, and the more iconic Picture Communication Symbols (PCS) which do not.[43][44] Tactile symbols are textured objects, real objects or parts of real objects that are used as communication symbols, particularly for individuals with visual impairments and/or significant intellectual impairments.[45][46] Auditory symbols such as choices of spoken words or Morse code can also be integrated with assistive technology for the visually impaired.[45][47][48]

The choice of symbols and aspects of their presentation, such as size and background, depend on an individual's preferences as well as their linguistic, visual, and cognitive skills.[45][46][49]

Access and selection methods

An AAC user indicates choices by eye gaze on a transparent letter-based communication board held by a communication partner; this is a form of "Direct Selection".

Technological advances have dramatically increased the types of selection methods available for individuals with communication impairments.[50] In "Direct Selection", the selection is made by pointing to the desired symbol using a finger or an alternative pointer, such as eye gaze, a head stick, or a head- or eye-controlled mouse. To accommodate motor control difficulties, some users use alternative activation strategies; for example, in "timed activation", the user maintains selection of the symbol for a predetermined period of time until it is recognized by the system. With "release activation", the selection of the item is only made when the person releases contact from the display.[51]
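As an illustration of timed ("dwell") activation, the minimal Python sketch below selects a symbol only after the pointer has rested on it for a fixed dwell time. It is a hedged example rather than any device's actual implementation: the get_pointed_symbol callback, the polling interval and the one-second threshold are all assumptions made for illustration.

```python
import time

DWELL_SECONDS = 1.0        # hypothetical dwell threshold before a symbol is accepted
POLL_INTERVAL = 0.05       # how often the pointer position is sampled (assumed)

def dwell_select(get_pointed_symbol):
    """Return a symbol once the user has dwelt on it for DWELL_SECONDS."""
    current, started = None, None
    while True:
        symbol = get_pointed_symbol()   # e.g. symbol under an eye-gaze or head-mouse pointer
        if symbol != current:           # pointer moved: restart the dwell timer
            current, started = symbol, time.monotonic()
        elif symbol is not None and time.monotonic() - started >= DWELL_SECONDS:
            return symbol               # held long enough: accept the selection
        time.sleep(POLL_INTERVAL)
```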

Direct activation of an AAC system is generally the first choice of access method as it is faster and cognitively easier.[52] Those unable to do so may use indirect selection or "scanning". In this method, items displayed for selection are scanned; the scanning may be visual using indicators such as lights, highlighting, and/or contrasting borders, or auditory using spoken prompts from a communication partner or device. When the desired message is reached, the AAC user indicates the choice using an alternative selection technique such as a switch, vocalization or gesture.[53][54] Several different patterns for switch access scanning are available: in "circular scanning", the items are displayed in a circle and then scanned one at a time. It is often introduced first to children or beginning AAC users because it is the easiest to understand. In "linear scanning", items are organized in rows and are scanned one at a time until a choice is made. Although more demanding than circular scanning, it is still easy to learn. Finally, in "group-item scanning", items are grouped and the groups scanned consecutively. Once a particular group is selected, items within the group are scanned. One of the most common group-item strategies is row-column scanning in which each row forms a group. The rows of items are scanned and when a row is selected, the items in the row are scanned one at a time until a message is selected.[53]

There are three main selection control techniques in scanning. In "automatic scanning", the scan proceeds at a pre-determined speed and pattern until the user selects an item. In "inverse scanning", the switch is held down to advance the scan, and released to choose the desired item. In "step scanning", the AAC user activates one switch to move the indicator through the items, and another switch to select the item.[53]
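The scanning patterns described above are, at their core, simple traversal loops. The following Python sketch gives one hypothetical illustration of single-switch row-column scanning with automatic timing; the grid contents, the delay, and the switch_pressed callback are assumptions for the example, not the interface of any real device.

```python
import time

GRID = [["yes", "no", "more"],          # hypothetical vocabulary grid
        ["eat", "drink", "rest"],
        ["happy", "sad", "help"]]
SCAN_DELAY = 1.0                        # seconds each row or item stays highlighted (assumed)

def row_column_scan(switch_pressed):
    """Return the item chosen with a single switch using row-column scanning."""
    while True:
        for row in GRID:                # first pass: highlight one row at a time
            print("Row:", row)
            time.sleep(SCAN_DELAY)
            if switch_pressed():        # first switch hit selects the row
                for item in row:        # second pass: scan the items within that row
                    print("Item:", item)
                    time.sleep(SCAN_DELAY)
                    if switch_pressed():
                        return item     # second switch hit selects the item
```

In this sketch a missed selection simply lets the scan continue to the next row; real devices add refinements such as repeat limits, adjustable scan speeds and auditory prompts.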

Vocabulary organization

Vocabulary organization refers to the way pictures, words, phrases, and sentences are displayed on the communication system.[55] In general, the goal is to facilitate efficient and effective communication, especially when the individual's AAC system contains a large number of symbols.[25]

A static digitized speech communication device with a book reading (activity-based) overlay; cards carrying other symbol sets can be inserted for different activities.

Communication books and devices are often presented in a grid format;[56] the vocabulary items displayed within them may be organized by spoken word order, frequency of usage or category. In the Fitzgerald Key organization, symbols from different semantic and syntactic classes are organized in groups from left to right to facilitate sentence construction.[57] Since research has shown that children and adults use a small number of words frequently,[58][59] in a core-fringe vocabulary organization, the words and messages that are communicated most frequently appear on a "main page". The fringe vocabulary—words and messages used more rarely and that are specific to an individual—appear on other pages.[60] Symbols may also be organized by category, grouping people, places, feelings, foods, drinks, and action words together.[56] Another form of grid organization groups vocabulary according to specific activities.[61] Each display contains symbols for the people, places, objects, feelings, actions, and other relevant vocabulary items for a specific activity or routine.[62]
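The core-fringe and page-link organizations described above can be pictured as a small linked data structure. The Python sketch below is a hypothetical illustration only; the page names and vocabulary are invented for the example and do not come from any particular AAC system.

```python
# Hypothetical core-fringe vocabulary organized as linked pages.
PAGES = {
    "main": {                                   # core page: frequently used words
        "words": ["I", "you", "want", "more", "stop", "go", "like", "help"],
        "links": ["food", "school"],            # page links to fringe vocabulary
    },
    "food":   {"words": ["apple", "juice", "sandwich", "pizza"], "links": ["main"]},
    "school": {"words": ["teacher", "book", "recess", "homework"], "links": ["main"]},
}

def show_page(name):
    """Print the symbols and navigation links available on one page of the display."""
    page = PAGES[name]
    print(f"{name}: words={page['words']} links={page['links']}")

show_page("main")   # core page shown first
show_page("food")   # fringe page reached via a page link
```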

Speech generating device using a visual scene display, accessed using a head mouse

Visual scene displays are a different method of organizing and presenting symbols. These are depictions of events, people, objects, and related actions in a picture, photograph, or virtual environment representing a situation, place, or specific experience.[63][64] They are similar to activity displays in that they contain vocabulary that is associated with specific activities or routines. For example, a photo of a child's room may be included in the child's AAC system. Objects and events within the photograph are then used as symbols for communication.[62] Research suggests that visual scene displays are easier than grid displays for young children or those with cognitive impairments to learn and use.[50][65]

Rate enhancement strategies

Augmentative and alternative communication is typically much slower than speech,[66] with users generally producing 8–10 words per minute.[50] Rate enhancement strategies can increase the user's rate of output to around 12–15 words per minute,[50] and as a result enhance the efficiency of communication. There are two main options for increasing the rate of communication: encoding and prediction.[66]

Encoding is a technique permitting an AAC user to produce an entire word, sentence or phrase using only one or two activations of their AAC system.[66] In numeric, alpha-numeric, and letter encoding (also known as abbreviation-expansion), words and sentences are coded as sequences of letters and numbers. For example, typing "HH" may retrieve "Hello, how are you?".[67] In iconic encoding strategies, such as Semantic compaction, icons (picture symbols) are combined in a sequence to produce words or phrases.[67]
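A minimal sketch of letter encoding (abbreviation-expansion) is shown below, echoing the "HH" example above. The code table and function are hypothetical; real systems let users define their own codes and vocabularies.

```python
# Hypothetical abbreviation-expansion table; real systems are user-defined.
ABBREVIATIONS = {
    "HH": "Hello, how are you?",
    "TY": "Thank you very much.",
    "GM": "Good morning, everyone.",
}

def expand(code):
    """Return the stored message for a code, or the code itself if unknown."""
    return ABBREVIATIONS.get(code.upper(), code)

print(expand("HH"))   # -> Hello, how are you?
```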

Prediction is a rate enhancement strategy in which the device attempts to predict the letter, word or phrase being written by the user. The user can then select the correct prediction without needing to write the full word. Word prediction software may determine the words predicted based on their frequency in language, association with other words, past choices of the user, or grammatical suitability.[66][67]
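As a simple illustration of frequency-based word prediction, one of the strategies mentioned above, the sketch below offers the most frequent words matching the letters typed so far. The tiny frequency list is invented for the example; real prediction engines also weight word associations, the user's past choices and grammatical context.

```python
# Illustrative word-frequency table (assumed values).
WORD_FREQUENCIES = {"the": 500, "that": 300, "this": 280, "there": 150, "thank": 120}

def predict(prefix, n=3):
    """Return up to n candidate completions, most frequent first."""
    matches = [w for w in WORD_FREQUENCIES if w.startswith(prefix.lower())]
    return sorted(matches, key=WORD_FREQUENCIES.get, reverse=True)[:n]

print(predict("th"))  # -> ['the', 'that', 'this']
```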

Assessment and system implementation

I stiffened my body and put my left foot out again, for the third time. I drew one side of the letter. I drew half the other side... I set my teeth so hard that I nearly pierced my lower lip. But – I drew it – the letter 'A'... Shaky, with awkward, wobbly sides and a very uneven centre line... I had done it! I had started – the thing that was to give my mind its chance of expressing itself. True, I couldn't speak with my lips, but now I would speak through something more lasting than spoken words – written words. That one letter, scrawled on the floor with a broken bit of yellow chalk gripped between my toes, was my road to a new world, my key to mental freedom.

The poet and author Christy Brown describes his communication breakthrough at the age of five in the book My Left Foot.[68]

An evaluation of an individual's abilities, limitations and communication needs is necessary to select appropriate AAC techniques. The purpose of the assessment is to identify potential AAC approaches that can bridge discrepancies between a potential user's current communication and their present and future communication needs.[69] AAC evaluations are often conducted by specialized teams which may include a speech-language pathologist, occupational therapist, rehabilitation engineer, physiotherapist, social worker and a physician.[70][71] Users, family members and teachers are also key members of the decision making team.[71][72] Sensitivity to and respect of cultural diversity contributes to ongoing family involvement and to the selection of the most appropriate AAC system.[73][74][75][76][77] For members of some cultural groups the presence of an AAC device increases the visibility of disability and is thus viewed as stigmatizing.[74][78][79]

A user's motor abilities, communication skills and needs, cognition and vision are assessed in order to determine the most appropriate match to a communication system.[71] Depending on the individual's physical status, recommendations of an alternative access method, a change in seating/positioning, a mounting system and/or communication aid adaptations may be needed. For example, someone with spastic arm movements may require a key guard on top of the keyboard or touchscreen to reduce the selection of non-target items. The person's needs and abilities determine the symbols chosen and their organization, with the goal being that the communication system can be used as efficiently as possible in different contexts, with different communication partners, and for different social purposes.[8][70] Researcher Janice Light identified four social purposes of communicative interaction in AAC: the expression of needs and wants to a listener, the transfer of information as in more general conversation, the development of social closeness through such things as jokes and cheering, and finally social etiquette practices such as "please" and "thank you". These four purposes vary in terms of the relative importance of the content, rate, duration and the focus of the interaction. It is important that the AAC systems selected also reflect the priorities of the individual and their family.[80][81] In Western cultures, professionals may see a communication device as helping to promote an individual's self-determination, i.e., the ability to make one's own decisions and choices.[82] However, cultural and religious factors may affect the degree to which individual autonomy is a valued construct,[83][84] and influence family attitudes towards AAC.[84]

Training can help the user make use of their AAC system to communicate effectively with others, to control their environment through communication, and to make choices, decisions and mistakes.[85] Skilled users of AAC show communicative competence in four interrelated areas: linguistic, operational, social and strategic.[86] Linguistic competence refers to language skills in the person's native language as well as in the linguistic code of the symbol system selected. Operational competence involves the skills needed to use and maintain the tool of communication, while social competence and strategic competence reflect knowledge and judgment in communicative interactions, including the compensations required for a slow speaking rate, for communication breakdowns, and for partners unfamiliar with AAC. An AAC user may require specific device programming and/or training to achieve competency in these areas.[87][88]

Communication partners may also require training to notice and consistently interpret the communication signals of a severely disabled individual, particularly because there is a danger that learned helplessness can result from repeated failure to communicate successfully.[89] Partners may need assistance to avoid the directive communication style that can lead a child who uses AAC not to develop a full range of communication skills, such as initiating or taking the lead in conversation, using complex syntax, asking questions, making commands or adding new information.[90][91] Young AAC users benefit from rich language and literacy experiences to foster vocabulary development, discourse skills, and phonological awareness, all of which support successful literacy learning.[92] Communication partners are encouraged to provide augmented input with the child, such as signing or pointing to symbols and codes as they communicate, including using the individual's communication system themselves.[87] AAC users also benefit from focused and explicit reading instruction.[93][94]

Outcomes

Speech

Several reviews have found that the use of AAC does not impede the development of speech in individuals with autism or developmental disabilities, and in fact may result in modest gains being observed.[95][96] A 2006 research review of 23 AAC intervention studies found gains in speech production in 89% of the cases studied, with the remainder showing no change.[96] A descriptive review looking specifically at Picture Exchange Communication System (PECS) intervention studies found that several studies reported an increase in speech, often during later phases, while one noted little or no effect.[97]

Researchers hypothesize that using an AAC device relieves the pressure of having to speak, allowing the individual to focus on communication, and that the reduction in psychological stress makes speech production easier.[98] Others speculate that in the case of speech generating devices, the model of spoken output leads to an increase in speech production.[99]

Language and literacy

Language and literacy have far-reaching effects as they facilitate self-expression and social interaction in a variety of settings.[100] Furthermore, literacy fosters independence by providing access to educational and vocational opportunities.[101] Children whose disabilities require AAC often experience developmental delays in language skills such as vocabulary knowledge, sentence length, syntax, and pragmatics.[92] These delays may be due in part to the fact that their expressive language is limited by more than their own language knowledge: unlike speaking children, children who use AAC do not always have access to their AAC system, and they do not select the content available on the device. These external factors may limit language learning opportunities.[102] Most children in this category do not achieve literacy skills beyond those of a typically developing 7–8 year old.[103] Cognitive, language and learning delays contribute to difficulty with literacy development,[92] but environmental factors also play a role. The most literate AAC users often report having had access to abundant reading and writing material at home as well as in school during childhood.[104] Studies have shown that many children who use AAC have literacy experiences that are of reduced quality, quantity, and opportunity, both at home and at school, compared to children without disabilities.[101][105][106] Research suggests that with explicit reading instruction, AAC users can develop good literacy skills.[93][94]

Employment

According to a 1997 U.S. Census Bureau report, fewer than 10% of severely disabled individuals were employed. Despite the various barriers to employment, some AAC users achieve success in educational endeavours and employment, though often in lower-paying jobs.[107][108] Factors found to be related to employment include a strong work ethic, access to AAC technology, the support of family and friends, education, and work skills.[109] Individuals with ALS who use AAC may continue working; factors supporting continued employment include access to AAC and support from employers, governmental programs and others.[110] Employers of AAC users report that skills in time management, problem solving, communication and technology, along with a good education, are important.[111]

Quality of life

Several studies of young adults who had used AAC since childhood report a generally good quality of life, though few lived independently, or were in paid employment.[107][112] The young adults used multiple modes of communication including aided and unaided AAC approaches.[107][113] More positive quality of life outcomes often correlated with better quality of communication and interaction, as well as personal characteristics, family and community support, and excellent AAC services.[107][112][114] Poorer outcomes were related to lack of access to appropriate AAC supports and resources, problems with technology and negative attitudes.[107][114]

Specific groups of AAC users

Cerebral palsy

Cerebral palsy is a term referring to a non-progressive developmental neuromotor disorder with an upper motor neuron lesion origin.[115] Depending on the location of the brain lesion, individuals with cerebral palsy can have a wide variety of gross and fine motor challenges, with different forms of the disorder and different areas of the body affected. Fine motor planning, control and coordination are often affected.[116] Dysarthria, a speech disorder resulting from neurological damage to the motor-speech system, occurs in an estimated 31% to 88% of those with cerebral palsy. Such individuals may require AAC support for communication. Approximately one third to one half have some degree of intellectual impairment, and visual and hearing problems are also common.[117][118] Gross and fine motor challenges are often of particular concern in accessing an AAC device.[116] Appropriate seating and positioning are important to facilitate optimum stability and movement.[119] Extensive motor training and practice may be required to develop efficient AAC access and use.[120]

Intellectual impairment

Phrases can be recorded onto simple switch-operated AAC devices by a user or carer so that the recording is played when the switch is pressed.

Individuals with intellectual impairments face challenges in developing communication skills, including problems with generalization (the transfer of learned skills into daily activities). They may lack communication opportunities in their daily lives, and responsive communicators who understand their communication methods.[121] AAC intervention for this population emphasizes partner training as well as opportunities for integrated, natural communication.[121] Studies have shown that appropriate use of AAC techniques with children and adults with intellectual impairments can enhance communication skills,[122][123] increase participation in activities,[124] choice-making,[125] and even influence the perceptions and stereotypes of communication partners.[124]

While most individuals with intellectual disabilities do not have concomitant behavioural issues, problems in this area are typically more prevalent in this population than others.[126] AAC approaches may be used as part of teaching functional communication skills to non-speaking individuals as an alternative to "acting out" for the purpose of exerting independence, taking control, or informing preferences.[121]

Autism

Autism is a disorder of neural development characterized by impaired social interaction and communication, and by restricted and repetitive behaviour. Typically there is particular difficulty acquiring expressive communication skills.[95][127] Children with autism have been found to have strong visual processing skills, making them good candidates for an AAC approach.[128] AAC intervention in this population is directed towards the linguistic and social abilities of the child,[129] including providing the child with a concrete means of communication, as well as facilitating the development of interactional skills.[95][130]


AAC systems for this population generally begin with communication boards and/or object or picture exchanges such as the Picture Exchange Communication System (PECS).[130] A 2009 descriptive review provided preliminary evidence that PECS is easily learned by most individuals with autism, provides communication to those with little or no functional speech, and has some limited positive impact on social interaction and challenging behaviours.[97] A study that compared the use of a speech generating device to a picture exchange system found that both were reasonable options for children with autism, as the ease and speed of acquisition of each system was similar.[134]

Developmental verbal dyspraxia

Developmental verbal dyspraxia, also known as Childhood apraxia of speech, is a developmental motor speech disorder involving impairments in the motor control of speech production.[135] The speech of a child with developmental verbal dyspraxia may be unintelligible to the point that daily communication needs cannot be met. A child with developmental verbal dyspraxia often experiences great amounts of frustration, so AAC can be a strategy to support communication alongside more traditional speech therapy to improve speech production.[136]

A wide variety of AAC systems have been used with children with developmental verbal dyspraxia.[137] Manual signs or gestures are frequently introduced to these children, and can include the use of fingerspelling alongside speech. Manual signs have been shown to decrease errors in articulation.[136] Aided AAC systems typically include communication boards and speech generating devices.[137] A multimodal approach is often used, with several AAC approaches introduced so that the child can take advantage of the most effective method for a particular situation.[136]

Traumatic brain injury

Traumatic brain injury can result in severe motor speech disorders; dysarthria is the most common such disorder, accounting for roughly a third of all cases.[138] Depending on the stage of recovery, AAC intervention may involve identifying consistent communication signals, the facilitation of reliable yes/no responses to questions, and the ability to express basic needs and answer questions. Individuals who do not recover natural speech to a degree sufficient to meet their communication needs typically suffer from severe impairments related to cognition.[139] Difficulties with memory and learning new skills may influence AAC choices; well-established competencies such as spelling may be more effective than AAC systems that require navigation through multiple pages to access information.[139]

Aphasia

Aphasia is the result of damage to the brain's language centres affecting production, comprehension, or both, and can cause severe, chronic language impairment.[140] Individuals with aphasia often communicate using a combination of speech, gestures, and aided communication; the proportion of each may change as the person recovers, and depends on the context and the individual's skills.[141]

Depending on their language and cognitive skills, those with aphasia may use AAC interventions such as communication and memory books, drawing, photography, written words, speech generating devices and keyboards.[140] Visual scene displays have been used on communication devices with adults who have chronic, severe aphasia; these feature photos of people, places or events that are meaningful to the individual and facilitative of communicative interaction.[140] Approaches such as "Supported Conversation for Adults with Aphasia" train the communication partners to use resources such as writing key words, providing written choices, drawing, and using items such as photographs and maps to help the individual with aphasia produce and comprehend conversation.[142][143] Low-tech communication boards can also be helpful, particularly for individuals with severe aphasia, who can point to pictures or words on the board to convey their needs.

Locked-in syndrome

Strokes that occur in the brainstem may cause profound deficits, including locked-in syndrome,[144] in which cognitive, emotional and linguistic abilities remain intact but all or almost all voluntary motor abilities are lost.[145] Most people affected by this type of stroke rely on AAC strategies to communicate, since few recover intelligible speech or functional voice.[146] The AAC strategies used vary with the individual's preferences and motor capabilities which may change over time. As eye movements are most likely to be preserved, eye blinks are frequently used for communication.[145] Low-tech alphabet boards are often introduced immediately to provide the individual with basic communication. Partner-assisted scanning may be used, in which the AAC user signals when the desired letter is named by a communication partner. When vertical and horizontal eye movements are functional, a transparent alphabet board may be used in which the AAC user looks at the desired letter and this is acknowledged by the communication partner.[145] Individuals with locked-in syndrome have difficulty using high-tech devices due to issues with motor control, vision, memory, alertness and linguistic ability.[145] In particular, a voluntary, reliable and easily controlled muscle movement is necessary to access such a device, such as head, jaw, hand or finger movements.[145] In some individuals, intensive practice, even long after the initial stroke, has been shown to increase the accuracy and consistency of head movements,[146] which can be used to access a communication device.[145]

Amyotrophic lateral sclerosis

Stephen Hawking, a noted physicist with ALS who uses AAC

Amyotrophic lateral sclerosis (ALS) or Motor Neurone Disease (MND) is a progressive condition which leads to weakness and eventual paralysis. Approximately 75% of people with ALS are unable to speak by the time of their death.[147] In a procedure known as voice banking, people with ALS may digitally record words and phrases while still able to do so, for later inclusion in a communication device.[148] AAC systems used typically change over time depending on severity of speech impairment, physical status, and the individual's communication needs.[149] Use of augmentative communication strategies generally begins when speaking rate drops to 100 words per minute.[150] In the early stages, AAC may consist of using an alphabet board to cue the listener to the first letter of the word being spoken, and may be used with those less familiar with the individual. In the later stages, AAC often becomes the main communicative method, although familiar conversation partners may still understand some spoken words.[151] Since cognition and vision are typically unaffected in ALS, writing-based systems are preferred to graphic symbols, as they allow the unlimited expression of all words in a language.[151]

The method of access to a communication device depends on the type and severity of the disease. In the spinal form of ALS, the limbs are affected from the onset of the disease; in these cases a head mouse or eye tracking access may be used initially.[152] In the bulbar form, speech is affected before the limbs; here handwriting and typing on keyboard-style devices are frequently the first forms of AAC.[153] AAC users may change access methods as the disease progresses. Low-tech systems, such as eye gaze or partner-assisted scanning, are used in situations when electronic devices are unavailable (for example, during bathing) and in the final stages of the disease.[153]

Parkinson's disease

Keyboard text-to-speech generating device for literate users with adequate fine motor control

Parkinson's disease is a progressive neurological condition[154] in which dysarthria may develop later in the progression of the disease.[155] Some individuals eventually lose all functional speech.[156] AAC approaches are generally used to supplement and support natural speech.[156] A portable amplifier, for example, may be used to increase the volume of speech and thus its intelligibility. The individual may be taught to point to the first letter of each word they say on an alphabet board, leading to a reduced speech rate and providing visual cues for the listener to compensate for impaired articulation. Entire words can be spelled out if necessary.[156] For users with a reduced range and speed of movement, a smaller than usual selection display may be preferred.[156] High-tech AAC keyboard speech-generating devices are also used; keyguards may be required to prevent accidental keystrokes caused by the tremor typical of the disease.[156] Factors affecting AAC use in Parkinson's disease include motor deficits and cognitive changes; the latter may leave individuals unaware of their problems with spoken communication.[154][157]

Multiple sclerosis

Dysarthria is the most common communication problem in individuals with multiple sclerosis (MS); however, significant difficulties with speech and intelligibility are uncommon.[158][159] Individuals with MS vary widely in their motor control capacity and in the presence of intention tremor, and methods of access to AAC technology are adapted accordingly. Visual impairments are common in MS and may necessitate approaches using auditory scanning systems, large-print text, or synthetic speech feedback that plays back words and letters as they are typed.[160]

Dementia

Dementia is an acquired, chronic, cognitive impairment characterized by deficits in memory and other cognitive domains.[140][161] Communication impairments are partly attributed to memory deficits,[161] and AAC intervention may be used to compensate for deficits and to capitalize on the person's strengths, such as the ability to recognize material they cannot recall.[162] Low-tech devices are generally preferred, such as memory books that include autobiographical information, daily schedules, photographs, and reminders or labels.[162] Several studies have shown positive outcomes in the amount of on-topic conversation and the length of interaction with these approaches.[163][164][165] The gains were maintained four months after the training in the use of the memory aids had ceased.[165] High-tech devices with voice output have been found to be less effective; in one study devices resulted in limited topic elaboration/initiation, reduced output and heightened distraction.[161] AAC is also used to enhance the comprehension of those with dementia. The use of augmented listening strategies, such as identifying topics of conversation with pictures, improves the conversational skills of individuals with dementia.[162]

History

A page from a 1620 Spanish treatise on finger spelling

The history of AAC can be traced to the days of classical Rome and Greece, with the first recorded use of augmentative strategies with the deaf.[10][166] The use of manual alphabets and signs was recorded in Europe from the 16th century, as was the gestural system of Hand Talk used by Native Americans to facilitate communication between different linguistic groups.[10][166] The first known widely available communication aid was a letter and word-based communication board developed for, and with, F. Hall Roe, who had cerebral palsy. This communication board was distributed in the 1920s by a men's group in Minneapolis.[13]

The Patient Operated Selector Mechanism (POSM or POSSUM) was developed in the United Kingdom in the early 1960s.

The modern era of AAC began in the 1950s in Europe and North America, spurred by several societal changes; these included an increased awareness of individuals with communication and other disabilities, and a growing commitment, often backed by government legislation and funding, to develop their education, independence and rights.[166][167] In the early years, AAC was primarily used with laryngectomy and glossectomy cases, and later with individuals with cerebral palsy and aphasia.[168] It was typically only employed after traditional speech therapy had failed, as many felt hesitant to provide non-speech intervention to those who might be able to learn to speak.[166][167] Individuals with intellectual impairment were not provided with AAC support because it was believed that they did not possess the prerequisite skills for AAC.[124][167] The main systems used were manual signs, communication boards and Morse code, though in the early 1960s, an electric communication device in the form of a sip-and-puff typewriter controller named the Patient Operated Selector Mechanism (POSM or POSSUM) was developed in the United Kingdom.[13][166]

From the 1960s onward, sign language increased in acceptance and use in the Deaf community, and AAC also came to be viewed as acceptable for those with other diagnoses.[167][168] Manual sign languages, such as Makaton, were advocated for those with both hearing and cognitive impairments, and later for those with intellectual impairment or autism and normal hearing.[166][167][169] Research into whether primates could learn to sign or use graphic symbols spurred further interest in the use of AAC for those with cognitive impairments.[166][168][169] The use of Amer-Ind hand signals opened the field to AAC techniques specifically for adult users.[166]

Blissymbols were first used in Canada in 1971 to provide communication to those not able to use traditional orthography; their use quickly spread to other countries.[13][166] With improved technology, keyboard communication devices developed in Denmark, the Netherlands and the US increased in portability; the typed messages were displayed on a screen or strip of paper. By the end of the 1970s, communication devices were being commercially produced, and a few, such as the HandiVoice, had voice output.[13] Countries such as Sweden, Canada and the United Kingdom initiated government-funded services for those with severe communication impairments, including developing centres of clinical and research expertise.[166]

The late 1970s and 1980s saw a massive increase in AAC-related research, publications, and training, as well as the first national and international conferences.[166] The International Society for Augmentative and Alternative Communication (ISAAC) was founded in 1983; its members included clinicians, teachers, rehabilitation engineers, researchers, and AAC users themselves. The organization has since played an important role in developing the field through its peer-reviewed journal, conferences, national chapters and its focus on AAC in developing countries.[13][166] AAC became an area of professional specialization; a 1981 American Speech-Language-Hearing Association position paper, for example, recognized AAC as a field of practice for speech-language pathologists.[170] At the same time, AAC users and family members played an increasingly prominent role in the development of knowledge of AAC through their writing and presentations, by serving on committees, and by founding advocacy organizations.[166]

"Knowing that most of y'all do not know the HandiVoice, I will describe it...It was operated with a numeric keyboard...Each word, or sentence, or phrase, or phoneme was stored and accessed by a three digit code, for example, "hello" was 010...It took three codes to say "Rick", that was nine numbers. Now if you think that's bad, let's go for the simple sentence, "Hello, this is Rick Creech speaking." This would have taken fifteen 3 digit codes, for a total of forty-five numbers. Looking back, I am not surprised that very few professionals thought a person could successfully use the HandiVoice 120. But I did. I did, because being able to communicate with people was so empowering to me."

Rick Creech describes the HandiVoice 120 speech generating device, which he received in 1977.[171]

From the 1980s, improvements in technology led to a greatly increased number, variety, and performance of commercially available communication devices, and a reduction in their size and price. Alternative methods of access such as eye pointing or scanning became available on communication devices.[166][167] Speech output possibilities included digitized and synthesized speech, with text-to-speech options available in German, French, Italian, Spanish, Swedish and Ewe.[166] AAC services became more holistic, seeking to develop a balance of aided and unaided strategies with the goal of improving functioning in the person's daily life, and involving the family more closely.[167] Increasingly, individuals with acquired conditions such as amyotrophic lateral sclerosis, Parkinson's disease, head injury, and locked-in syndrome received AAC services.[166] In addition, with the challenge to the notion of AAC prerequisites, those with severe to profound intellectual impairments began to be served.[167] Courses on AAC were developed for professional training programs, and literature such as textbooks and guides was written to support students, clinicians and parents.[166]

The 1990s brought a focus on greater independence for people with disabilities, and more inclusion in mainstream society.[166] In schools, students with special needs were placed in regular classrooms rather than segregated settings, which led to an increased use of AAC as a means of improving student participation in class.[167] Interventions became more collaborative and naturalistic, taking place in the classroom with the teacher, rather than in a therapy room.[166][167] Facilitated communication – a method by which a facilitator physically and emotionally supports a person with severe communication needs as they type on a keyboard or letter board – received wide attention in the media and in the field. The question of authorship made the approach controversial; most of the subsequent research indicated that the facilitators were unknowingly influencing the messages typed.[166][169] As a result, professional organizations and AAC researchers and clinicians have not typically accepted facilitated communication.[169]

Rapid progress in hardware and software development continued, including projects funded by the European Community. The first commercially available dynamic screen speech generating devices were developed in the 1990s. At the same time synthesized speech was becoming available in more languages. Software programs were developed that allowed the computer-based production of communication boards.[166][167] High-tech devices have continued to reduce in size and weight,[167] while increasing accessibility and capacities. Modern communication devices can also enable users to access the internet and some can be used as environmental control devices for independent access of TV, radio, telephone etc.[172]

Future directions for AAC focus on improving device interfaces, reducing the cognitive and linguistic demands of AAC, and removing barriers to effective social interaction.[50] AAC researchers have challenged manufacturers to develop communication devices that are more appealing aesthetically, with greater options for leisure and play, and that are easier to use.[173][174] The rapid advances in smartphone and tablet computer technologies have the potential to radically change the availability of economical, accessible, flexible communication devices;[175] however, user interfaces are needed that meet the various physical and cognitive challenges of AAC users.[174][175][176] Android and other open source operating systems provide opportunities for small communities, such as AAC, to develop the accessibility features and software required.[176] Other promising areas of development include access to communication devices using signals from movement recognition technologies that interpret body motions, or from electrodes measuring brain activity, and the automatic transcription of dysarthric speech using speech recognition systems.[50] Utterance-based systems, in which frequent utterances are organized in sets to improve the speed of communication exchange, are also in development.[50] Similarly, research has focused on the provision of timely access to vocabulary and conversation appropriate for specific interactions.[50][177] Natural language generation techniques have been investigated,[178][179] including the use of logs of past conversations with conversational partners,[180] data from a user's schedule[178] and from real-time Internet vocabulary searches,[50][181] as well as information about location from global positioning systems and other sensors.[50][178][182] However, despite the frequent focus on technological advances in AAC, practitioners are urged to retain the focus on the communication needs of the AAC users: "The future for AAC will not be driven by advances in technology, but rather by how well we can take advantage of those advancements for the enhancement of communicative opportunities for individuals who have complex communication needs".[174]

Notes

  1. http://www.asha.org/public/speech/disorders/AAC/
  2. Convention on the Rights of Persons with Disabilities: Article 2: Definitions
  3. ASHA.
  4. Fossett & Mirenda, p. 330.
  5. 1 2 3 4 Beukelman & Mirenda, pp. 4–5.
  6. Lindsay et al.
  7. Gross.
  8. 1 2 Beukelman & Mirenda, pp. 6–8.
  9. Ebert.
  10. 1 2 3 Glennen, p. 7.
  11. Lapointe.
  12. Walling.
  13. 1 2 3 4 5 6 Vanderheiden.
  14. Fossett & Mirenda, p. 331.
  15. 1 2 3 Mirenda.
  16. Beukelman & Mirenda, p. 36.
  17. Beukelman & Mirenda, pp. 38–42, 283–284.
  18. 1 2 Beukelman & Mirenda, pp. 42–44.
  19. Lloyd et al.
  20. 1 2 Millikin, pp. 103–107.
  21. Beukelman & Mirenda, pp. 43–51.
  22. Cumley & Swanson.
  23. Daniloff.
  24. Daniloff & Vergara.
  25. 1 2 Beukelman & Mirenda, pp. 107–110.
  26. Gillam et al., pp. 356–357.
  27. Millar & Scott, p. 4.
  28. Scott, pp. 13–15.
  29. 1 2 Glennen, pp. 62–63.
  30. 1 2 Schlosser, Blischak & Koul
  31. Beukelman & Mirenda, pp. 105–106.
  32. 1 2 3 Jans & Clark, pp. 37–38.
  33. Beukelman & Mirenda, p. 84.
  34. Hochstein.
  35. Beukelman & Mirenda, p. 85.
  36. Glennen, pp. 59–60.
  37. Glennen, pp. 69–74.
  38. 1 2 Decoste, p. 382.
  39. Smith & Connolly.
  40. Shepherd et al.
  41. Fossett & Mirenda, p. 334.
  42. 1 2 Hazel, pp. 166–167, 172–173.
  43. Huer (2000).
  44. Glennen, pp. 118–120.
  45. 1 2 3 Beukelman & Mirenda, pp. 51–53.
  46. 1 2 Hazel, pp. 169.
  47. Fleming et al.
  48. Silverman, p. 98.
  49. Williams, pp. 199–200.
  50. 1 2 3 4 5 6 7 8 9 10 Higginbotham et al.
  51. Beukelman & Mirenda, pp. 96–97.
  52. Beukelman & Mirenda, pp. 102–103.
  53. 1 2 3 Beukelman & Mirenda, pp. 97–101.
  54. Glennen, p. 73.
  55. Blackstone (1993).
  56. 1 2 Beukelman & Mirenda, p. 336.
  57. Brandenberg & Vanderheiden, pp. 84–135.
  58. Beukelman (1989).
  59. Marvin et al. (1994).
  60. Beukelman & Mirenda, pp. 30–32.
  61. Beukelman & Mirenda, pp. 336–338.
  62. 1 2 Drager, Light & Speltz.
  63. Beukelman & Mirenda, pp. 338–339.
  64. Blackstone (2004).
  65. Drager et al. (2004).
  66. 1 2 3 4 University of Washington (2009)
  67. 1 2 3 Venkatagiri.
  68. Brown.
  69. Fossett & Mirenda, pp. 331–332.
  70. 1 2 Beukelman & Mirenda, pp. 118–125.
  71. 1 2 3 Mineo.
  72. Parette et al. (2000).
  73. Angelo, pp. 523–524.
  74. 1 2 Beukelman & Mirenda, pp. 152–154.
  75. Parette & Angelo.
  76. Crais.
  77. Kalyanpur & Harry, pp. 246–249.
  78. Parette & Brotherson.
  79. Dinnebeil.
  80. Beukelman & Mirenda, pp. 8–10.
  81. Light, 1988.
  82. Turnbull.
  83. Wehmeyer & Palmer.
  84. 1 2 Beukelman & Mirenda, pp. 151–52.
  85. Beukelman & Mirenda, pp. 227–231.
  86. Light & McNaughton, 2014
  87. 1 2 Beukelman & Mirenda, pp. 10–14.
  88. Light, 1989.
  89. Beukelman & Mirenda, p. 187.
  90. Pennington & McConachie.
  91. Clarke & Wilkinson.
  92. 1 2 3 Sturm & Clendon.
  93. 1 2 Fallon et al. (2004).
  94. 1 2 Blischak.
  95. 1 2 3 Schlosser & Wendt.
  96. 1 2 Millar et al..
  97. 1 2 Preston (2009).
  98. Schlosser & Lloyd.
  99. Cress & Marvin.
  100. Koppenhaver et al. (1991b).
  101. 1 2 Light & McNaughton.
  102. Sutton, Soto & Blockberger 2002
  103. Sturm et al. (2006).
  104. Koppenhaver et al. (1991a).
  105. Erickson & Koppenhaver.
  106. Kopperhaver & Yoder.
  107. 1 2 3 4 5 Hamm & Mirenda.
  108. Mank et al.
  109. McNaughton et al.
  110. McNaughton & Light.
  111. Bryen et al.
  112. 1 2 Lund & Light (2006).
  113. Lund & Light (2007a).
  114. 1 2 Lund & Light (2007b).
  115. Beukelman & Mirenda, p. 235.
  116. 1 2 Treviranus & Roberts
  117. Beukelman & Mirenda, pp. 236–237.
  118. Utley
  119. Beukelman & Mirenda, pp. 237–238.
  120. Beukelman & Mirenda, pp. 238–239.
  121. 1 2 3 Beukelman & Mirenda, pp. 241–243.
  122. Wilkinson & McIlvane, pp. 273–322.
  123. Cheslock et al.
  124. 1 2 3 Wilkinson & Hennig.
  125. Cosbey & Johnston.
  126. Batshaw, pp. 287–305.
  127. Chiang & Lin.
  128. Cafiero, p. 8.
  129. Beukelman & Mirenda, pp. 246–248.
  130. 1 2 Mirenda (2001).
  131. http://www.cdc.gov/ncbddd/autism/data.html
  132. http://cerebralpalsy.org/about-cerebral-palsy/prevalence-and-incidence/
  133. http://www.parkinson.org/understanding-parkinsons
  134. Son et al.
  135. Bornman et al.
  136. 1 2 3 Beukelman & Mirenda, pp. 250–254.
  137. 1 2 Weitz et al.
  138. Theodoros et al.
  139. 1 2 Fager et al. (2006b).
  140. 1 2 3 4 Beukelman et al. (2007).
  141. Fox & Fried-Oken.
  142. Garrett & Lasker, pp. 475–481.
  143. Kagan et al.
  144. Teasel et al.
  145. 1 2 3 4 5 6 Soderholm et al.
  146. 1 2 Fager et al. (2006a).
  147. Doyle & Phillips.
  148. Stern.
  149. Ball, p. 441.
  150. Duffy, pp. 245–246.
  151. 1 2 Ball, pp. 445–447.
  152. Mathy, p 198.
  153. 1 2 3 Ball, pp. 444–445.
  154. 1 2 Armstrong.
  155. Duffy, p. 167.
  156. 1 2 3 4 5 Ball, pp. 455–460.
  157. Tjaden.
  158. Ball, pp. 448–449.
  159. Beukelman (1985).
  160. Ball, pp. 449–454.
  161. 1 2 3 Fried-Oken (2009).
  162. 1 2 3 Garrett & Lasker, pp. 508–515.
  163. Andrews-Salvia.
  164. Bourgeois (1993).
  165. 1 2 Bourgeois (2001).
  166. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 Zangari.
  167. 1 2 3 4 5 6 7 8 9 10 11 12 Hourcade.
  168. 1 2 3 Glennen, p. 8.
  169. 1 2 3 4 Mirenda pp. 5–9.
  170. Glennen, p. 10.
  171. Creech.
  172. Robitaille, pp. 151–153.
  173. Light & Drager
  174. 1 2 3 Cook
  175. 1 2 McNaughton & Light, 2013
  176. 1 2 Higginbotham & Jacobs
  177. Reddington & Tintarev
  178. 1 2 3 Newell, p. 67.
  179. Black et al.
  180. Ashraf et al.
  181. Luo et al.
  182. Patel & Radhakrishnan

References

External links

This article is issued from Wikipedia – version of Wednesday, February 10, 2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.