Keywords
Aphasia - therapy - treatment - technology - apps - applications
Learning Outcomes: As a result of this activity, the reader will be able to (1) explain the steps that
should be taken when selecting technology/apps to be used in treatment; (2) discuss
motor, sensory, and cognitive skills that should be assessed when considering the
use of technology; and (3) discuss examples of how apps not specifically designed
for aphasia therapy were successfully integrated into the treatment plans for the
three cases presented in the article.
It is hard to believe that it was only 7 years ago (2007) that smart tablet and smartphone
technology became available to the masses. Today, more than a billion people are using
these devices,[1] and half of all computing devices sold are mobile.[2] This has resulted in a cultural shift from computers that run a small set of large
software packages (Word and Excel [Microsoft Corp., Redmond, WA], etc.) to computers
that move with us and hold tens if not hundreds of individually selected applications.
Consequently, what is on one person's smartphone is quite different from what is on
the phone of the person sitting next to them. The popularity of smart devices is mainly
due to the proliferation of apps that provide useful tools and appealing forms of
entertainment. There are apps for just about every purpose imaginable. The Apple iTunes
App Store alone (which sells apps only for iOS devices such as iPads, iPhones, and
iPod touches [all Apple Inc., Cupertino, CA]) reached the 50 billion downloads milestone
in 2013.[3] With the addition of downloads for Android (Google Inc., Mountain View, CA) devices,
the actual number of downloaded apps more than doubles.[4] [5]
There is a considerable history in the aphasia literature of software applications
that run on personal computers and of subscription-based Web site services, but these
applications have not been recreated as apps on smart devices.[6] [7] [8] Many of the available apps contain tasks that could be used for treatment but are
too childish to place in front of adult clients.[9] A small number of adult-focused apps for aphasia rehabilitation do exist, but these
generally offer a limited variety of treatment activities—for example, Tactus Therapy
Solutions (Tactus, Vancouver, BC) (tactustherapy.com), Lingraphica, (Lingraphica,
Princeton, NJ) (www.aphasia.com), and Virtual Speech Center (Virtual Speech Center, Burbank, CA) (www.virtualspeechcenter.com). These apps are fairly easy to find using any Web search engine (with the search
terms “aphasia” and “treatment”) or blogs and Web sites created by speech-language
pathologists.
Not to be ignored is the huge number of apps that, although not specifically designed
for aphasia rehabilitation, offer unique options for the development of treatments.
These apps are generally inexpensive (if not free) and publicly available. They
often have greater flexibility for multipurpose use in therapy, although
successful integration into a treatment program may require a bit more creativity
on the part of the speech-language pathologist. In the three cases that follow, we
will illustrate how we have used this kind of app in the development of treatments.
Our approach is rooted in the desire to find ways to integrate technology into therapy
so that the treatment plan drives the decisions about which app is used rather than
the app driving treatment. Our approach treats technology as a tool to enhance treatment,
especially in situations where independent therapeutic practice is a goal. We choose
apps only after (1) careful assessment of the client and the app, (2) selection of
a functional treatment focus and desired outcomes, and (3) analysis of the evidence
to support specific treatment approaches and tasks ([Fig. 1]). For an app to be incorporated into a treatment plan, it must be able to meet the
therapy task requirements and be usable by the patient in terms of both his or her
nonlinguistic capabilities and hardware (and sometimes Internet) availability.
Figure 1 Recommended process for integrating technology/apps into aphasia treatment.
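For readers inclined to think about this workflow procedurally, the ordering of the filters in [Fig. 1] can be sketched in code. The following Python sketch is purely illustrative: the field names and checks (e.g., requires_swipe, easily_distracted) are hypothetical stand-ins for clinical judgment, not part of any published assessment instrument.

# Illustrative sketch of the filtering sequence in Fig. 1.
# All field names and checks below are hypothetical stand-ins
# for clinical judgment; this is not an assessment instrument.

from dataclasses import dataclass
from typing import List, Set

@dataclass
class App:
    name: str
    supported_tasks: Set[str]   # therapy tasks the app can present
    requires_swipe: bool        # motor demand
    has_popup_ads: bool         # distraction risk
    needs_internet: bool

@dataclass
class Client:
    can_swipe: bool             # from the motor/sensory assessment
    easily_distracted: bool     # from the cognitive assessment
    has_internet: bool          # hardware/Internet availability

def select_apps(candidates: List[App], client: Client,
                treatment_task: str) -> List[App]:
    """Pass candidate apps through the filters of Fig. 1, in order:
    (1) fit with the evidence-based treatment task,
    (2) fit with the client's nonlinguistic capabilities,
    (3) hardware and Internet availability."""
    selected = []
    for app in candidates:
        if treatment_task not in app.supported_tasks:
            continue   # filter 1: app cannot present the chosen task
        if app.requires_swipe and not client.can_swipe:
            continue   # filter 2a: motor demands exceed capability
        if app.has_popup_ads and client.easily_distracted:
            continue   # filter 2b: visual distraction risk
        if app.needs_internet and not client.has_internet:
            continue   # filter 3: connectivity requirement unmet
        selected.append(app)
    return selected

The essential point captured by the sketch is the ordering: the treatment decision filters candidate apps first, and capability and hardware constraints are applied to whatever survives, never the reverse.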
This approach requires clinicians to stay current on evidence-based treatment approaches
and be well informed about the constantly changing technology options available
to them. Knowledge of technology and apps builds across time; we never truly approach
a given client with a tabula rasa of ideas about which apps to use. Still, our point
is that the decision about treatment approach should serve as a filter through which
apps are passed for each client, each time. The patient's cognitive and motor capabilities
should work as yet another filter for selecting apps. Some clinicians will stay abreast
of developments in both arenas; however, we use a “technology consultant” model. Our
technology consultant monitors and tests apps with the needs of the user in mind and
works with the speech-language pathologist on finding good matches between treatment
approach and app functionality. Technology consultants may come from a wide variety
of nonclinical or clinical disciplines including assistive technology, rehabilitation
engineering, computer science, psychology, speech-language pathology, or occupational
therapy. Regardless of their discipline, they need to have an understanding of the
speech-language, motor, sensory, and cognitive functions most likely to be encountered
when consulting on cases with specific diagnoses. This model is already widely utilized
in the field of augmentative and alternative communication, and we see a growing need
for a similar model in other areas of speech-language pathology, including aphasia
rehabilitation.
Determining the usability of technology is comparable to the capability assessment
in the augmentative and alternative communication world.[10] This involves determining the patient's sensory, motor, and cognitive capabilities
in light of the demands of both technology devices and apps. For example, many free
apps have pop-up ads that are problematic for a patient who is easily distracted by
extraneous visual stimuli. Other apps require the user to swipe the screen, which
may be difficult for a patient with motor impairments that affect use of the arms
and hands. The decision to use a tablet or a smartphone may depend upon visual acuity
and/or the fine motor skills needed to use the keyboard. A full list of considerations
is far too extensive to summarize here. We suggest that you choose a few simple apps
and observe clients using them. One great app for quick evaluation of device use is
Bitsboard (free, with a pro version for $2.99; grasshopperapps.com). The paid version includes
numerous games each requiring different skills. Observe the client touching and swiping
the screen and observe whether he or she is able to adapt when given additional guidance.
For example, we worked with one client who had a continual problem with pressing the
screen as if it were pressure sensitive. This continued despite multiple explanations
and demonstrations. We were able to provide him with apps that required only tapping,
but he could not master the distinction between touching and swiping.
The following cases are offered as examples of how we apply these concepts in aphasia
rehabilitation.
Case 1: Anne
Anne was 33 years old and 9 years postonset of aphasia subsequent to a left cerebrovascular
accident (CVA) when we worked with her in our laboratory. She presented with a nonfluent
aphasia of moderate severity with a Western Aphasia Battery (WAB) aphasia quotient
of 75.2. Auditory comprehension was stronger than spoken output, and written language
was poorer than spoken, although she was readily able to identify many of her written
language errors. In addition to her aphasia, Anne was hemiparetic on the right side
with greater involvement of her arm than leg. She ambulated with a foot brace and
used her hemiparetic arm only occasionally for tasks such as holding a paper on the
table as she wrote with her left hand. She was able to use her nondominant left hand
for most functional activities (e.g., using utensils for eating, writing, cutting,
etc.), although these movements were slower and less precise than they would have
been with her dominant hand prior to her stroke. She had no dysarthria or limb
apraxia, but she did exhibit mild apraxia of speech. She passed hearing and vision
screenings and reported no premorbid speech, language, hearing, or learning disabilities.
We interviewed Anne and her family and learned that she lived with her parents and
two preteen children. Her family encouraged independence appropriately and she reported
no issues related to mood. She enjoyed family activities, shopping, and cooking and
was very motivated to explore possible use of technology to enhance her communication
skills, especially distance communication with her children and friends. Anne expressed
a specific interest in being able to use e-mail, social media, and texting.
At the conclusion of the evaluation and interview, the following had been established:
-
Strengths: good auditory comprehension, good error awareness, clear articulation,
better spoken output than written output, good procedural memory, enthusiasm, and
strong support system
-
Weaknesses: poor written language skills, apraxia of speech
-
Treatment focus/desired outcome: the ability to use social media for distant communication
with friends and family
We hypothesized that we might be able to leverage Anne's spoken language strengths
and error awareness skills to improve her written output, thus enabling her to write
short text and e-mail messages. The aphasia treatment evidence tables on the Academy
of Neurogenic Communication Disorders and Sciences (ANCDS) Web site are always the
first place we search for research to support selection of a treatment approach.[12] Unfortunately, we found no studies to support this idea, but a PubMed search using
the search terms “voice recognition software” and “aphasia” yielded one study that
provided level III (weak) evidence supporting our treatment idea.[13] [14] The case study involved a patient with fluent aphasia, but the approach of using
voice-to-text software resulted in remarkable improvements in written language for
this patient, and the authors suggested that other people with aphasia and writing
difficulties might also benefit from this approach.
From a technology perspective, Anne had regular access to Wi-Fi, an iPad, and a desktop
computer with a mouse. She previously had used the desktop computer for clinician-directed
teletherapy and self-administered aphasia therapy software but was not using any technology
at the beginning of our work together. Because she had no direct experience with an
iPad, we asked her to complete some basic operations to determine if there were any
limitations more specific to a touch screen interface. For example, with limb apraxia,
some clients have a tendency to allow an unintended finger to touch the screen as
they make a selection. This can often be avoided through the use of a stylus. Limited
hearing can also be a factor because the speakers are not particularly loud, can be
occluded by cases, and lack range and clarity when set to high levels. Cognitive issues
may also interfere with the learning of steps to access and use an application. In
Anne's case, she learned how to touch the screen, turn on and reawaken the iPad, and
work through the steps to use a variety of apps after only a few minutes of instruction.
To summarize, our technology evaluation indicated that Anne had the motor, sensory,
and cognitive capabilities to operate a smartphone, tablet, and/or a desktop computer
as long as the device could be set down during use and operated with a single (nondominant)
hand. Her speech was clearly articulated without distortion, increasing the likelihood
that she would be able to use a speech recognition system.
We then turned to our technology consultant to identify iPad technology and apps that
would allow Anne to access social media using speech recognition. Ultimately, we decided
to explore the possibility of Anne using speech-to-text software (Dragon Dictation;
Nuance Communication, Inc., Burlington, MA) to create a written draft that she could
subsequently edit using the same procedure. We were encouraged by the fact that Bruce
and colleagues had also used Dragon Dictation successfully with their client.[11] Using the speech-to-text plus editing approach, Anne spoke whatever part of the message
she was able to orally produce. This typically consisted of several words that were
part of what would ultimately become a complete sentence. She then studied the text
she had produced, selected words that were incorrect, and made a second attempt at
producing just those words. This process was repeated until she was satisfied with
what she had generated in text. She then could expand the utterance or begin a new
one. It took only two sessions to achieve a basic level of competence in using this
approach, thanks to Anne's strong procedural memory skills.
We then assisted in the creation of Anne's Gmail account and a list of contacts for
e-mail and text messaging. We also showed Anne how to take photos using the iPad and
add these to e-mail and text messages. [Fig. 2] shows a sample of Anne's spoken language skills and one of the e-mail messages she
sent to us using the speech-to-text plus editing approach.
Figure 2 Spoken language sample and e-mail message Anne wrote using the speech-to-text plus
editing approach.
Admittedly, this e-mail message took Anne several hours to generate. However, the
sense of satisfaction she got from producing e-mail messages of this quality far outweighed
her perceived effort. Having observed the success of this approach, we now want to
shift our focus to enhancing the quality of messages Anne produces so that she can
write several sentences that more fully develop an idea.
Case 2: Geraldo
Geraldo graduated from high school only months before sustaining a traumatic brain
injury as a result of falling from a slowly moving car. He received acute medical
care that included a craniotomy to evacuate a left hemisphere hematoma and a tracheostomy
due to breathing difficulty. He then was transferred for inpatient rehabilitation
and was approximately 1 year postonset when he was seen in our laboratory. At that
time, he presented with severe Broca-like aphasia with a WAB aphasia quotient of 16.2.
He had recently completed Melodic Intonation Therapy and was subsequently able to
produce a small set of commonly used two- to three-word phrases.[15] Geraldo's auditory comprehension was a relative strength, and his reading and writing
were more impaired than speaking and listening. He lived at home with his parents
and siblings, who provided strong supports. Prior to his accident, Geraldo's primary
leisure time activities were playing sports and weight lifting and he had recently
been able to return to the gym to work out.
At the conclusion of the evaluation and interview, the following had been established:
-
Strengths: relatively good auditory comprehension, enthusiasm for technology, and
strong family support system
-
Weaknesses: concomitant nonlinguistic cognitive impairments, poor written language
skills, apraxia of speech
-
Treatment focus/desired outcome: increased quantity and quality of spoken language
The ANCDS treatment evidence tables include several studies that report on the effects
of script training for patients with nonfluent aphasia.[12] [16] [17] [18] A key step in this treatment approach is practicing listening to and then repeating spoken
utterances. Although script training involves practicing utterances within the functional
context of a conversation, it is generally applied to patients with more spoken language
than Geraldo. We therefore decided to try having him listen to and repeat meaningful,
personally relevant spoken utterances as an intermediary step between Melodic Intonation
Therapy and full-blown script training.
From a technology perspective, Geraldo had access to an iPad and he regularly played
games using a Microsoft Xbox. Geraldo had no difficulty holding the iPad, opening
the cover, and launching applications. His strong vision and hearing allowed for the
selection of applications that rely on sound and/or visual input, without necessitating
the use of a headset or a larger-screen device. Based on this, we judged him as having
sufficient sensory and motor skills to use many iPad apps.
With respect to cognitive status, Geraldo's ability to acquire task set and maintain
attention was reduced. For this reason, we felt it was important to find an application
that led him to the task in a sequential fashion and did not require hopping around
across different screens and settings. His limited reading skills meant that any
application must either avoid large amounts of text or provide auditory
instruction.
We chose to use the Little Story Maker app (free, grasshopperapps.com) to depict
common phrases that Geraldo would
use in everyday conversations. Although intended for use with children, this app is
not overly childlike in appearance. We incorporated pictures and recordings of Geraldo's
own voice. We asked him to play each sentence and then repeat it back multiple times
before continuing to the next image. The app does not have ads and the design is uncluttered.
Geraldo was successful at using the app during therapy and participated in deciding
which phrases to include, how to word them, and which pictures to use to represent
the concepts. He did not use the application as part of home practice. We suspect
this was because of his diminished self-initiation skills and because the activity
was not sufficiently engaging. In summary, the application supported audio cuing of
personally chosen and functionally relevant target phrases within therapy sessions
with the speech-language pathologist. In the future, we hope to identify an app that
supports combined audio and video modeling (rather than audio alone), because there is
some evidence to suggest this may be more effective for patients with Broca's aphasia.[19]
Case 3: Bob
When we worked with Bob he was 79 years old and 3 years post–left CVA that resulted
in severe Wernicke aphasia. He had no hemiparesis, dysarthria, or apraxia (limb, buccofacial,
or speech), and he passed both hearing and vision screenings. He was uncooperative
with efforts at formal assessment, but we were able to determine through informal
means that his ability to read was significantly better than his auditory comprehension.
His spoken output was often decipherable despite many verbal paraphasias and paragrammatic
errors. He exhibited an excessive press of speech in which he would talk at the same
time as his conversational partner and dominate the conversation in terms of turn
taking. This, combined with a loud voice, a large physique, and full mobility, resulted
in an overbearing presence that we were confident contributed to his being socially
isolated in his assisted living facility. He clearly let us know that he felt lonely
and he expressed a strong desire for greater social connections. He was not willing
to engage in structured therapy tasks, but was highly motivated to participate in
conversations about current events and sports.
At the conclusion of our informal evaluation, the following had been established:
-
Strengths: reading comprehension; spoken output that was decipherable if the topic was
known to the listener; motivation to talk with people and many things to say; computer
skills learned since his stroke; and the ability to stay current on local, national, and
international events and sports by reading USA Today and watching TV
-
Weaknesses: poor auditory comprehension; severe press of speech; an overwhelming presence
that made it very difficult for others to carry on a conversation with him; extreme
resistance to anything childlike or beneath his perceived level of intelligence
-
Treatment focus/desired outcome: improve Bob's ability to engage in satisfying conversations
to lessen his feelings of isolation
Given the dearth of evidence for treatment of auditory comprehension in Wernicke aphasia
and the fact that Bob was not receptive to the types of tasks that might constitute
a traditional restorative therapy approach, we decided to take a compensatory approach
instead. We had observed in our informal assessment that augmenting spoken language
with the corresponding written words was extremely effective in enhancing his comprehension
and subsequently allowing him to better engage in a conversation. Although we found
no evidence in the literature to back up this approach, we did find support for the
idea when we consulted with several experienced aphasia therapists. The challenges
to successful implementation of this approach were that Bob quickly grew impatient
when we stopped to write, and, under pressure to keep the conversation moving, we
were limited to writing a few key words that were often insufficient to cue his understanding.
From a technology perspective, Bob had acquired a basic understanding of a Windows
laptop computer (with a touch pad, no mouse) since his stroke, but his use was limited
to playing solitaire. Although Bob's vision and hearing were normal, he interpreted
his lack of auditory comprehension as being due to hearing issues and frequently attempted
to turn up the volume. His intact motor skills allowed him to hold a 7-inch mobile
device with a single hand and to open and operate it using the touch screen without
a stylus. Bob's press of speech made it difficult to get him to engage with the device
while someone was present. From this, we decided that his conversational partner would
need to be the person who drove the use of the device.
To support the treatment goal of improved conversations, we chose an Android application
called Notepad Pro (Alibaba.com, China) (U.S. $1.99 in the Google Play Store). This app
was particularly appropriate because it supported larger fonts, put each utterance
on a separate line, had simple-to-use controls, and had an uncluttered screen. There
is a free “lite” version of the app with more limited functionality but, because it
displayed ads, we chose to purchase the full version. During use, the conversation
partner would tap the microphone icon, speak into the device, and then show Bob the
screen. Bob would read the screen and respond orally. Sometimes the partner needed
to firmly insist that Bob read the screen, because he would otherwise respond to what
he heard as the conversation partner spoke into the device. We found that the conversational
exchanges were more complete and balanced because the use of the device forced better
turn taking and allowed the conversation partner to complete a full thought before
relinquishing a turn.
Conclusion
Clinical practice is benefiting from the use of smart technologies and apps. This
article presents best-practice guidelines for integrating apps in aphasia rehabilitation.
It extends evidence-based practice guidelines with recommended steps for determining
which apps are appropriate given the sensory, motor, and cognitive capacities of the
individual client. The app selection process involves a series of filters, beginning
with assessment of the client's speech and language, selection of a treatment focus,
identification of evidence-based approaches, and selection of a treatment approach.
Next, potential apps are assessed in terms of sensory, motor, and cognitive requirements
and the client's ability to use the app. Finally, apps that match the client's speech-language
and nonlinguistic capabilities must be assessed in terms of hardware and
Internet demands and availability. We describe our use of a technology consultant
and our three cases provide detailed examples of how apps that were not specifically
designed for aphasia can be effectively used to deliver evidence-based treatments.
We demonstrate the importance of finding ways to integrate technology into therapy
without the app driving the treatment.