Amyotrophic lateral sclerosis (ALS) is a degenerative neurological condition that
causes progressive muscle weakness. In advanced stages, it leads to tetraplegia and
loss of speech, leaving patients unable to communicate.[1] Brain–computer interfaces
(BCIs) can help individuals with ALS by restoring their ability to communicate: such
devices decode intracortical brain signals into intended speech.
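To make the idea concrete, here is a minimal, hypothetical sketch of one such decoding step: binned spike counts are mapped to per-bin phoneme probabilities and greedily collapsed into a phoneme sequence, loosely in the style of CTC decoding. The phoneme set, array shapes, and random linear readout are illustrative assumptions, not the published method, which uses trained neural networks.

```python
# Hypothetical sketch of intracortical speech decoding: binned spike counts ->
# per-bin phoneme probabilities -> greedy CTC-style collapse. All names,
# shapes, and the random readout are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
PHONEMES = ["_", "AH", "B", "K", "S"]   # "_" = CTC blank (toy phoneme set)

def decode_window(spike_counts: np.ndarray, readout: np.ndarray) -> str:
    """spike_counts: (timesteps, channels) binned firing rates."""
    logits = spike_counts @ readout                       # (T, n_phonemes)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)             # softmax per bin
    best = probs.argmax(axis=1)                           # greedy path
    # Collapse repeats and drop blanks (greedy CTC decoding).
    out, prev = [], -1
    for idx in best:
        if idx != prev and PHONEMES[idx] != "_":
            out.append(PHONEMES[idx])
        prev = idx
    return " ".join(out)

# Toy demo: 50 time bins from 256 channels, random linear readout.
counts = rng.poisson(lam=3.0, size=(50, 256)).astype(float)
readout = rng.normal(size=(256, len(PHONEMES)))
print(decode_window(counts, readout))
```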
In a 2024 clinical demonstration, a 45-year-old man with advanced-stage ALS and severe
dysarthria was surgically implanted with four microelectrode arrays. The system showed
promising results just 25 days after surgery: on the first day of testing it achieved
99.6% accuracy with a limited vocabulary, and on the second day 90.2% with a much larger
vocabulary. This suggests that, with remarkably little training, such a system can
restore fast and accurate communication.[2]
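One reason a limited vocabulary is so much easier to decode is that the system only has to distinguish between a few candidate words. The toy sketch below scores each word in a tiny vocabulary against per-bin phoneme posteriors and picks the best match; the words, pronunciations, and probabilities are invented for illustration and are not taken from the study.

```python
# Toy illustration of vocabulary-constrained decoding: with few candidates,
# even noisy phoneme posteriors separate words easily. All values invented.
import numpy as np

PHONEMES = ["K", "AE", "T", "B"]
# Per-bin phoneme posteriors from a (hypothetical) neural decoder, 3 bins.
posteriors = np.array([
    [0.70, 0.10, 0.05, 0.15],   # bin 1: probably "K"
    [0.05, 0.80, 0.10, 0.05],   # bin 2: probably "AE"
    [0.05, 0.10, 0.80, 0.05],   # bin 3: probably "T"
])

VOCAB = {"cat": ["K", "AE", "T"], "bat": ["B", "AE", "T"]}  # limited vocabulary

def score(word_phones):
    idx = [PHONEMES.index(p) for p in word_phones]
    return float(np.prod(posteriors[np.arange(len(idx)), idx]))

scores = {word: score(phones) for word, phones in VOCAB.items()}
print(max(scores, key=scores.get), scores)   # -> "cat" wins comfortably
```

With a much larger vocabulary, many words share similar phoneme sequences, so the same decoder output becomes harder to disambiguate, which is consistent with the drop from 99.6% to 90.2%.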
However, these text-based BCIs miss important aspects of human speech, such as tone
and rhythm, and they deny users the experience of hearing their own voice as they speak.
To address this, in 2025 scientists implanted 256 microelectrodes in the ventral
precentral gyrus of a man with ALS who could no longer speak. The system decoded his
neural activity instantaneously, generating speech sounds and providing real-time audio
feedback resembling his own voice. Even though no clean recording of his natural speech
was available, the researchers succeeded in decoding not only phonemes but also
paralinguistic features, such as emotional tone, intonation, and even melodies. This
advance could enable people with ALS to express themselves with natural-sounding speech
and emotion.[3]
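The defining constraint of such instantaneous voice BCIs is causality: each short frame of neural data must be turned into audio before the next frame arrives, using only past context. The sketch below mimics that streaming loop, with a simple sine generator standing in for the (unpublished here) voice-synthesis model; the frame size, feature dimensions, and decoder are assumptions.

```python
# Sketch of the causal streaming loop behind real-time voice BCIs: decode one
# short frame at a time and emit audio immediately. The sine "synthesizer"
# is a stand-in; frame sizes and the decoder are illustrative assumptions.
import numpy as np

FS = 16_000                    # audio sample rate (Hz)
FRAME_MS = 10                  # frame hop; the whole budget for decode+synth
SAMPLES = FS * FRAME_MS // 1000

def decode_frame(neural_frame: np.ndarray) -> tuple[float, float]:
    """Hypothetical decoder: map one frame of features to (pitch_hz, gain)."""
    pitch = 100.0 + 50.0 * float(np.tanh(neural_frame.mean()))
    gain = float(np.clip(np.abs(neural_frame).mean(), 0.0, 1.0))
    return pitch, gain

phase, audio_out = 0.0, []
rng = np.random.default_rng(1)
for _ in range(100):                         # 1 s of streaming output
    frame = rng.normal(size=256)             # stand-in for neural features
    pitch, gain = decode_frame(frame)        # causal: this frame + past only
    t = np.arange(SAMPLES) / FS
    chunk = gain * np.sin(2 * np.pi * pitch * t + phase)
    phase += 2 * np.pi * pitch * SAMPLES / FS
    audio_out.append(chunk)                  # a real system plays this at once

waveform = np.concatenate(audio_out)         # could be written to a WAV file
print(waveform.shape)                        # (16000,)
```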
BCIs also pose ethical challenges around users' autonomy, privacy, and responsibility:
who controls what the device says, who can access sensitive brain data, and who is
accountable for its output. Such concerns can be mitigated in part by introducing
varying levels of user control.[4]
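As one concrete illustration (an assumption, not the cited paper's proposal), varying user control could take the form of a policy layer between the decoder and the speaker, so that decoded text is voiced only under the rule the user has chosen:

```python
# Hypothetical sketch of tiered user control as a gate between decoder and
# speaker. Level names and rules are illustrative, not from the cited work.
from enum import Enum

class ControlLevel(Enum):
    CONFIRM_EACH = 1   # user approves every decoded utterance
    AUTO_SPEAK = 2     # decoded speech is voiced immediately
    MUTED = 3          # decoding continues, but nothing is voiced

def gate_output(text: str, level: ControlLevel, confirmed: bool) -> str | None:
    """Return text to voice, or None if the policy withholds it."""
    if level is ControlLevel.MUTED:
        return None
    if level is ControlLevel.CONFIRM_EACH and not confirmed:
        return None
    return text

print(gate_output("hello", ControlLevel.CONFIRM_EACH, confirmed=False))  # None
print(gate_output("hello", ControlLevel.AUTO_SPEAK, confirmed=False))    # hello
```

Keeping the gate outside the decoder means the user's choice is enforced regardless of what the model produces, which speaks directly to the autonomy and accountability concerns above.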
While most research focuses on training BCI systems to better interpret brain signals,
a key question remains: can users also learn to control their brain activity to improve
BCI performance? In a recent study addressing this, 15 healthy participants were trained
to use an EEG-based BCI, imagining syllables daily for 5 consecutive days. Over time
they became better at using the BCI, though at different rates. A control experiment
showed that real-time feedback is crucial: participants improved only when they could
see how the system was responding to their brain signals. As performance improved,
measurable changes appeared in their brain activity: theta-band power increased over
frontal regions, while gamma-band power decreased over temporal regions.[5]
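Band-power changes like these are conventionally quantified from the EEG power spectrum. The sketch below uses Welch's method from SciPy to estimate theta (4–8 Hz) and gamma (30–80 Hz) power per channel; the synthetic signal and exact band edges are assumptions for illustration, not the study's precise analysis pipeline.

```python
# Sketch of EEG band-power estimation with Welch's method. The synthetic
# signal and band edges are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 250.0                                     # EEG sample rate (Hz)
rng = np.random.default_rng(2)
eeg = rng.normal(size=(8, int(10 * FS)))       # 8 channels, 10 s of noise
t = np.arange(eeg.shape[1]) / FS
eeg[0] += 2.0 * np.sin(2 * np.pi * 6.0 * t)    # inject 6 Hz theta on channel 0

def band_power(signal, fs, lo, hi):
    """Mean power spectral density within [lo, hi] Hz, per channel."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean(axis=-1)

theta = band_power(eeg, FS, 4.0, 8.0)          # frontal increase reported [5]
gamma = band_power(eeg, FS, 30.0, 80.0)        # temporal decrease reported [5]
print(theta.round(3), gamma.round(3))          # channel 0 stands out in theta
```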
Another overlooked aspect is user experience, which remains underreported. Understanding
how users feel about BCI devices, from soon after surgery through long-term daily use,
can help developers build better systems. And although recent advances in BCI suggest
a promising future, such devices remain costly, which can exclude patients and widen
gaps in access and societal inequality.[6]
To conclude, there is no doubt that ongoing breakthroughs in neurotechnology hold the
potential for significant impact. But to effect real-world change, we need continued
research, user-friendly designs, and sustained attention to ethical challenges. Urgent
investment is required to scale these solutions and ensure equitable access for every
patient in need, regardless of background or income.