
A person with paralysis using the brain-computer interface. The text above is the cued sentence and the text below is what is decoded in real time as she imagines speaking the sentence
Emory BrainGate Team
People with paralysis can now have their thoughts turned into speech just by imagining talking in their heads.
While brain-computer interfaces can already decode the neural activity of people with paralysis when they physically attempt speaking, this can require a fair amount of effort. So Benyamin Meschede-Krasa at Stanford University and his colleagues sought a less energy-intensive approach.
“We wanted to see whether there were similar patterns when someone was simply imagining speaking in their head,” he says. “And we found that this could be an alternative, and indeed, a more comfortable way for people with paralysis to use that kind of system to restore their communication.”
Meschede-Krasa and his colleagues recruited four people with severe paralysis as a result of either amyotrophic lateral sclerosis (ALS) or brainstem stroke. All the participants had previously had microelectrodes implanted into their motor cortex, which is involved in speech, for research purposes.
The researchers asked each person to attempt to say a list of words and sentences, and also to just imagine saying them. They found that brain activity was similar for both attempted and imagined speech, but activation signals were generally weaker for the latter.
The team trained an AI model to recognise those signals and decode them, using a vocabulary database of up to 125,000 words. To ensure the privacy of people’s inner speech, the team programmed the AI to be unlocked only when they thought of the password Chitty Chitty Bang Bang, which it detected with 98 per cent accuracy.
Through a series of experiments, the team found that just imagining speaking a word resulted in the model correctly decoding it up to 74 per cent of the time.
This demonstrates a solid proof-of-principle for this approach, but it is less robust than interfaces that decode attempted speech, says team member Frank Willett, also at Stanford. Ongoing improvements to both the sensors and AI over the next few years could make it more accurate, he says.
The participants expressed a significant preference for this system, which was faster and less laborious than those based on attempted speech, says Meschede-Krasa.
The concept takes “an interesting direction” for future brain-computer interfaces, says Mariska Vansteensel at UMC Utrecht in the Netherlands. But the system doesn’t clearly distinguish between attempted speech, inner speech we intend to voice and thoughts we want to keep to ourselves, she says. “I’m not sure if everyone was able to distinguish so precisely between these different concepts of imagined and attempted speeches.”
She also says the password would need to be switched on and off to match the user’s moment-to-moment decision about whether to voice what they are thinking mid-conversation. “We really need to make sure that BCI [brain-computer interface]-based utterances are the ones people intend to share with the world and not the ones they want to keep to themselves no matter what,” she says.
Benjamin Alderson-Day at Durham University in the UK says there is no reason to consider this system a mind-reader. “It really only works with very simple examples of language,” he says. “I mean if your thoughts are limited to single words like ‘tree’ or ‘bird,’ then you might be concerned, but we’re still quite a way away from capturing people’s free-form thoughts and most intimate ideas.”
Willett stresses that all brain-computer interfaces are regulated by federal agencies to ensure adherence to “the highest standards of medical ethics”.
Topics:
- artificial intelligence
- brain
Source link : https://www.newscientist.com/article/2492622-mind-reading-ai-can-turn-even-imagined-speech-into-spoken-words/?utm_campaign=RSS%7CNSNS&utm_source=NSNS&utm_medium=RSS&utm_content=home
Publish date : 2025-08-14 16:00:00
Copyright for syndicated content belongs to the linked Source.