Meta has been making impressive strides in the AI space, recently surpassing its earnings estimates and announcing plans to invest $65 billion, including building a 2GW+ data centre.
Now, it has showcased progress in using AI to decode language from the brain to help people with brain injuries who have lost their ability to communicate.
Neuroscience and AI Researchers Working Together for Breakthroughs
Meta collaborated with the Basque Center on Cognition, Brain and Language (BCBL), a leading research centre in San Sebastián, Spain, to study how AI can help advance our understanding of human intelligence, with the broader goal of achieving advanced machine intelligence (AMI).
During the announcement of the new research, Meta said, “We’re sharing research that successfully decodes the production of sentences from non-invasive brain recordings, accurately decoding up to 80% of characters, and thus often reconstructing full sentences solely from brain signals.”
The research was led by Jarod Levy, Mingfang (Lucy) Zhang, Svetlana Pinet, Jérémy Rapin, Hubert Jacob Banville, Stéphane d’Ascoli, and Jean-Rémi King from Meta.
The study involved 35 healthy volunteers who typed memorised sentences while their brain activity was recorded. Seated in front of a screen with a custom keyboard on a stable platform, they were asked to type the sentences shown on the screen without using backspace.
According to the research paper, a new deep learning model, Brain2Qwerty, was designed to decode text from non-invasive brain recordings such as electroencephalography (EEG) and magnetoencephalography (MEG). The model uses a three-stage deep learning architecture: a convolutional module to process brain signals, a transformer module, and a pre-trained language model to correct the transformer’s output.
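To make the three stages concrete, here is a toy numpy sketch of that pipeline. It does not reproduce Brain2Qwerty’s actual layers or weights: the convolution and attention use random, untrained parameters, and a simple dictionary lookup stands in for the pre-trained language model — all of these are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_module(meg, n_filters=8, k=5):
    """Stage 1 (sketch): temporal convolution over multichannel brain
    signals. meg: (channels, time) -> (time - k + 1, n_filters)."""
    C, T = meg.shape
    W = rng.standard_normal((n_filters, C, k)) * 0.1  # untrained weights
    out = np.stack(
        [sum(np.convolve(meg[c], W[f, c], mode="valid") for c in range(C))
         for f in range(n_filters)],
        axis=1,
    )
    return np.maximum(out, 0.0)  # ReLU

def transformer_module(x, vocab_size=27):
    """Stage 2 (sketch): one self-attention layer mapping the feature
    sequence to per-timestep character logits (26 letters + space)."""
    T, d = x.shape
    Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    scores = np.exp(scores - scores.max(axis=-1, keepdims=True))
    att = scores / scores.sum(axis=-1, keepdims=True)
    Wout = rng.standard_normal((d, vocab_size)) * 0.1
    return (att @ v) @ Wout  # (T, vocab_size) character logits

def lm_correct(text, lexicon=("hello", "world", "brain", "signal")):
    """Stage 3 (stand-in): the paper uses a pretrained language model;
    here a toy dictionary lookup snaps each word to the nearest entry
    by per-character mismatch count."""
    def dist(w, c):
        return sum(a != b for a, b in zip(w.ljust(len(c)), c.ljust(len(w))))
    return " ".join(min(lexicon, key=lambda c: dist(w, c))
                    for w in text.split())

# End-to-end shape check on fake MEG data (4 sensors, 50 time samples)
feats = conv_module(rng.standard_normal((4, 50)))   # (46, 8)
logits = transformer_module(feats)                  # (46, 27)
chars = "abcdefghijklmnopqrstuvwxyz "
raw = "".join(chars[i] for i in logits.argmax(axis=1))
fixed = lm_correct("helo brian")  # noisy decode -> "hello brain"
```

The point of the sketch is the division of labour: the convolutional stage compresses raw sensor signals into features, the transformer maps those features to character predictions, and the language model cleans up characters the earlier stages got wrong.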
While it remains unconfirmed whether this model was developed under ‘The Frontier AI Framework’, future studies could possibly incorporate it.
Even with these advancements in the AI model, invasive methods remain the gold standard for recording brain signals. However, these results are a significant step towards bridging the gap between non-invasive and invasive techniques.
Meanwhile, Jean-Rémi King, brain and AI tech lead, said, “The model achieves down to a ~20% character-error-rate on the best individuals. Not quite a usable product for everyday communication…but it’s a huge improvement over current EEG-based approaches.”
“We believe that this approach offers a promising path to restore communication in brain-lesioned patients…without requiring them to get electrodes implanted inside,” King added.
Meta also announced a $2.2 million donation to the Rothschild Foundation Hospital to support the neuroscience community’s collaborative work.
While this is not something people can use or benefit from just yet, the insights from Meta’s new research are a promising sign of how AI can make a difference in the neuroscience field.