
Source: Sciences et Avenir 04-07-2020

American researchers have used artificial intelligence to decode entire sentences from neural signals in real time. This is a hope for paralyzed people who have lost the ability to speak. But this prototype neuroprosthesis raises ethical concerns.

It’s 2030. On her hospital bed, a patient who has been unable to express herself since a stroke thinks: “I’m thirsty. Could I have some water?” Immediately, the computer connected to her brain implants translates her thoughts into words, and a synthetic voice alerts the caregivers. Equipped with the same translation system, other patients, paralyzed by amyotrophic lateral sclerosis (Charcot’s disease) or locked-in syndrome, have regained the ability to express themselves…

While it does not yet exist, this “word decoder” is now on track. The team of Joseph Makin, David Moses and Edward Chang, from the University of California, San Francisco (UCSF) in the United States, funded by Facebook, has taken a crucial step in its design. The artificial intelligence (AI) system they have developed already decodes sentences directly from the brain, in real time, with 97% accuracy. A record, as the researchers reported in March 2020 in a study published in the journal Nature Neuroscience.

CONTEXT

Never has the race for the brain implant been so hotly contested. While Facebook is funding neuroprosthesis projects to decode speech in the brain, the American Elon Musk is investing $150 million to develop his Neuralink device, which was scheduled for clinical trials by the end of 2020. The BrainCom project, funded by the European Commission to the tune of 8.3 million euros out of a total budget of 8.6 million, aims to decipher the neural circuits of speech. It is a hope for patients unable to express themselves, and, for some, the prospect of one day equipping healthy humans.

For the experiment, four American patients, already fitted with electrodes as part of research on epileptic seizures, volunteered. The first phase consisted of training an algorithm connected to the electrodes. To do this, the patients repeated aloud, several times over about forty minutes, series of 30 to 50 sentences containing up to 250 different words, so that the system would learn to associate the activity of their cortex (electrocorticogram, or ECoG) with sentence probabilities. Then the volunteers drew sentences at random that the AI had to translate immediately from their brain activity. “The goal of our study was to decode speech from neural signals,” explains Joseph Makin, co-author of the work. “And we managed to achieve an error rate of just 3% on a limited corpus of fifty sentences.” Unprecedented! For now, the record holds only when the sentence is actually articulated and spoken aloud. The researchers hope eventually to identify sentences that are merely spoken silently.
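For technically minded readers, error rates like the 3% Makin cites are conventionally measured as word error rate (WER): the word-level edit distance between the decoded sentence and the reference sentence, divided by the reference’s length. Here is a minimal, self-contained Python sketch of that metric; the example sentences are invented for illustration.

```python
# A small sketch of word error rate (WER), the standard way to score
# speech decoders: edit distance over words, divided by reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Invented example: one wrong word out of seven gives WER of 1/7 (~14%).
print(word_error_rate("i am thirsty could i have water",
                      "i am thirsty could i have coffee"))
```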

The system requires the invasive implantation of a neuroprosthesis, a grid of hundreds of recording electrodes, placed on the cortex. A first “brain-machine” interface of this type dedicated to speech had already been implanted in 2007, in a patient with locked-in syndrome, by Frank Guenther of Boston University (USA) and the neurologist Philip Kennedy. They had managed to decode vowels. “The first question is to determine the location,” explains Blaise Yvert, director of the Neurotechnology and Network Dynamics team at the Inserm BrainTech laboratory at Grenoble Alpes University, who coordinates the European BrainCom project for France to develop speech-decoding neuroprostheses. “Indeed, there is no ‘speech center’ in the brain. Language areas span the temporal lobe (hearing), Wernicke’s area (semantics, comprehension) and the frontal lobe, with Broca’s area and the motor and premotor cortex for speech production.” Each team has its preferences!

Thinking a word activates an articulatory movement

“Like UCSF, we have chosen to implant electrodes over the motor areas, which detect the activity of speech’s articulatory movements,” explains Blaise Yvert. The electrodes record the ECoG of the motor area that commands the articulatory movements of the tongue, lips, larynx, jaw and soft palate (velum) during the pronunciation of phonemes. “Our goal is not to decode ‘thoughts’ but spoken speech, whether it is delivered aloud or just mentally,” says Joseph Makin. “Our hypothesis is that people with locked-in syndrome, for example, can still send the appropriate commands to the speech articulators (lips, tongue, jaw), even if these brain commands no longer reach their target. And our study shows that we are able to decode them well.” In other words, thinking a sentence and actually pronouncing it would trigger comparable cortical activity. “This has already been demonstrated for the motor commands of the arm in paralyzed patients,” confirms Blaise Yvert.

These ECoG signals are then converted, by algorithms, into probable articulatory movements, which are themselves translated into candidate sentences. “Our work is what is called a proof of concept,” Makin says. “We showed what kind of decoding was possible. We must now turn to patients who can no longer speak and who will be willing to volunteer to have these electrodes implanted for longer periods.” The team also intends to enlarge the set of model sentences and make the system more flexible. “Certainly, the algorithm manages to predict strings of words live, and the researchers have shown that it reproduces the expected sentences with very good accuracy,” comments Blaise Yvert. “But the system is very constrained: it is not certain that it would recognize sentences whose words are different, or arranged in a different order, from the sentences originally learned by the AI.”
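To make the two-stage pipeline concrete, the toy Python sketch below chains a model that maps ECoG features to articulator trajectories with a second model that maps those trajectories to a word. It is an illustration of the idea described above, not the UCSF or BrainCom teams’ actual code; every name, layer size, and the random input data are assumptions.

```python
# A toy, assumed two-stage decoder: ECoG -> articulatory movements -> word.
import torch
import torch.nn as nn

N_CHANNELS = 256      # assumed number of ECoG electrodes
N_ARTICULATORS = 12   # assumed articulatory features (lips, jaw, tongue...)
N_WORDS = 250         # vocabulary size mentioned in the article

class EcogToArticulators(nn.Module):
    """Stage 1: neural activity -> probable articulatory movements."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, 64, batch_first=True)
        self.head = nn.Linear(64, N_ARTICULATORS)

    def forward(self, ecog):               # ecog: (batch, time, channels)
        h, _ = self.rnn(ecog)
        return self.head(h)                # (batch, time, articulators)

class ArticulatorsToWords(nn.Module):
    """Stage 2: articulatory trajectories -> word probabilities."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_ARTICULATORS, 64, batch_first=True)
        self.head = nn.Linear(64, N_WORDS)

    def forward(self, traj):               # traj: (batch, time, articulators)
        _, h = self.rnn(traj)              # final hidden state
        return self.head(h[-1])            # (batch, words): one word here

stage1, stage2 = EcogToArticulators(), ArticulatorsToWords()
ecog = torch.randn(1, 400, N_CHANNELS)     # 400 time steps of fake ECoG
word_logits = stage2(stage1(ecog))
print(word_logits.argmax(dim=-1))          # index of the most likely word
```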

Systems popular with Internet giants

The ideal system would be able to decode any sentence “with thousands of words in any order,” says Blaise Yvert. “We’re still a long way from that.” Another difficulty: how do you train the system when it is being tested on patients who cannot speak? Not to mention the ethical issues. “How do you ensure that the person who is fitted with the device maintains his or her autonomy?” asks Hannah Maslen, a philosopher at the University of Oxford (UK) who is reflecting with Eric Fourneret on the future of these systems within BrainCom. “It is important to ensure that the device reproduces what the person wants to say and stays silent whenever they wish. These fundamental issues must be addressed from the design stage of the system.” Finally, won’t these neuroprostheses, prized by Internet giants like Facebook, one day equip able-bodied people to “augment” their abilities? “Personally, I reject this idea,” says Joseph Makin. Others may not feel the same.

NON-INVASIVE: Brainwaves to Spell Words

While futuristic speech prostheses require implanting electrodes in the brain, other, non-invasive solutions already use the electroencephalogram (EEG, see lexicon below). “The measurement is coarser than with intracerebral implants, of course, but easier to implement,” explains Fabien Lotte, research director at the Inria centre in Bordeaux. Some brain-machine interfaces use EEG to help users spell words. “One of the key techniques is to detect the P300 brain wave, which appears 300 milliseconds after a rare and expected stimulus,” continues Fabien Lotte. If a person watches an alphabet scroll across a screen, their brain emits the P300 wave when the sought-after letter appears, and the algorithm writes the letter down as if taking dictation. The downside of the system is the detection time: 12 seconds per letter. The Dutch start-up MindAffect may have found a workaround. It consists of recording the activity of the visual cortex while the person looks at the letters of the alphabet on a screen, each flashing according to a particular code. “By detecting which code the visual cortex is seeing, the system recognizes the corresponding letter,” explains Ivo de la Rive Box, founder and director of MindAffect. “After only one minute of training, the device can recognize each letter in less than 2 seconds!” Fabien Lotte is now studying how users can best learn to operate these new kinds of expression systems, as part of the Brain Conquest project of the European Research Council (ERC). Quite a program!
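The flashing-code approach can be illustrated simply: if every letter flickers according to its own binary code, decoding amounts to finding the code that best correlates with the recorded visual-cortex signal. The Python sketch below is an assumed, heavily simplified stand-in for MindAffect’s actual algorithm; the codes, the signal model and all sizes are invented.

```python
# A toy illustration of code-based letter decoding (assumed, simplified):
# pick the letter whose flash code best correlates with the EEG trace.
import numpy as np

rng = np.random.default_rng(0)
letters = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
# One pseudo-random binary flash code per letter (60 frames, illustrative).
codes = {l: rng.integers(0, 2, 60).astype(float) for l in letters}

def decode_letter(eeg: np.ndarray) -> str:
    """Return the letter whose flash code best matches the EEG trace."""
    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]
    return max(letters, key=lambda l: corr(eeg, codes[l]))

# Simulate looking at the letter "W": its code plus measurement noise.
eeg = codes["W"] + 0.5 * rng.standard_normal(60)
print(decode_letter(eeg))  # expected: "W" (noise permitting)
```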

Lexicon

BRAIN-MACHINE INTERFACE: A direct link between a brain and a computer, allowing an individual to perform tasks without involving the peripheral nerves and muscles.

EEG: The electroencephalogram is the recording of the activity of neurons in the cortex (the outer layer covering the hemispheres of the brain), made with electrodes placed on the scalp.

ECOG: The electrocorticogram is the recording of the activity of neurons in the cortex, made with electrodes placed directly on the brain.
