Woman communicates again after a stroke thanks to artificial intelligence

Artificial intelligence (AI) arouses misgivings, partly due to the apocalyptic predictions of many science fiction movies warning that one day, not too far off, machines will take control and subdue humans. For the moment, however, it has become a key element in the development of technologies that provide great help in fields such as medicine.

Now, a new technology developed by American scientists, based on a brain implant and a digital avatar, has allowed a woman who survived a stroke but lost her speech to regain the ability to communicate through her facial expressions for the first time in 18 years.

Researchers at the University of California San Francisco (UCSF) and the University of California Berkeley (UC Berkeley) created a system that, for the first time, has synthesized speech and facial expressions from brain signals, and that can also decode these signals into text at almost 80 words per minute, a significant improvement over other devices on the market.

“Our goal is to restore a full and personalized form of communication, which is the most natural way for us to talk to others”

Dr. Edward Chang, chair of neurological surgery at UCSF, has worked on this technology, known as a brain-computer interface (BCI), for over 10 years. He hopes that this latest research breakthrough will lead to an FDA-approved system that enables speech from brain signals in the near future. The results of the work have been published in Nature.

Decoding speech signals to achieve natural communication

Ann, the patient on whom the new technology has been tested, suffered a stroke when she was 30 years old, as a result of which she lost control of all the muscles in her body. After years of physical therapy, she was able to move her facial muscles enough to laugh or cry, but the muscles needed for speech remained immobile. Today, she is helping researchers at UC San Francisco and UC Berkeley develop new brain-computer technology that could one day allow people like her to communicate more naturally through a digital avatar that resembles them.

“Our goal is to restore a full and personalized form of communication, which is the most natural way for us to talk to others,” Chang said. “These advances bring us that much closer to making this a real solution for patients.”

Chang’s team’s goal was to decode Ann’s brain signals into the richness of speech, along with the movements that animate a person’s face during a conversation. To do this, they implanted a paper-thin rectangle of 253 electrodes on the surface of her brain over areas they knew were key to speech.

The electrodes intercepted the brain signals that, but for the stroke, would have reached the muscles of Ann’s lips, tongue, jaw and larynx, as well as her face. A cable, plugged into a port fixed to Ann’s head, connected the electrodes to a bank of computers.

Ann worked with the team for weeks to train the system’s AI algorithms to recognize her unique brain signals for speech. To achieve this, she repeated different phrases from a conversational vocabulary of 1,024 words over and over again until the computer recognized the patterns of brain activity associated with all the basic sounds of speech.
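
To give a sense of what that training step involves, here is a minimal, purely illustrative sketch in Python: a simple classifier is fit on synthetic “brain activity” windows labeled with phoneme classes. The feature values, the classifier choice and the data sizes are assumptions made for the example; the actual system relies on far more sophisticated deep-learning models trained on Ann’s real recordings.

```python
# Toy sketch: learn to map short windows of neural activity to phoneme labels.
# Synthetic data stands in for real recordings; this is NOT the UCSF model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_ELECTRODES = 253        # electrodes in the implanted array (from the article)
N_PHONEMES = 39           # phoneme classes the decoder must learn
SAMPLES_PER_PHONEME = 50  # repetitions gathered during the training sessions

# Fake feature vectors: one 253-dimensional window per utterance, with a
# different mean pattern per phoneme so the toy problem is learnable.
X = np.vstack([
    rng.normal(loc=p * 0.25, scale=1.0, size=(SAMPLES_PER_PHONEME, N_ELECTRODES))
    for p in range(N_PHONEMES)
])
y = np.repeat(np.arange(N_PHONEMES), SAMPLES_PER_PHONEME)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"toy phoneme accuracy: {clf.score(X_test, y_test):.2f}")
```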

Instead of training artificial intelligence to recognize whole words, the researchers created a system that decodes words from smaller components called phonemes. These are the speech subunits that form spoken words in the same way that letters form written words. For example, ‘Hello’ contains four phonemes: ‘HH’, ‘AH’, ‘L’ and ‘OW’.

In this way, the computer only needed to learn 39 phonemes to decipher any English word, improving the system’s accuracy and tripling its speed. “Accuracy, speed and vocabulary are crucial,” said Sean Metzger, who developed the text decoder with Alex Silva, both graduate students in the UC Berkeley/UCSF Joint Bioengineering Program. “It’s what gives Ann the potential, over time, to communicate almost as fast as we do and to have much more naturalistic and normal conversations.”
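
In essence, the passage above describes a two-stage pipeline: classify phonemes, then assemble them into words. The toy Python sketch below illustrates the second stage with a tiny, hypothetical pronunciation lexicon and a greedy lookup; real decoders use a full dictionary plus a language model, so this only shows why 39 phoneme classes are enough to reach any English word.

```python
# Toy lexicon mapping phoneme sequences to words. The entries are
# hypothetical, apart from the "hello" example quoted in the article.
TOY_LEXICON = {
    ("HH", "AH", "L", "OW"): "hello",
    ("HH", "AW"): "how",
    ("AA", "R"): "are",
    ("Y", "UW"): "you",
}

def phonemes_to_words(phonemes, lexicon):
    """Greedily match the longest known phoneme span, left to right."""
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):   # try the longest span first
            candidate = tuple(phonemes[i:j])
            if candidate in lexicon:
                words.append(lexicon[candidate])
                i = j
                break
        else:
            words.append("<unk>")               # no match: skip one phoneme
            i += 1
    return " ".join(words)

decoded = ["HH", "AH", "L", "OW", "HH", "AW", "AA", "R", "Y", "UW"]
print(phonemes_to_words(decoded, TOY_LEXICON))  # -> "hello how are you"
```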

The team devised an algorithm to synthesize speech, which they customized to sound like Ann’s pre-injury voice using a recording of her speaking at her wedding. The team animated Ann’s avatar with the help of software that simulates and animates facial muscle movements, developed by Speech Graphics, a company that makes AI-based facial animations.

The researchers created custom machine learning processes that allowed the company’s software to combine the signals sent from Ann’s brain as she tried to speak and convert them into movements on her avatar’s face: the jaw opens and closes, the lips protrude and pucker, and the tongue rises and falls, while facial expressions of joy, sadness and surprise are also reproduced.
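
As a rough illustration of that last step, the sketch below turns hypothetical per-frame articulator estimates (jaw, lips, tongue, plus an emotion label) into blendshape-style weights of the kind a 3D face rig can consume. All parameter names are invented for the example; the actual Speech Graphics mapping is proprietary and far more detailed.

```python
# Toy sketch: convert decoded articulator estimates into facial rig weights.
from dataclasses import dataclass

@dataclass
class ArticulatorFrame:
    jaw: float        # 0 = closed, 1 = fully open
    lip_round: float  # 0 = spread, 1 = puckered
    tongue: float     # 0 = low, 1 = raised
    emotion: str      # "neutral", "joy", "sadness" or "surprise"

EMOTION_WEIGHTS = {
    "neutral":  {},
    "joy":      {"mouth_smile": 0.8, "cheek_raise": 0.5},
    "sadness":  {"brow_inner_up": 0.7, "mouth_frown": 0.6},
    "surprise": {"brow_raise": 0.9, "jaw_open": 0.4},
}

def frame_to_blendshapes(frame: ArticulatorFrame) -> dict:
    """Map one decoded frame to the weights an avatar rig could consume."""
    weights = {
        "jaw_open": frame.jaw,
        "lip_pucker": frame.lip_round,
        "tongue_up": frame.tongue,
    }
    # Layer the emotional expression on top of the articulation.
    for name, value in EMOTION_WEIGHTS[frame.emotion].items():
        weights[name] = max(weights.get(name, 0.0), value)
    return weights

print(frame_to_blendshapes(ArticulatorFrame(0.6, 0.2, 0.4, "joy")))
```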

A critical next step for the team is to create a wireless version that does not require Ann to be physically connected to the BCI. “Giving people like Ann the ability to freely control their own computers and phones with this technology would have profound effects on their independence and social interactions,” concluded co-senior author David Moses, an assistant professor of neurological surgery.




Source: www.webconsultas.com