A brain activity decoder, developed with a new artificial intelligence system based on a methodology similar to that of OpenAI’s ChatGPT and Google’s Bard, helps to ‘read’ the human mind non-invasively, since no surgical implants are needed for it to work. It could allow people who have no cognitive problems but have lost the ability to speak, for example after suffering a stroke, to communicate intelligibly again.
This system, called a semantic decoder, was developed by researchers at the University of Texas at Austin, who measured the brain activity of their study participants with an fMRI scanner after prolonged decoder training, during which each individual listened to hours of podcasts in the scanner. Then, and only if the participant agrees to have their thoughts decoded, the device can generate text from their brain activity while they listen to a new story or imagine telling a story.
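To make the general idea more concrete, here is a minimal, purely illustrative sketch of how such a pipeline could work in principle: an encoding model is fit on brain responses to known language, and new recordings are then decoded by picking the candidate text whose predicted response matches best. This is not the researchers’ implementation; the simulated data, the placeholder embedding function and every name in it are assumptions made only for illustration.

```python
# Purely illustrative sketch of an encoding-model decoder, in the spirit of the
# approach described above. The data are simulated and the embedding function is
# a placeholder, not the study's actual language model or code.
import numpy as np

def embed(sentences, dim=64):
    """Placeholder for language-model features (e.g. GPT-style embeddings)."""
    return np.array([np.random.default_rng(abs(hash(s)) % (2**32)).normal(size=dim)
                     for s in sentences])

rng = np.random.default_rng(0)

# Training phase: brain responses recorded while listening to known sentences
train_sentences = [f"training sentence {i}" for i in range(500)]
X_train = embed(train_sentences)          # (n_samples, n_features)
Y_train = rng.normal(size=(500, 1000))    # simulated fMRI voxel responses

# Ridge regression: learn to predict brain activity from text features
lam = 10.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(X_train.shape[1]),
                    X_train.T @ Y_train)

# Decoding phase: choose the candidate sentence whose predicted brain
# response correlates best with the newly recorded activity
def score(candidate, brain_response):
    predicted = embed([candidate]) @ W
    return np.corrcoef(predicted.ravel(), brain_response.ravel())[0, 1]

new_response = rng.normal(size=(1, 1000))  # new (simulated) fMRI recording
candidates = ["she hasn't started learning to drive yet",
              "he went to the store yesterday"]
print("Decoded guess:", max(candidates, key=lambda c: score(c, new_response)))
```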
“For a non-invasive method, this is a real breakthrough compared to what’s been done before, which is usually single words or short sentences. We are getting the model to decode continuous language over long periods of time with complicated ideas,” said Alex Huth, an assistant professor of neuroscience and computer science at UT Austin and one of the leaders of the study, whose results have been published in Nature Neuroscience.
Brain decoding only worked if the person cooperated
The researchers have explained that their objective was to capture the essence of what people say or think and that, although the method is not infallible, they verified that when the decoder was trained to monitor a participant’s brain activity, the machine produced text very close to the intended meaning of the original words.
These scientists have given an example to help explain the methodology. During the tests, the thoughts of a participant who heard someone say “I don’t have a driver’s license yet” were translated into words like “she hasn’t even started learning to drive yet”. And hearing “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!’” was translated as “I started screaming and crying, and then she just said, ‘I told you to leave me alone.’”
Another of the experiments they conducted consisted of having the participants watch four short, silent videos while in the scanner, and the semantic decoder was able to use their brain activity to accurately describe some of the situations shown in the videos. The decoder, however, only worked properly if people had voluntarily taken part in the training and offered no resistance during the tests, for example by thinking about something else to distort the results.
Is it possible to read our minds without us knowing?
The authors of the paper have reported that this system is not yet viable outside the laboratory because it requires spending a lot of time in an fMRI machine, but they are confident that it could later be transferred to other, more accessible brain imaging systems, such as functional near-infrared spectroscopy (fNIRS).
They have also indicated that it is not possible to use this technology to spy on us without our knowledge, as “a person needs to spend up to 15 hours lying in an MRI scanner, be perfectly still and pay close attention to the stories they’re hearing before this really works well for them,” Huth said.
In fact, when they tested the system on people who had not participated in this training, they found that the results were unintelligible. In addition, they found that when those who had completed the training resisted attempts at brain decoding –for example, by thinking about animals– it was not possible to achieve good results either.
“We take very seriously concerns that it could be used for bad purposes, and we have worked to avoid that. We want to make sure that people only use these types of technologies when they want to and that they help them,” said Jerry Tang, a doctoral student in computer science at UT Austin who also led the study. However, he believes that “at this time, while the technology is in such an early state, it is important to be proactive and enact policies that protect people and their privacy. Regulating what these devices can be used for is also very important.”
Source: www.webconsultas.com