Wednesday, November 30, 2022

Scientists Create Algorithm that ‘Reads’ Thoughts from Brain Scans

Scientists have for the first time developed a method for reconstructing continuous language from functional magnetic resonance imaging brain recordings. The findings are the next step in the hunt for improved brain-computer interfaces, which are being developed as assistive technologies for people who cannot talk or type.

In a preprint published on bioRxiv on September 29, a team from the University of Texas at Austin describes a ‘decoder,’ or algorithm, that can ‘read’ the words a person is hearing or thinking from a functional magnetic resonance imaging (fMRI) brain scan. The new decoder is the first to achieve this with a noninvasive technique; other researchers have previously reported reconstructing language or visuals from data collected by implants in the brain.

Alexander Huth, a cognitive neuroscientist at the University of Texas at Austin and a co-author on the study, claims, “If you had asked any cognitive neuroscientist in the world twenty years ago if this were doable, they would have laughed you out of the room.”

Using fMRI data for this type of study is challenging because fMRI is slow relative to the speed of human thought: MRI machines measure variations in blood flow within the brain as a proxy for neural activity, and those changes unfold over seconds. According to Huth, the system in this research therefore does not decode language word by word; instead, it captures the meaning of a sentence or thought at a higher level.
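To see why fMRI cannot resolve individual words, consider a toy simulation (not from the study; the timings and response shape below are illustrative assumptions) of how a slow blood-flow response smears many words into each measurement:

```python
import numpy as np

# Toy illustration (not from the study): speech arrives at roughly one
# word per second, but the blood-flow response that fMRI measures rises
# and falls over ~10 seconds, so each fMRI sample blends many words.
dt = 0.5                       # seconds per time step
t = np.arange(0, 20, dt)       # 20 seconds of "speech"

word_onsets = np.zeros_like(t)
word_onsets[::2] = 1.0         # one word every second

# A crude hemodynamic response function: a gamma-like bump peaking about
# 2 seconds after a word and lingering for several seconds afterward.
hrf_t = np.arange(0, 10, dt)
hrf = hrf_t ** 2 * np.exp(-hrf_t)
hrf /= hrf.sum()

bold = np.convolve(word_onsets, hrf)[: len(t)]

# Each late sample of `bold` mixes contributions from roughly ten prior
# words, so no single word can be read off any one measurement.
```

This is why, as Huth notes, the decoder targets sentence-level meaning rather than individual words.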

Huth and his colleagues trained their algorithm on fMRI brain data recorded while three study participants—one woman and two men, all in their 20s or 30s—listened to 16 hours of podcasts and radio stories, including The Moth Radio Hour, TED presentations, and John Green’s The Anthropocene Reviewed. According to Huth, the participants needed to listen to a wide variety of media for the decoder to be accurate and broadly applicable.

Trained on the 16 hours of fMRI recordings of a subject’s brain, the decoder generated candidate word sequences and predicted the fMRI readings each would produce. These ‘guesses’ were then compared with the participant’s actual fMRI recording, and the words from the prediction that matched most closely became the decoder’s output. According to Huth, this approach allowed the decoder to successfully translate thoughts that were not drawn from the audio recordings used in training.
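The guess-and-check loop described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in, not the team's actual model: a simple linear "encoding model" maps the semantic features of each candidate word sequence to a predicted brain response, and the candidate whose prediction best correlates with the measured response wins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoding model standing in for the trained one: it maps a
# candidate's semantic feature vector to a predicted fMRI response.
def predict_fmri(features, weights):
    return features @ weights

# Illustrative sizes: 5 candidate continuations, 10 semantic features,
# 50 voxels in the "recording".
n_candidates, n_features, n_voxels = 5, 10, 50
weights = rng.standard_normal((n_features, n_voxels))
candidates = rng.standard_normal((n_candidates, n_features))

# Simulated "actual" recording: candidate 2's predicted response plus
# measurement noise, so a working decoder should select candidate 2.
actual = predict_fmri(candidates[2], weights) + 0.1 * rng.standard_normal(n_voxels)

# Guess-and-check: score each candidate by how well its predicted
# response correlates with the measured one; keep the best match.
scores = [np.corrcoef(predict_fmri(c, weights), actual)[0, 1] for c in candidates]
best = int(np.argmax(scores))
```

In this toy setup `best` comes out as 2, the candidate whose predicted response actually generated the simulated recording.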

To measure the decoder’s effectiveness, the researchers scored how closely its output matched the stimulus the participant had heard. They also scored language produced by the same decoder when it had no access to an fMRI recording. The two sets of scores were then compared, and the statistical significance of the difference between them was examined.
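That kind of comparison can be illustrated with a simple permutation test. The similarity scores below are fabricated stand-ins for illustration only, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated similarity scores: how well decoded text matched the
# stimulus, with and without access to the fMRI recording.
decoded_scores = rng.normal(0.6, 0.1, size=30)  # decoder saw the recording
null_scores = rng.normal(0.3, 0.1, size=30)     # decoder ran "blind"

observed_diff = decoded_scores.mean() - null_scores.mean()

# Permutation test: shuffle the group labels many times and count how
# often a difference at least this large arises by chance alone.
pooled = np.concatenate([decoded_scores, null_scores])
n = len(decoded_scores)
exceed = 0
n_perm = 10_000
for _ in range(n_perm):
    rng.shuffle(pooled)
    if pooled[:n].mean() - pooled[n:].mean() >= observed_diff:
        exceed += 1
p_value = (exceed + 1) / (n_perm + 1)
```

A small `p_value` indicates the fMRI-informed decoder outperforms chance, which is the kind of evidence the evaluation was designed to produce.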

According to the findings, the algorithm’s guess-and-check process eventually builds a whole story from fMRI recordings that corresponds, in Huth’s words, “very well” to the account being given in the audio recording. It does have drawbacks, however: it handles pronouns poorly, frequently confusing first- and third-person. Huth says the decoder “knows what’s happening very precisely; it doesn’t know who is performing the actions.”

Although MRI scanners are expensive and inconvenient to use, Huth argues that because the decoder relies on noninvasive fMRI recordings, it is more likely than invasive approaches to be used in the real world. He suggests pairing a similar computational decoder with magnetoencephalography, another noninvasive brain imaging tool that is more portable and has finer temporal resolution than fMRI, to give nonverbal people a means of communication.

In Huth’s opinion, the decoder’s accomplishment is most intriguing for the new understanding it offers of how the brain functions. For instance, he points out that the findings indicate which brain regions are responsible for creating meaning. What Huth finds most unexpected is the decoder’s ability to reconstruct stimuli lacking semantic language, despite having been trained only on people listening to spoken language.

The study, which has not yet been peer-reviewed, raises questions about what it means for a decoder to capture underlying meaning rather than literal text or speech-like language. Because the new decoder recognizes meaning, or semantics, as opposed to individual words, its success is difficult to quantify.

Huth agrees that technology that can effectively ‘read minds’ can be ‘scary’ to some. According to him, his team has carefully considered the ramifications of the research and, out of concern for mental privacy, established that the decoder would not function without the participant’s consent.

From a privacy standpoint, it is also noteworthy that a decoder trained on the brain scans of one individual was unable to reconstruct the language of another individual, as the study returned “essentially no useable information.” Therefore, substantial training would be required before a person’s thoughts could be accurately deciphered.