Scientists use brain scans, AI to ‘decode’ thoughts

Scientists have developed a method that uses brain scans and artificial intelligence language models to transcribe the “gist” of what people are thinking, in what has been described as a first step toward mind-reading.

The US scientists acknowledged that while the decoder’s main goal is to help people who have lost the ability to communicate, the technology raises concerns about “mental privacy.”

To allay such fears, the researchers ran tests showing that the decoder cannot be used on anyone who has not allowed it to be trained on their own brain activity over long hours inside a functional magnetic resonance imaging (fMRI) scanner.

Previous research has shown that brain implants can enable people who can no longer speak or type to spell out words and even sentences.

These “brain-computer interfaces” focus on the part of the brain that controls the mouth as it tries to form words.

Alexander Huth, a neuroscientist at the University of Texas at Austin and co-author of the new study, said his team’s decoding system “works at a different level.”

“Our system works on the level of ideas and semantics. Meaning is what we are after,” Huth said at an online press event.

According to the study, published in the journal Nature Neuroscience, it is the first system able to reconstruct continuous language without an invasive brain implant.

“Deeper than words”

For the study, three people spent a total of 16 hours each inside an fMRI scanner listening to spoken stories, mostly podcasts such as The New York Times’ Modern Love.

This allowed the researchers to map how words, phrases, and meanings prompted responses in the regions of the brain known to process language.

That data was fed into a neural network language model built on GPT-1, a predecessor of the AI technology later made immensely popular by ChatGPT.

The model was trained to predict how each individual’s brain responds to perceived speech; the decoder then proposes candidate word sequences and narrows them down to the one whose predicted response most closely matches the activity actually measured.
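In rough outline, the approach can be sketched as follows. The function names, inputs, and correlation-based similarity measure below are illustrative assumptions, not the authors’ code; the sketch is only meant to show the idea of scoring candidate word sequences against measured brain activity.

```python
import numpy as np

def pick_best_candidate(measured_response, candidates, predict_response):
    """Toy sketch of semantic decoding: keep the candidate word sequence whose
    predicted brain response is most similar to the measured fMRI response.

    measured_response: 1-D array of voxel activity recorded in the scanner
    candidates:        candidate word sequences, e.g. continuations proposed
                       by a language model such as GPT-1
    predict_response:  per-participant encoding model mapping a word sequence
                       to predicted voxel activity (assumed, for illustration)
    """
    best_words, best_score = None, -np.inf
    for words in candidates:
        predicted = predict_response(words)
        # similarity between predicted and measured voxel responses
        score = np.corrcoef(predicted, measured_response)[0, 1]
        if score > best_score:
            best_words, best_score = words, score
    return best_words
```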

To test the decoder’s accuracy, each participant later listened to a new story in the fMRI scanner, one that had not been used in training.

Jerry Tang, the first author of the study, said that the decoder “could recover the gist” of what the user heard.

When a participant heard the words “I haven’t got my license yet,” the model decoded them as “she’s not even begun to learn how to drive yet.”

The researchers admitted that the decoder had difficulty with personal pronouns such as “I” and “she.”

Even when participants made up their own stories or watched silent films, the decoder could still grasp the “gist,” they said.

Huth explained that this showed “we were decoding something deeper than language and then converting it to language.”

Because fMRI scanning is too slow to capture individual words, Huth explained, the data it collects are a “mishmash,” an accumulation of information over a few seconds at a time.

The idea is still clear, but the words are lost.
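To illustrate why word-level detail gets lost, here is a toy example, not taken from the study; the window size and the notion of per-word features are invented. A slow measurement effectively averages the features of several consecutive words into a single sample:

```python
import numpy as np

def pool_over_scans(word_features, words_per_scan=8):
    """Toy illustration: each slow fMRI scan reflects a blur of several
    seconds of speech, so the measurement mixes the features of many
    consecutive words rather than capturing them one at a time."""
    pooled = []
    for start in range(0, len(word_features), words_per_scan):
        window = word_features[start:start + words_per_scan]
        pooled.append(np.mean(window, axis=0))  # individual words blur into one average
    return np.array(pooled)
```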

Ethical warning

David Rodriguez-Arias Vailhen, a bioethics professor at Spain’s Granada University who was not involved in the research, said the system goes beyond what previous brain-computer interfaces have achieved.

He warned that, in the future, such decoding could be carried out against people’s will, for example while they are asleep.

The researchers anticipated such concerns.

Their tests showed that the decoder does not work on a person unless it has already been trained on that person’s own brain activity.

The three participants also managed to defeat the decoder.

While listening to one of the podcasts, the participants were asked to count by sevens, name and imagine animals, or make up a story in their heads; the researchers said all of these tactics “sabotaged the decoder.”

The team is now working to speed up the process so that brain scans can be decoded in real time.

They also called for regulations to protect mental privacy.

“Our minds have so far guarded our privacy,” said bioethicist Rodriguez-Arias Vailhen. “This discovery could be the first step in compromising this freedom in the future.”
