Mind Reading: It’s Actually Happening

Tara Gumpper, Staff Writer

Magicians all around the world will wow you with their mentalism tricks: the seemingly impossible ability to read your mind. They might guess the number you’re thinking of, or tell you that you’re picturing a grey elephant from Denmark (a common mentalism illusion). However, mentalism is usually based on what people commonly think of, or on quirks of mathematics.

 

But mentalism is about to get real.

 

A New Form of Mentalism

This new form of mentalism is based on pulling strings of thought from your brain and forming them into language. Most previously developed techniques were invasive: while they could read people’s thoughts, they required an implant or surgery to do so. This new brain-reading method, however, uses brain scans rather than implants.

 

Conducting the Study

The study was conducted by placing volunteers into an fMRI (functional magnetic resonance imaging) scanner. Basically, it’s a large capsule that safely takes pictures of your brain. For a total of sixteen hours each, the volunteers listened to stories and podcasts, especially The Moth (a narrative radio show).

During these sessions, the fMRI machine tracked the neural activity happening inside the volunteers’ skulls by recording changes in blood flow throughout their brains, and those changes were noted in the scans.

 

Cracking the Thought-to-Speech Code

Using this data, two computational neuroscientists (researchers who study the brain with computers), Alexander Huth and Jerry Tang of the University of Texas at Austin, along with their team, began the hard work of cracking the code of brain activity. The team was able to connect specific patterns of brain activity to the words and ideas in the speech the volunteers heard.
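
For readers who like to see how this might look in practice, here is a tiny Python sketch of the general idea of “connecting brain patterns to words.” It is only an illustration with made-up stand-in data and hypothetical array names, not the researchers’ actual code: it fits a simple regression that, given features of the words a listener heard, predicts the brain response those words should produce.

# Illustrative sketch only: a simple linear "encoding model" that learns to
# predict brain responses from features of the words a listener heard.
# The arrays below (word_features, brain_scans) are hypothetical stand-ins,
# not the researchers' actual data or code.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 500   # number of fMRI snapshots taken while listening
n_features = 64      # size of each word/phrase feature vector (e.g. from a language model)
n_voxels = 1000      # number of small brain regions ("voxels") the scanner measures

# Hypothetical training data: features of the story heard at each timepoint,
# and the blood-flow response recorded at the same timepoint.
word_features = rng.normal(size=(n_timepoints, n_features))
brain_scans = rng.normal(size=(n_timepoints, n_voxels))

# Fit one regularized linear map from story features to brain activity.
# Given a new sentence's features, the model can then predict what the
# brain response to that sentence should look like.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(word_features, brain_scans)

predicted_response = encoding_model.predict(word_features[:1])
print(predicted_response.shape)  # (1, n_voxels)

In the real study the features and scans were far larger and more carefully processed, but the basic idea, learning a map from language to brain activity, is the same.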

The scientists based their decoder on a large language model (an AI model used for writing, translating, classifying and interpreting text): GPT-1, an early predecessor of the popular ChatGPT.

Once the researchers had cracked the code of thought-to-speech, they worked the decoder in reverse. The language model guessed which words were likely to come next, calculating the likelihood of one word following another, and the decoder then used the brain activity scans as a self-check to settle on a final next word. It then repeated that process, nearly instantaneously, to figure out the word after that.
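
For the curious, here is another rough Python sketch of that guess-and-check loop. Everything in it is a hypothetical placeholder (the candidate words, the pretend brain responses, the scoring), so it only illustrates the general idea of proposing likely next words and keeping the one that best matches the scan.

# Illustrative sketch only: the "guess a word, then check it against the scan"
# loop described above. The helper functions are hypothetical placeholders,
# not the actual decoder from the study.
import numpy as np

def language_model_candidates(text_so_far):
    """Hypothetical stand-in for GPT-1: propose likely next words with probabilities."""
    return [("walked", 0.4), ("looked", 0.35), ("screamed", 0.25)]

def predicted_brain_response(text):
    """Hypothetical stand-in for an encoding model: predict the brain activity
    that hearing this text should produce (one value per voxel)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=1000)

def decode(observed_scan, n_words=5):
    text = ""
    for _ in range(n_words):
        best_word, best_score = None, -np.inf
        for word, prob in language_model_candidates(text):
            candidate = (text + " " + word).strip()
            # Score = how likely the language model thinks the word is, plus
            # how well the predicted brain response matches the real scan.
            match = -np.linalg.norm(predicted_brain_response(candidate) - observed_scan)
            score = np.log(prob) + match
            if score > best_score:
                best_word, best_score = word, score
        text = (text + " " + best_word).strip()
    return text

print(decode(observed_scan=np.zeros(1000)))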

As Laura Sanders of Science News reports, Huth acknowledges the decoder’s limits: ‘“It definitely doesn’t nail every word…but that doesn’t account for how it paraphrases things,” he says. “It gets the ideas”’ (Sanders). Basically, the decoder captures the gist of each person’s idea rather than reproducing the exact words. Even so, this is still an incredible breakthrough in neuroscience.

 

Decoding from Memory

The decoder could also reconstruct speech from people’s memories. In one part of the study, the volunteers silently rehearsed a story to themselves, as if they were telling it to somebody in their head. The scientists found that, once again, the gist of the decoder’s output was nearly identical to the gist of the stories the volunteers had imagined.

In another part of the study, the participants watched a silent film. The decoder was able to translate their neural activity into a storyline that closely followed the film’s plot.

This means the decoder works at a high level of language interpretation. Deriving the meaning of a word, phrase or sentence is not something a simple artificial intelligence can do; that kind of understanding of speech and meaning is remarkably high-level, even for a computer.

 

Struggles in Translation

Unfortunately, there have been some struggles with the translation. For example, the AI doesn’t really understand how to deal with pronouns. It can’t tell whether the speaker is thinking in first or third person, or who is doing what to whom.

There are also some practical setbacks. The technology isn’t exactly portable, since fMRI machines are bulky, weigh thousands of pounds, and are extremely expensive.

Each decoder is also unique to its user. The models take a long time to train, and once trained, a decoder only works for the individual it was trained on. On top of that, the volunteer has to actively cooperate with the machine. In a study like the silent-film experiment, the person has to keep their mind from wandering, since the whole decoding could be thrown off if their thoughts drift to simple things, like what they might be having for dinner.

Although these setbacks stand in the way of commercializing the process, many of them could likely be resolved in the future.

 

Example Decodings

Included below are a few example decodings from the studies. In each example, the first quote is the actual transcript of what the participant was listening to, and the second is the decoded transcript. As a side note, punctuation and quotation marks have been added to the original transcripts to make them easier to understand.

 

Example 1:

“I got up from the air mattress and pressed my face against the glass of the bedroom window, expecting to see eyes staring back at me but instead finding only darkness.”

“I just continued to walk up to the window and open the glass I stood on my toes and peered out I didn’t see anything and looked up again I saw nothing”

 

Example 2:

“I didn’t know whether to scream, cry, or run away. Instead, I said ‘Leave me alone, I don’t need your help.’ Adam disappeared and I cleaned up alone, crying.”

“Started to scream and cry, and then she just said, ‘I told you to leave me alone, you can’t hurt me anymore. I’m sorry,’ and then he stormed off. I thought he had left. I started to cry.”

 

Example 3:

“That night, I went upstairs to what had been our bedroom and not knowing what else to do, I turned out the lights and lay down on the floor.”

“We got back to my dorm room. I had no idea where my bed was. I just assumed I would sleep on it but instead I lay down on the floor.”

 

Example 4: 

“I don’t have my driver’s license yet and I just jumped out right when I needed to and she said, ‘Well, why don’t you come back to my house and I’ll give you a ride.’ I say okay.”

“She is not ready. She has not even started to learn to drive yet. I had to push her out of the car. I said, ‘We will take her home now,’ and she agreed.”

 

Example 5:

“Marko leaned over to me and whispered, ‘you are the bravest girl I know’.”

“He runs up to me and hugs me tight, and whispers, ‘you saved me’.”

 

Example 6:

“We start to trade stories about our lives. We’re both from up north.”

“We started talking about our experiences in the area. He was born in I was from the north.”

 

Example 7:

“Look for a message from my wife saying that she had changed her mind and that she was coming back.”

“To see her for some reason I thought she would come to me and say she misses me.”

 

Looking Ahead

Mentalism machines like this one could soon be helping the community. Devices such as these could give a voice to people who cannot speak, letting them get their ideas out into the world and communicate with the rest of society, which is definitely a win!

 

It might seem that these mind-reading techniques could make for a scary future. It would certainly be a privacy breach if mind-reading MRI machines were released into society. But for now, the machines only work if you voluntarily suit up and step into the scanner yourself, and even then, you can still prevent the decoder from reading your thoughts. And since this mentalism technology looks likely to power speech devices for those unable to talk, these decoders seem to be heading for a bright future.

 

Works Cited

 

Devlin, Hannah. “AI Makes Non-invasive Mind-reading Possible by Turning Thoughts Into Text.” The Guardian, 1 May 2023, www.theguardian.com/technology/2023/may/01/ai-makes-non-invasive-mind-reading-possible-by-turning-thoughts-into-text.

Sanders, Laura. “Neuroscientists Decoded People’s Thoughts Using Brain Scans.” Science News, 1 May 2023, www.sciencenews.org/article/neuroscientists-decoded-thoughts-brain-scans.

Whang, Oliver. “A.I. Is Getting Better at Mind-Reading.” The New York Times, 1 May 2023, www.nytimes.com/2023/05/01/science/ai-speech-language.html.