AUSTIN, Texas — Artificial intelligence programs like ChatGPT can create original stories from a single writer’s idea. Now, a new system can build a story by reading a person’s mind. Researchers in Texas have created an AI system capable of “decoding” brain activity and putting into words what a person is focusing on in their mind. Simply put, AI may finally be able to read our thoughts.
The semantic decoder translates a person’s brain activity as they listen to a story or watch a video. While the technology may be unnerving to some, the study authors believe it could be a breakthrough for patients who are mentally conscious but incapable of speaking, such as stroke victims. Professor Stephen Hawking, for example, spent years communicating through a specialized wheelchair and voice machine that translated his movements into speech. The theoretical physicist lived for decades with amyotrophic lateral sclerosis (ALS), which robbed him of his ability to speak on his own.
This new work uses a transformer model which is similar to the ones which power OpenAI’s ChatGPT. The sophisticated chatbot can rapidly turn questions and ideas into well-formed scripts using its immense database.
The semantic decoder doesn’t need a brain implant to read someone’s mind. Instead, scientists measure brain activity using fMRI scans. To train the decoder to recognize each person’s brain patterns, participants listen to hours of podcasts while in the scanner. Once trained, the AI program can generate corresponding text from brain activity alone, essentially decoding what the person is hearing or imagining.
“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” says Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
This isn’t a word-for-word copy of your thoughts
Study authors say the semantic decoder provides the basic “gist” of what someone is thinking, and it is still imperfect. When researchers trained the AI on a participant’s brain activity, the decoder produced text that closely, and sometimes exactly, matched the meaning of the original words about half the time.
In experiments, a participant listening to a podcast saying, “I don’t have my driver’s license yet” had their thoughts translated as, “She has not even started to learn to drive yet.” Listening to the words, “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!’” ended up as, “Started to scream and cry, and then she just said, ‘I told you to leave me alone.’”
The team notes that their study also addresses the potential for people to abuse this technology and invade someone’s privacy. They add that each person who participated in the study willingly agreed to allow the decoder to read their thoughts.
“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” says Jerry Tang, a doctoral student in computer science, in a university release. “We want to make sure people only use these types of technologies when they want to and that it helps them.”
As for the system’s current drawbacks, researchers report that the decoder produced unintelligible text for participants who actively resisted the process. For example, when researchers had these individuals think about other things while in the scanner, the decoder became confused. The AI also had trouble reading people if it had not previously been trained on their brain activity.
Could scientists create a portable thought reader?
Along with listening to podcasts, the semantic decoder was also able to produce accurate stories while people watched four short, silent videos. In this case, the system created stories based on the thoughts participants had while watching these videos.
At the moment, scientists have no practical way of moving this technology out of the lab since it relies on people spending time in an fMRI machine. However, in the future, the team believes the system could work with portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
“fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” Huth concludes. “So, our exact kind of approach should translate to fNIRS.”
The study is published in the journal Nature Neuroscience.
How does artificial intelligence work?
Artificial intelligence (AI) refers to the development of computer systems that can perform tasks typically requiring human intelligence, such as reasoning, learning, problem-solving, perception, and natural language understanding. There are various approaches to AI, but the most prominent one in use today is machine learning (ML), particularly deep learning based on artificial neural networks.
Machine Learning (ML)
ML is a subset of AI that involves training algorithms to recognize patterns, learn from data, and make decisions or predictions. ML can be broadly categorized into three types: supervised learning, unsupervised learning, and reinforcement learning.
- Supervised learning: In supervised learning, the algorithm is trained on a labeled dataset, which consists of input-output pairs. The algorithm learns to map inputs to the correct outputs by minimizing the difference between predicted and actual output (error). Common supervised learning tasks include classification and regression.
- Unsupervised learning: In unsupervised learning, the algorithm is provided with an unlabeled dataset, meaning there’s no explicit output associated with each input. The goal is to find hidden patterns, relationships, or structures within the data. Examples of unsupervised learning tasks are clustering and dimensionality reduction.
- Reinforcement learning: Reinforcement learning focuses on training agents to make decisions by interacting with their environment. The agent learns through trial and error, receiving feedback in the form of rewards or penalties. The goal is to find a policy that maximizes the cumulative reward over time.
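The first of these categories can be sketched in a few lines of code: a toy supervised-learning setup that fits a line to labeled input-output pairs by repeatedly shrinking the prediction error. The data, learning rate, and iteration count below are illustrative choices, not anything from the study.

```python
# Minimal supervised-learning sketch: fit y = w*x + b to labeled
# (input, output) pairs by gradient descent on the squared error.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # inputs x with labels y = 2x + 1

w, b = 0.0, 0.0   # model parameters, start at zero
lr = 0.05         # learning rate: how far to step each iteration

for _ in range(2000):          # repeatedly nudge w and b to reduce the error
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y  # predicted output minus actual output
        grad_w += 2 * err * x  # derivative of squared error w.r.t. w
        grad_b += 2 * err      # derivative of squared error w.r.t. b
    w -= lr * grad_w / len(data)
    b -= lr * grad_b / len(data)

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The algorithm recovers the rule behind the labels (y = 2x + 1) purely from the input-output pairs, which is the essence of supervised learning.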
Deep Learning and Artificial Neural Networks
Deep learning is a subset of ML that involves using artificial neural networks (ANNs) to model complex patterns in data. ANNs consist of interconnected nodes or neurons, organized into layers. There are three main types of layers: input, hidden, and output.
- Input layer: The input layer receives the raw data and passes it to the subsequent layers.
- Hidden layers: These layers process and transform the data using weights and activation functions. Deep learning networks typically have multiple hidden layers, allowing them to learn hierarchical representations and model complex patterns.
- Output layer: The output layer produces the final predictions or classifications, which can be used for decision-making or further processing.
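A single forward pass through these three kinds of layers can be illustrated with a tiny hand-wired network of two inputs, two hidden neurons, and one output neuron. All weights below are arbitrary values chosen for the sketch.

```python
import math

def sigmoid(z):
    """A common activation function, squashing any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

inputs = [0.5, -1.0]                  # input layer: the raw data
w_hidden = [[0.1, 0.4], [-0.2, 0.3]]  # weights into each hidden neuron
w_out = [0.7, -0.5]                   # weights into the output neuron

# Hidden layer: each neuron takes a weighted sum of the inputs
# and passes it through the activation function.
hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in w_hidden]

# Output layer: a weighted sum of the hidden activations
# produces the network's final prediction.
output = sigmoid(sum(w * h for w, h in zip(w_out, hidden)))
print(round(output, 3))
```

Deep networks repeat the hidden-layer step many times over, which is what lets them build up the hierarchical representations described above.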
The learning process in ANNs involves adjusting the weights (parameters) between neurons to minimize the error between the predicted output and the actual output. This is achieved through backpropagation, which computes how much each weight contributed to the error, combined with optimization algorithms like gradient descent.
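As a rough illustration of that weight-adjustment loop, here is gradient descent applied via the chain rule (the core of backpropagation) to a single sigmoid neuron. The training example, learning rate, and iteration count are made-up values for the sketch.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 1.5, 0.8  # one made-up training example
w, b = 0.0, 0.0       # the parameters to learn
lr = 0.5              # learning rate

for _ in range(500):
    pred = sigmoid(w * x + b)        # forward pass: compute the prediction
    err = pred - target              # error between predicted and actual
    # Backward pass: the chain rule gives the gradient of the
    # squared error with respect to the pre-activation value z.
    dz = 2 * err * pred * (1 - pred)
    w -= lr * dz * x                 # gradient step on the weight
    b -= lr * dz                     # gradient step on the bias

print(round(sigmoid(w * x + b), 3))  # prediction is now close to 0.8
```

In a real deep network the same chain-rule step is applied layer by layer, backwards from the output, to update millions or billions of weights at once.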
Basically, artificial intelligence works by using algorithms, particularly machine learning and deep learning models, to identify patterns, learn from data, and make predictions or decisions. The core of this process is adapting the model’s parameters to better represent the underlying relationships within the data.