5 May 2023

Machine uses GPT to decode what you're thinking


One of the inventors of a new brain-computer interface that can report what wearers are thinking says he is more concerned about corporations abusing the technology than about dictators.

Alexander Huth of the University of Texas at Austin's Department of Computer Science is one of the authors of a new paper, published this week in Nature Neuroscience, describing how they were able to use an MRI machine and an AI system known as a large language model, similar to ChatGPT, to read people's thoughts.

Well, kind of. While it was not quite word-perfect, it got the "vibes" right, he told RNZ's Afternoons on Thursday.

"We used functional magnetic resonance imaging - this is a type of scanning we can do on an MRI scanner - that measures brain activity in thousands of locations in your brain. We combined that with some advanced AI algorithms and used that to essentially read out words from someone's brain - words that somebody is hearing."

Volunteers spent 16 hours inside an MRI machine while listening to podcasts. The decoder learned how each person's brain reacted to each word, and reversing that process using GPT-1 - a predecessor to today's GPT-4 and ChatGPT - allowed them to read people's thoughts.
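The decoding idea described above can be caricatured in a few lines of code. Everything below is an invented toy - the vocabulary, the "encoding model", and the random candidate search are stand-ins for illustration only; the real system works from fMRI responses and uses GPT-1 to propose candidate word sequences, keeping whichever candidate's predicted brain activity best matches the measured scan.

```python
# Toy sketch of "decoding by inverting an encoding model".
# All names and numbers here are invented for illustration; the real
# system uses fMRI data and a GPT language model, not this toy.
import random

random.seed(0)

VOCAB = ["she", "has", "not", "started", "to", "learn", "drive", "yet"]

def encode(words):
    """Toy 'encoding model': map a word sequence to a fake
    brain-activity vector (one value per vocabulary word)."""
    return [words.count(w) for w in VOCAB]

def similarity(a, b):
    """Negative squared distance: higher means a closer match."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def decode(measured, n_candidates=200, length=5):
    """Search candidate word sequences (a stand-in for language-model
    proposals) and keep the one whose predicted activity best matches
    the measured scan."""
    best, best_score = None, float("-inf")
    for _ in range(n_candidates):
        candidate = [random.choice(VOCAB) for _ in range(length)]
        score = similarity(encode(candidate), measured)
        if score > best_score:
            best, best_score = candidate, score
    return best

# "Measured" activity for a hidden sentence the decoder tries to recover.
hidden = ["she", "has", "not", "started", "yet"]
measured = encode(hidden)
guess = decode(measured)
print(" ".join(guess))
```

As in the experiment, the recovered sequence tends to match the gist rather than the exact words - the search only ever finds a candidate whose predicted activity resembles the measurement.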

The phrase "I don't have my driver's licence yet", for example, was translated by the machine as "she has not even started to learn to drive yet".

"It's not the same words - in fact, only one of the words is the same," said Huth, but nonetheless they were "quite shocked at the level of detail that we got out of this".

Rise of the machines

The technology is only expected to get better. Huth started work on this 15 years ago, but there have been massive leaps in AI over the past few years. The AI language system used in this experiment is five years old - ancient by AI standards. GPT-1 had 117 million 'parameters' in its model, while today's state-of-the-art models, such as GPT-4, are reported to have about a trillion - thousands of times more.

The goal of the research is to create technology that can restore speech to those who have lost it - for example, through degenerative conditions or strokes.

"These are disorders that leave people oftentimes able to think - they have normal cognition - but are unable to express themselves, which is deeply frustrating and isolating," said Huth.


Alexander Huth of the University of Texas at Austin's Department of Computer Science. Photo: University of Texas / Supplied

There have been growing concerns, some from experts, that AI is progressing so quickly it could pose a threat to humanity - and sooner than many people realise.

Huth said the current state of his technology is nothing to be worried about. Not only does it require bulky, multimillion-dollar equipment such as an MRI machine and hours of training, but patients - or victims - can render it useless simply by thinking about something else. Nor can it read memories - it can only guess at what the subject is thinking at that very second.

"There's other technologies in this space which can do similar things to ours - they just require brain surgery. I think if you're an authoritarian regime, doing brain surgery to someone isn't that high a bar. I think that line's already been crossed."

Also, a model trained on one person's mind cannot be transferred to another.

"On the one hand this is kind of a downside because it means it's harder to apply this technology - we want to use it to help people for example who have lost the ability to speak… but on the other hand, it's kind of good for mental privacy that you can't just take any person, put them in an MRI scanner and see what they're thinking."

Dark fate?

With technology advancing at a pace where that might become possible sooner than anyone expects, Huth said there was a potentially scarier prospect than the technology ending up in the hands of a dictator.

"Another is kind of, basically, capitalism. I think one thing that we can do in the US, in New Zealand, is we can establish legal protections for mental privacy - we can maybe enshrine this in law that you can't have your thoughts read without your consent.

"But these kinds of things, they often can be superseded by contracts, for example - so maybe your employer requires you to sign a contract that you need to be able to have your mind read in order to be employed. That I think is very bad, and that's the kind of thing we really want to publicly campaign against."

While some high-profile names in the tech industry, such as Elon Musk, have warned of the dangers of a too-powerful AI emerging, Huth said he was "more excited than concerned".

"I think the fears about AI apocalypse are wildly overblown. The more immediate concerns are the bigger ones, right? Is it going to put certain people out of work? ...

"I think really the question is, who is building these AIs and why are they building them? I think if it's being built by large industrial corporations for the purpose of enriching themselves, then this might be not necessarily good for the rest of us.

"But I don't know - I'm still just excited by the prospect of intelligent machines - it's something that I've been excited about since I was a little kid."
