Decoding inner speech and hidden language

Dr Oiwi Parker Jones is a Postdoctoral Researcher in the Applied Artificial Intelligence Lab at the Oxford Robotics Institute and a Hugh Price Research Fellow at Jesus College, Oxford. His research focuses on decoding inner speech, a kind of thinking that often takes the form of an imagined conversation, with the aim of developing Neural Speech Prosthetics for paralysed patients.

How did you come to be at Jesus College?

It was luck. I had gone to London to do a postdoc, having previously studied at Oxford. After coming back, I saw the position advertised and jumped at the opportunity to apply. I tried to make a joke in the interview about how I wouldn't be the first 'Jones' in College, though I'd bet I would be the first Oiwi. Joking in interviews can be risky; I'm lucky it didn't end there!

Becoming a member of the College has been lucky too, because it has been such a good fit. I've heard it said that our community is about the size of Dunbar's number, which is proposed to be the number of active relationships that humans have evolved to sustain, or, to put it more colourfully, "the number of people you would not feel embarrassed about joining uninvited for a drink." There is something very special about how genuinely inviting the community at Jesus is, and how much it is like the small town in Hawaii that I came from. I say this as someone who has been a member of two other Oxford Colleges (and had positive experiences at both). Some prospective students recently asked me how to choose a college. More than anything, I told them to think of colleges as communities of people. For their studies, they will of course want to find the right tutors to match their interests. But I have also come to appreciate how much it matters for one's productivity to spend time with positive and supportive people.

I love learning about new academic disciplines. I'm also very lucky to have found a large number of people at Jesus with overlapping research interests, not only in the fields that I happen to work in but also in adjacent ones. I'm keen, for example, to speak with philosophers, historians and experts in policy-making to help think through the implications of creating new technologies. We're forced to specialise so intensely in our own fields that it's hard to know everything you should if you want to make a big impact on the world, which I think we're all trying to do.

Briefly explain your current research

My primary research interest is the development of a working Neural Speech Prosthetic: a machine to translate thoughts into audible speech or text. This area bridges the fields of Natural Language Processing (NLP), the branch of Computer Science that focuses on language and speech, and Imaging Neuroscience. The target population for Neural Speech Prosthetics is conscious patients who lack the motor control necessary to communicate fluently. This includes the nightmarish situation of "locked-in" patients, who are awake but trapped in paralysed bodies, sometimes unable even to control their eye movements. Stephen Hawking would have been a candidate for a Neural Speech Prosthetic. He was able to communicate with an external computer through a custom interface, but it was slow. The ultimate goal is to transcribe people's thoughts in real time.

Remarkable work has been done recently in California by Edward Chang's group, which has produced a system that can reconstruct audio from brain recordings made while experimental volunteers speak out loud. The system could even reconstruct intelligible audio from brain recordings made while the subjects mouthed speech silently. This is amazing, but unfortunately it won't produce a usable prosthetic for the target population of paralysed patients, because it requires patients to be able to move.

My research is therefore pushing into the area of imagined speech, or 'verbal thinking', which is familiar from ordinary acts like silent reading or thinking to oneself. These are the kinds of experiences that we imagine everyone has but which are hard to study objectively, though we do know a lot about them from sources like stream-of-consciousness narratives in literature.

Why did you choose this topic?

A lot of my past work has focused on using technology to help people. My NLP work has focused on endangered languages, specifically Hawaiian and other Polynesian languages, because I'm from Hawaii and grew up speaking a Polynesian language. As a neuroscientist, much of my work has focused on language recovery, for example trying to predict if, when and how patients who have suffered a stroke will recover speech. A consistent theme in my work has therefore been to focus on applications that help people, using the toolkits of machine learning and neuroimaging. Developing Neural Speech Prosthetics appeals to me because I think it requires a set of skills that I happen to have. I'm also very keen to be involved in creating this technology in a responsible way, one that is sensitive to its potential abuses.

On a personal note, I grew up speaking Hawaiian, which is an endangered language, so it's interesting to me that there's a theme of language recovery across all of my work. Incidentally, there's a strong Welsh and Hawaiian connection, given that the two languages are in similar social and linguistic situations. Hawaiian was severely endangered when I was younger, but my parents and community helped to revitalise it by raising a new generation of Hawaiian speakers, even creating language immersion schools for us. I grew up hyper-aware that language can be taken away but that it can also sometimes be given back. Working to give people back their voice has always been a core part of who I am.

What are the key findings of your research so far?

I've done work recently on resting-state brain data. These are data acquired non-invasively (using fMRI and MEG) while healthy volunteers let their minds wander. My colleagues and I have a series of papers, for example one in the journal NeuroImage: Clinical, where we show that we can accurately predict, from resting data alone, the kind of results you'd expect from explicit task data. That is, we can begin to interpret what people are doing while mind-wandering. This is building towards interpreting inner speech from 'spontaneous' brain activity.
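To make the idea of cross-decoding concrete, here is a minimal sketch, not the method of the papers themselves: a classifier is trained on labelled task data and then applied to unlabelled resting-state windows. The feature matrices, dataset sizes and labels below are synthetic placeholders; a real pipeline would extract features from preprocessed fMRI or MEG recordings.

```python
# Minimal sketch of cross-decoding: train a classifier on labelled
# task data, then apply it to unlabelled resting-state data.
# All data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_task, n_rest, n_features = 200, 50, 64            # hypothetical sizes
X_task = rng.normal(size=(n_task, n_features))      # task-evoked features
y_task = rng.integers(0, 2, size=n_task)            # e.g. speech vs. non-speech trials
X_rest = rng.normal(size=(n_rest, n_features))      # resting-state windows

# Fit on task data, where we know what the volunteer was doing...
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder.fit(X_task, y_task)

# ...then estimate, window by window, which state the resting activity
# most resembles.
rest_probs = decoder.predict_proba(X_rest)[:, 1]
print("P(speech-like state) for first 5 resting windows:", rest_probs[:5])
```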

What impact(s) are you expecting to emerge from your research?

A Neural Speech Prosthetic would be transformative for people who lack the motor control needed to speak. That's really what drives me. At the same time, other potential impacts could be less positive, and I worry about potential abuses of this technology. How do we make sure that the technology we create is developed as consciously and responsibly as possible? How do we maximise the benefits while guarding against the harms? My working hypothesis is that we start by opening it up to a broader conversation.

What are the upcoming milestones in your work?

I've received funding for the next round of experiments, which I plan to start running in the new year. In the meantime, I'm writing up papers from my last set of experiments. I'm also hoping to plan an event with some ethicists and policy people in College to talk about the social issues around the development of neural prosthetics, like privacy. What protections should we put in place ahead of time? And when should we be thinking about such things? It's a really exciting time.

What will you be focusing on in future research projects?

Something I haven’t talked about yet is my work on developing new methods in machine learning. A running theme in this work has been to find ways to use powerful methods with smaller datasets. There’s been a paradigm shift in machine learning over the past 15 years or so that means we can do amazing new things with large amounts of data (especially annotated or “labelled” data). An example is the speech recognition technology in your mobile phone, which might be trained on hundreds of hours of transcribed speech. The challenge is that for a lot of applications you don’t have that much data. My work has generally focused on developing powerful methods for smaller linguistic and neurological datasets, for example developing speech technology for a language like Hawaiian that doesn’t have the resources of English. Ultimately we want to have these types of methods on hand for use in a brain prosthetic, where the data can be very sparse. So it’s really about setting up the tools we’re going to need to make that work.
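As one illustration of what "powerful methods with smaller datasets" can look like in practice, here is a minimal sketch of transfer learning, a standard small-data strategy rather than a description of the specific methods used in this research. A model pretrained on a high-resource language is frozen, and only a small output head is trained on the low-resource data, so far fewer labelled examples are needed. The shapes, the 13-class output (Hawaiian has 13 phonemes) and the data are all placeholders.

```python
# Minimal sketch of transfer learning for a low-resource language.
# A stand-in "encoder" plays the role of a model pretrained on a
# high-resource language; only the small head is trained here.
import torch
import torch.nn as nn

torch.manual_seed(0)

encoder = nn.Sequential(            # stand-in for a pretrained encoder
    nn.Linear(40, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
)
for p in encoder.parameters():      # freeze the pretrained weights
    p.requires_grad = False

head = nn.Linear(128, 13)           # small task head, e.g. 13 Hawaiian phonemes

X = torch.randn(100, 40)            # only 100 labelled frames: a tiny dataset
y = torch.randint(0, 13, (100,))

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):             # only the head's few weights are updated
    logits = head(encoder(X))
    loss = loss_fn(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training loss:", float(loss))
```

Because the frozen encoder supplies most of the model's capacity, the head has very few trainable parameters, which is what makes training on a small dataset feasible; the same logic applies whether the scarce data are recordings of an endangered language or neural recordings from a single patient.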
