Johannes Lee and colleagues published a paper entitled “Brain–computer interface control with artificial intelligence copilots” in Nature Machine Intelligence on 1 September 2025 that, at first blush, might seem of merely technical or popular-culture interest until one learns that it reports efforts to enhance the control of devices by people with disabilities. In essence, their study compared people’s performance in controlling cursors and robotic arms under two conditions: (a) using a non-invasive1 brain–computer interface (BCI) alone and (b) using the BCI with artificial intelligence (AI) features supplementing the analysis of the BCI data. The BCI+AI condition produced faster and more accurate control of the cursor and robotic arm for participants without motor impairments as well as for a participant with paralysis.
I thought that the demonstration looked pretty cool. Of course, I’m eager to learn about applications with real-world tasks and with children. Also, there are potential concerns about consumer satisfaction that make additional studies of interest: the participants reported a preference for the BCI without AI support. Is that a matter of familiarity (i.e., practice), the task, the participants’ personal characteristics, something else inherent in the AI support, or something altogether different?

Researchers actually are working on BCIs for children with disabilities. Although I haven’t tracked this literature closely, Dear Readers who are interested in learning more about what’s happening might want to check out the work of people like
Dion Kelly, Eli Kinney-Lang, Adam Kirton, and their colleagues at the University of Calgary Pediatric Stroke Program and Possibility Neurotechnologies
George Papanastasiou and colleagues (2020), whose review of research on BCI and children’s neurodevelopmental disorders can provide a starting place for interested readers.
Fabien Lotte, of the University of Bordeaux2 Inria Center, who has worked on employing instructional design concepts in neurofeedback (see also this)
This general area of research has fascinated me for decades. In the dim and dark reaches of my past (the 1990s), I harbor a too-brief memory of working with a UVA engineering professor on employing children’s movements to control remote devices. Randy Pausch (a fabulous academic who gave a speech, usually just called “The Last Lecture,”3 that went viral, becoming a mass-media phenomenon with stories carried by the Wall Street Journal, Time magazine, Oprah Winfrey’s TV show, a book, and lots of sources around Earth) had a lab just across the street from my office. We visited about his work. Imagine that one could put sensors on the arm and hand of a child with severe tetraplegic cerebral palsy with non-spastic involvement of the least affected arm. One could use the data from the sensors to create a map of the individual’s arm movements in space. Those maps could then be used to control other devices; for example, the movements might control the pan, tilt, and zoom of a camera (sketched below). Randy’s big hope was that he could map arm movements onto a representation of a machine generating phonemes, volume, pitch, and so on, so that the movements would essentially control artificial speech, lending new meaning to “talking with your hand (and arm).”
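To make that mapping idea a bit more concrete, here is a minimal sketch of my own (not code from Randy’s lab or anyone else’s): calibrate the reachable range of a child’s arm positions, normalize new sensor readings against that range, and translate the result into camera pan, tilt, and zoom commands. The sensor readings, calibration values, and camera ranges are all assumptions for illustration.

```python
# Hypothetical sketch: map a child's arm position (from a wearable sensor)
# onto camera pan/tilt/zoom commands. Names and ranges are illustrative only.

def calibrate(samples):
    """Record the reachable range of arm positions (x, y, z) for one child."""
    xs, ys, zs = zip(*samples)
    return {"x": (min(xs), max(xs)), "y": (min(ys), max(ys)), "z": (min(zs), max(zs))}

def normalize(value, lo, hi):
    """Scale a raw sensor reading into 0.0-1.0, clipped to the calibrated span."""
    if hi == lo:
        return 0.5
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def arm_to_camera(position, ranges):
    """Convert one (x, y, z) arm position into pan, tilt, and zoom commands."""
    pan = normalize(position[0], *ranges["x"]) * 180 - 90    # degrees, -90..90
    tilt = normalize(position[1], *ranges["y"]) * 90 - 45    # degrees, -45..45
    zoom = 1.0 + normalize(position[2], *ranges["z"]) * 9.0  # 1x..10x
    return pan, tilt, zoom

# Example: calibrate on a few recorded positions, then map a new reading.
ranges = calibrate([(0.1, 0.2, 0.3), (0.8, 0.9, 0.7), (0.4, 0.5, 0.5)])
print(arm_to_camera((0.6, 0.3, 0.5), ranges))
```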
Here’s the abstract from the article by Lee and colleagues (who are based at UCLA):
Motor brain–computer interfaces (BCIs) decode neural signals to help people with paralysis move and communicate. Even with important advances in the past two decades, BCIs face a key obstacle to clinical viability: BCI performance should strongly outweigh costs and risks. To significantly increase the BCI performance, we use shared autonomy, where artificial intelligence (AI) copilots collaborate with BCI users to achieve task goals. We demonstrate this AI-BCI in a non-invasive BCI system decoding electroencephalography signals. We first contribute a hybrid adaptive decoding approach using a convolutional neural network and ReFIT-like Kalman filter, enabling healthy users and a participant with paralysis to control computer cursors and robotic arms via decoded electroencephalography signals. We then design two AI copilots to aid BCI users in a cursor control task and a robotic arm pick-and-place task. We demonstrate AI-BCIs that enable a participant with paralysis to achieve 3.9-times-higher performance in target hit rate during cursor control and control a robotic arm to sequentially move random blocks to random locations, a task they could not do without an AI copilot. As AI copilots improve, BCIs designed with shared autonomy may achieve higher performance.
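To illustrate what “shared autonomy” can mean in this setting, here is a minimal sketch of my own, not the authors’ implementation: a noisy decoded cursor velocity is blended with a copilot’s velocity toward an inferred target. The target-inference rule, the blending weight alpha, and all variable names are assumptions for illustration; the paper’s actual decoder (a convolutional neural network plus a ReFIT-like Kalman filter) and its copilots are far richer.

```python
import numpy as np

# Minimal shared-autonomy sketch: blend the user's decoded cursor velocity with
# an AI copilot's velocity toward the most likely target. Illustrative only.

def infer_target(cursor, decoded_vel, targets):
    """Guess the intended target: the one best aligned with the decoded velocity."""
    scores = []
    for t in targets:
        direction = t - cursor
        norm = np.linalg.norm(direction) * np.linalg.norm(decoded_vel) + 1e-9
        scores.append(np.dot(direction, decoded_vel) / norm)  # cosine similarity
    return targets[int(np.argmax(scores))]

def shared_autonomy_step(cursor, decoded_vel, targets, alpha=0.5, gain=1.0):
    """Move the cursor one step using a blend of user intent and copilot assistance."""
    target = infer_target(cursor, decoded_vel, targets)
    copilot_vel = gain * (target - cursor) / (np.linalg.norm(target - cursor) + 1e-9)
    blended = (1 - alpha) * decoded_vel + alpha * copilot_vel
    return cursor + blended

# Example: a noisy decoded velocity pointing roughly at the second target.
cursor = np.array([0.0, 0.0])
targets = [np.array([5.0, 0.0]), np.array([0.0, 5.0])]
print(shared_autonomy_step(cursor, np.array([0.1, 0.9]), targets, alpha=0.6))
```

The weight alpha sets how much help the copilot provides; with alpha near 0 the user’s decoded command dominates, and with alpha near 1 the copilot effectively drives toward its guess of the goal.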
Also, see a report about the study written for an intelligent lay audience by Rachel Fieldhouse for Nature’s news section.
Reference
Lee, J. Y., Lee, S., Mishra, A., Yan, X., McMahan, B., Gaisford, B., Kobashigawa, C., Qu, M., Xie, C., & Kao, J. C. (2025). Brain–computer interface control with artificial intelligence copilots. Nature Machine Intelligence. https://doi.org/10.1038/s42256-025-01090-y (paywalled)
Footnotes
Non-invasive BCIs use EEG and similar sensors external to the individual. There are also partially invasive and invasive sensor systems that collect brain data from inside the person’s skull. See the Wikipedia article on brain–computer interface to learn more.
I perked up reading the name of this university. It’s in one of my all-time favorite wine neighborhoods. I’m quite fond of fine Bordeaux from Margaux, Pauillac, and other regions just a tad north of the city of Bordeaux.
If any Dear Readers would like to watch Randy’s “The Last Lecture,” there is a digitally remastered version on YouTube. The remastered version was created by Brian Parker, the same person whose team captured the original lecture on video. The wonderful Archive.org also has a PDF of Randy’s book.