Elon Musk’s Neuralink has been making waves on the technology side of neural implants, but it hasn’t yet shown how we might actually use implants. For now, demonstrating the promise of implants remains in the hands of the academic community.
This story originally appeared on Ars Technica, a trusted source for technology news, tech policy analysis, reviews, and more. Ars is owned by WIRED’s parent company, Condé Nast.
This week, that community provided a rather impressive example of the promise of neural implants. Using an implant, a paralyzed individual managed to type out roughly 90 characters per minute simply by imagining that he was writing those characters out by hand.
Previous attempts to give paralyzed people typing capabilities via implants have involved presenting subjects with a virtual keyboard and letting them maneuver a cursor with their mind. The process is effective but slow, and it requires the user’s full attention: the subject has to track the cursor’s progress and decide when to perform the equivalent of a key press. It also requires the user to spend time learning to control the system.
But there are other possible routes to getting characters out of the brain and onto the page. Somewhere in the thought process behind writing, we form the intention to use a specific character, and an implant that tracks this intention could potentially work. Unfortunately, that process is not especially well understood.
Downstream of that intention, a decision is transmitted to the motor cortex, where it’s translated into actions. Again, there’s an intent stage, where the motor cortex determines it will form the letter (by typing or writing, for example), which is then translated into the specific muscle motions required to perform the action. These processes are much better understood, and they’re what the research team targeted for their new work.
Specifically, the researchers placed two implants in the premotor cortex of a paralyzed person. This area is thought to be involved in forming the intentions to perform movements. Catching these intentions is much more likely to produce a clear signal than catching the movements themselves, which are likely to be complex (any movement involves multiple muscles) and depend on context (where your hand is relative to the page you’re writing on, etc.).
With the implants in the right place, the researchers asked the participant to imagine writing letters on a page and recorded the neural activity as he did so.
Altogether, there were roughly 200 electrodes in the participant’s premotor cortex. Not all of them were informative for letter-writing. But for those that were, the authors performed a principal component analysis, which identified the features of the neural recordings that differed the most when various letters were imagined. When these recordings were projected onto a two-dimensional plot, it was obvious that the activity seen when writing a single character always clustered together. And physically similar characters—p and b, for example, or h, n, and r—formed clusters near each other.
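The dimensionality-reduction step described above can be sketched with a principal component analysis over synthetic firing-rate data. Everything here is illustrative: the array shapes, the noise level, and the three-character setup are assumptions for the sketch, not the study's actual pipeline.

```python
import numpy as np

# Illustrative only: synthetic "firing rates" for 3 imagined characters,
# 20 trials each, 50 informative electrodes (all shapes are assumptions).
rng = np.random.default_rng(0)
templates = rng.normal(size=(3, 50))            # one mean activity pattern per character
trials = np.concatenate(
    [t + 0.3 * rng.normal(size=(20, 50)) for t in templates]
)                                               # (60, 50) trial-by-electrode matrix

# PCA via SVD: center the data, then project onto the top 2 components,
# mirroring the two-dimensional plot described in the article.
centered = trials - trials.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:2].T                 # (60, 2) points, one per trial

# Trials of the same character should cluster: the spread within a cluster
# is small relative to the distance between cluster centers.
clusters = projected.reshape(3, 20, 2)
within = clusters.std(axis=1).mean()
between = np.linalg.norm(clusters.mean(axis=1)[0] - clusters.mean(axis=1)[1])
print(within < between)
```

In this toy setup the between-character differences dominate the variance, so the top two principal components separate the characters, which is what makes the clustering visible in two dimensions.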
(The researchers also asked the participant to imagine punctuation marks such as commas and question marks, and they used a > to indicate a space and a tilde for a period.)
Overall, the researchers found they could decipher the appropriate character with an accuracy of a bit over 94 percent, but the system required a relatively slow analysis after the neural data was recorded. To get things working in real time, the researchers trained a recurrent neural network to estimate the probability of a signal corresponding to each letter.
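In spirit, the real-time decoder is a recurrent network that consumes a stream of binned neural activity and, at each step, emits a probability for every character. A minimal forward pass of such a network might look like the sketch below; the dimensions, random weights, and 31-character alphabet are placeholders, not the study's trained model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Placeholder dimensions: 192 electrode channels, 64 hidden units,
# 31 output classes (26 letters plus punctuation and space symbols).
rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 192, 64, 31
W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))

def decode_stream(activity):
    """Run a vanilla RNN over binned neural activity of shape (T, n_in)
    and return per-timestep character probabilities of shape (T, n_out)."""
    h = np.zeros(n_hidden)
    probs = []
    for x in activity:
        h = np.tanh(W_in @ x + W_rec @ h)   # recurrent state update
        probs.append(softmax(W_out @ h))    # probability of each character
    return np.array(probs)

# 50 time bins of random (untrained) neural activity
probs = decode_stream(rng.normal(size=(50, n_in)))
print(probs.shape)                           # (50, 31)
print(np.allclose(probs.sum(axis=1), 1.0))   # each row is a valid distribution
```

A trained version of such a decoder would pick, at each moment, the character whose probability is highest, which is what allows output to appear on screen with only a fraction of a second of lag.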
Despite working with a relatively small amount of data (only 242 sentences’ worth of characters), the system worked remarkably well. The lag between the thought and a character appearing on screen was about half a second, and the participant was able to produce about 90 characters per minute, easily topping the previous record for implant-driven typing, which was about 25 characters per minute. The raw error rate was about 5 percent, and applying a system like a typing autocorrect could drop the error rate down to 1 percent.
The tests were all done with prepared sentences. Once the system was validated, however, the researchers asked the participant to type out free-form answers to questions. Here, the speed went down a bit (75 characters a minute) and errors went up to 2 percent after autocorrection, but the system still worked.