One of the best-studied networks in neuroscience is the brain of a fruit fly, in particular, a part called the mushroom body. This structure analyzes sensory inputs such as odors, temperature, humidity, and visual data so that the fly can learn to distinguish friendly stimuli from dangerous ones.
Neuroscientists have long known how this section of the brain is wired. It consists of a set of cells called projection neurons that transmit the sensory information to a population of 2,000 neurons called Kenyon cells. The Kenyon cells are wired together to form a neural network capable of learning.
This is how fruit flies learn to avoid potentially hazardous sensory inputs — such as dangerous smells and temperatures — while learning to approach foodstuffs, potential mates, and so on.
But the power and flexibility of this relatively small network has long raised a curious question for neuroscientists: could it be re-programmed to tackle other tasks?
Now they have an answer, thanks to the work of Yuchan Liang at the Rensselaer Polytechnic Institute, the MIT-IBM Watson AI Lab, and colleagues. This team has hacked the fruit fly brain network to perform other tasks, such as natural language processing. It’s the first time a naturally occurring network has been commandeered in this way.
And this biological brain network is no slouch. Liang and the team say it matches the performance of artificial learning networks while using far fewer computational resources.
In Silico Network
The approach is relatively straightforward. The team began by using a computer program to recreate the network that mushroom bodies rely on — a number of projection neurons feeding data to about 2,000 Kenyon cells. The team then trained the network to recognize the correlations between words in text.
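The wiring described above can be sketched in a few lines. The following is a minimal, illustrative reconstruction, not the authors' actual code: it assumes sparse random connections from projection neurons to Kenyon cells, followed by a winner-take-all step that leaves only a small fraction of Kenyon cells active — the hallmark of mushroom-body processing. The input size, connection probability, and number of winners are made-up values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PN = 50    # projection neurons (input dimension; illustrative value)
N_KC = 2000  # Kenyon cells, matching the mushroom body's rough count
K = 32       # Kenyon cells left active after winner-take-all (assumed)

# Sparse random wiring: each Kenyon cell samples a few projection neurons.
W = (rng.random((N_KC, N_PN)) < 0.1).astype(float)

def kenyon_response(x):
    """Project an input onto the Kenyon cells, then apply winner-take-all:
    only the K most strongly driven cells remain active."""
    activation = W @ x
    winners = np.argsort(activation)[-K:]
    code = np.zeros(N_KC)
    code[winners] = 1.0
    return code

x = rng.random(N_PN)        # a toy sensory input vector
code = kenyon_response(x)   # sparse binary code over 2,000 Kenyon cells
```

The result is a high-dimensional but very sparse representation, which is what makes the network cheap to train and store.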
The task is based on the idea that a word can be characterized by its context, or the other words that usually appear near it. The idea is to start with a corpus of text and then, for each word, to analyze those words that appear before and after it.
In this way, machine learning systems can learn to predict a word in a sentence from the words that appear around it. A number of systems, such as BERT, use closely related context-prediction approaches to produce seemingly natural text. So Liang and the team taught the fly brain network to do the same thing.
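At its simplest, this kind of prediction can be done with raw counts. The toy model below (a deliberately stripped-down sketch, far simpler than BERT or the fly network) records which word follows each word in a corpus and predicts the most frequent follower:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows each word in the corpus."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for prev, nxt in zip(tokens, tokens[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower of `word`, or None."""
    if word in follows:
        return follows[word].most_common(1)[0][0]
    return None

corpus = ["the fly learns the odor",
          "the fly avoids the heat"]
model = train_bigrams(corpus)
```

Real systems replace these counts with learned representations, but the underlying task — guessing a word from its neighbors — is the same.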
It turns out that the natural network is pretty good at this, even though it evolved for an entirely different purpose. “We show that this network can learn semantic representations of words,” say Liang and colleagues.
In their work, they go on to say that the fruit fly brain network achieves performance comparable to that of existing approaches to natural language processing. And crucially, the biological network uses just a fraction of the computational resources. By that they mean it requires less training time and a smaller memory footprint.
That’s an interesting result. “We view this result as an example of a general statement that biologically inspired algorithms might be more compute efficient compared with their classical (non-biological) counterparts,” say Liang and colleagues.
The work raises a number of fascinating questions. One obvious conundrum is why the biological network is so much more efficient. Clearly, evolution will have played a role in selecting better networks in nature. But Liang and colleagues do not comment on the specific properties or architecture that make the network of Kenyon cells so efficient.
The work also raises the possibility that other biological networks can be commandeered in the same way. However, one potential problem is the difficulty neuroscientists have in characterizing the networks in more complex brains, such as mammalian ones. So it may be some time before the networks associated with mouse, dolphin, or human brains can be hacked in this way.