For years, scientists have known that reduced blood flow in the brain is a symptom of Alzheimer’s disease. More recent research has also shown that this reduced blood flow can be caused by clogged blood vessels — or “stalls.” And by reversing these stalls in mice, scientists were able to restore their memory.
The Cornell team hoped to fully understand the connection between stalls and Alzheimer’s through analyzing huge amounts of data that they’d generated using state-of-the-art microscopes. But as their work continued, they simply couldn’t analyze the data fast enough to make a difference for people dealing with Alzheimer’s today.
Each research question was taking a year on average to answer. The team was using machine learning, in which computer algorithms learn automatically from experience. But even as the team developed and tested algorithms to speed up their search for stalls, they couldn't push the computers above 85 percent accuracy. Finding these clogged blood vessels in the brains of mice was so critical that nothing less than 95 percent accuracy would do.
Machine learning capabilities are improving rapidly, but more often than not, computers still can’t do as well as humans. And in cases where high data accuracy is needed, the keen skills of citizen scientists — online volunteers who help analyze data — may be the only option.
A chance encounter, however, brought the researchers together with crowdsourcing experts at the Human Computation Institute. The team created a project called Stall Catchers and enlisted citizen scientists all over the world to scour the brain images and label every stall.
The effort has been a huge success, and along the way, machine learning algorithms have continued to play a role in preparing the Stall Catchers data for human analysis, including finding and outlining all the vessel segments to be analyzed by public volunteers. But the hard work — deciding whether blood vessels are flowing or stalled — has fallen solely on the hands and eyes of citizen scientists.
Now scientists are giving machines a second chance.
Machine learning research requires huge, labeled datasets in order to teach these models to make predictions, like whether or not a vessel is stalled. And after almost four years of running Stall Catchers, citizen scientists have applied millions of crowd-generated labels to over 500,000 vessel movies. This means that, for the first time, there's enough training data to give these systems a fresh chance to exceed the 85 percent accuracy level they topped out at four years ago.
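Many volunteer answers are typically combined into one training label per vessel movie. A minimal sketch of that idea, assuming simple majority voting (the function name, vote counts, and thresholds here are illustrative, not the project's actual pipeline):

```python
def aggregate_crowd_labels(votes, min_votes=5, threshold=0.5):
    """Majority-vote aggregation (illustrative sketch).

    votes: list of 0/1 answers (1 = "stalled") from citizen scientists.
    Returns (label, agreement) or None if there are too few votes to trust.
    """
    if len(votes) < min_votes:
        return None  # not enough answers yet; keep showing the movie
    stalled_fraction = sum(votes) / len(votes)
    label = 1 if stalled_fraction >= threshold else 0
    # agreement: how strongly the crowd leaned toward the winning answer
    agreement = max(stalled_fraction, 1 - stalled_fraction)
    return label, agreement

# Example: seven volunteers looked at one vessel movie.
print(aggregate_crowd_labels([1, 1, 1, 0, 1, 1, 0]))  # label 1, agreement ~0.71
```

Real crowdsourcing pipelines often weight voters by past accuracy rather than counting every answer equally, but the end product is the same: one label per movie, ready for model training.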
“If there is a job that machines can do, we think it would be unethical to waste volunteer human cognitive labor on that job, when there are other, more pressing societal needs that require the unique mental faculties of the magnificent human mind,” says Pietro Michelucci, who leads the Human Computation Institute.
The institute’s partner organization, Driven Data, has launched a machine learning challenge using Stall Catchers data to design new techniques to analyze blood vessels.
In the challenge, which will last until August 3, machine learning enthusiasts will compete for a $10,000 purse, donated by MathWorks, the company behind the MATLAB programming language and numeric computing environment.
Even today’s best machine-based systems probably can’t analyze the data as well as humans. However, machine learning models could still make a big difference by reliably analyzing the easiest blood vessels. That way, Stall Catchers players could focus their efforts on only the most challenging tasks.
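One common way to realize this division of labor is confidence-based triage: the model auto-labels only the vessels it is very sure about and routes everything else to people. A minimal sketch, with thresholds that are assumptions rather than Stall Catchers' actual settings:

```python
def triage(stall_probability, low=0.05, high=0.95):
    """Route one vessel movie based on the model's predicted
    probability that it is stalled (illustrative thresholds)."""
    if stall_probability <= low:
        return ("machine", "flowing")   # confidently clear vessel
    if stall_probability >= high:
        return ("machine", "stalled")   # confidently clogged vessel
    return ("human", None)              # uncertain: queue for volunteers

print(triage(0.99))  # ('machine', 'stalled')
print(triage(0.40))  # ('human', None)
```

Tightening the thresholds sends more movies to volunteers and keeps overall accuracy high; loosening them saves more human effort at the cost of trusting the model on harder cases.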
And by working together, humans and machines could soon achieve unprecedented speeds in analyzing Alzheimer’s research data.
Egle Marija Ramanauskaite is Citizen Science Coordinator at the Human Computation Institute, and Communications Director for the Stall Catchers project.