New standards for AI clinical trials will help spot snake oil and hype

September 11, 2020

The news: An international consortium of medical experts has introduced the first official standards for clinical trials that involve artificial intelligence. The move comes at a time when hype around medical AI is at a peak, with inflated and unverified claims about the effectiveness of certain tools threatening to undermine people’s trust in AI overall.

What it means: Announced in Nature Medicine, the British Medical Journal, and the Lancet, the new standards extend two sets of guidelines around how clinical trials are conducted and reported that are already used around the world for drug development, diagnostic tests, and other medical interventions. AI researchers will now have to describe the skills needed to use an AI tool, the setting in which the AI is evaluated, details about how humans interact with the AI, the analysis of error cases, and more.

Why it matters: Randomized controlled trials are the most trustworthy way to demonstrate the effectiveness and safety of a treatment or clinical technique. They underpin both medical practice and health policy. But their trustworthiness depends on whether researchers stick to strict guidelines in how their trials are carried out and reported. In the last few years, many new AI tools have been developed and described in medical journals, but their effectiveness has been hard to compare and assess because the quality of trial designs varies. In March, a study in the BMJ warned that poor research and exaggerated claims about how good AI was at analyzing medical images posed a risk to millions of patients.

Peak hype: A lack of common standards has also allowed private companies to crow about the effectiveness of their AI without facing the scrutiny applied to other types of medical intervention or diagnosis. For example, the UK-based digital health company Babylon Health came under fire in 2018 for announcing that its diagnostic chatbot was “on par with human doctors,” on the basis of a test that critics argued was misleading.

Babylon Health is far from alone. Developers have been claiming that medical AIs outperform or match human ability for some time, and the pandemic has sent this trend into overdrive as companies compete to get their tools noticed. In most cases, evaluation of these AIs is done in-house and in favorable conditions.

Future promise: That’s not to say AI can’t beat human doctors. In fact, the first independent evaluation of an AI diagnostic tool that outperformed humans in spotting cancer on mammograms was published only last month. The study found that a tool made by Lunit AI and used in certain hospitals in South Korea finished in the middle of the pack of radiologists it was tested against. It was even more accurate when paired with a human doctor. By making it easier to separate the good from the bad, the new standards will encourage this kind of independent evaluation, ultimately leading to better, and more trustworthy, medical AI.
