How optimistic should we be about the impact of artificial intelligence in a pandemic?

Scientists are exploring every possible option for help battling the coronavirus pandemic, and artificial intelligence represents an intriguing avenue. AI has been used to search for new molecules capable of treating Covid-19, to scan through lung CTs for signs of Covid-related pneumonia, and to aid the epidemiologists who tracked the disease's spread early on. The technology is even powering new tracking software that might help identify those walking around with a fever or catch people violating quarantine rules. But how much faith should people really have in these untested tools?
In a recent brief, Alex Engler, who studies AI at the Brookings Institution, warned that people should manage their expectations. Artificial intelligence can be helpful, he says, but it's important to be wary of tech companies making broad, unfounded claims about what AI can do, and to question whether these companies really have the data and expertise to ensure that the application of this technology is actually helpful. Ultimately, Engler argues that AI could be helpful on the margins, but it's nowhere near ready to replace human experts in the battle against Covid-19.
Just because we're in a pandemic doesn't mean that some of AI's greatest challenges (accuracy, bias, and the risk of exacerbating surveillance) have gone away. Engler warns that people need to question whether the companies touting this technology really have access to the information they would need to build it, and whether AI is even the right tool for many of the problems that Covid-19 has created.
The risks of overhyped artificial intelligence aren't new. But during a pandemic, when people are eager for quick solutions, the dangers of trusting an unproven technology are greater than ever.
The following interview has been edited for clarity and brevity.
Rebecca Heilweil
Starting off, can you explain how artificial intelligence is being used to address the Covid-19 pandemic?
Alex Engler
I mean, that's really kind of a fundamental question, right? We're seeing a huge number of claims about how AI is being used to aid the fight against coronavirus, but it's hard to tell which of those are really the valuable applications. And they fall into different categories.
Some are probably, frankly, just snake oil and might never happen. Some are maybe a good idea, but they're brand new, and we should be pretty careful in trusting them, especially if they haven't been robustly tested and haven't gotten out in the field yet. You might think of diagnosing pneumonia or coronavirus using X-rays or CT scans as an example of that.
And then there are some applications that are definitely useful. Maybe it's helping on the margin, but not fundamentally changing the field. So some epidemiological modeling uses artificial intelligence, but it's not the only part of the modeling software. It's not the only thing. It's working with subject matter experts; it's not an AI epidemiologist doing it all on its own. But the AI is helping on the margin …
Rebecca Heilweil
I think a lot of people don't necessarily understand why AI needs humans. We already have lots of health data, right? Aren't we just building off of what we already know?
Alex Engler
That's a great question, and it depends on the application. Basically, across the board, AI on its own is not helpful in these kinds of situations for a couple of reasons. One: We don't have endless, huge datasets about the spread of coronavirus, or of epidemics similar enough to this one. So we don't know enough to learn exclusively from historical data. That's the first reason. Some of this has to come from subject matter expertise, from things we're learning from experiments on a day-to-day basis.
You might also notice that the stakes of a lot of these decisions are really high. What if you're going to use AI to diagnose someone and you're wrong? Specifically, you might be concerned about a false negative: you say someone is healthy when they, in fact, have coronavirus. That's a pretty enormous mistake to make, and we want to be really careful about giving AI too much influence in situations like that.
That's the same sort of concern we have with the CT scan approach. We might be able to diagnose coronavirus with a CT scan at some point in the future, but the methods aren't robust enough yet. We're not sure they work well enough yet.
Another example is fever detection, where you want to see whether people have fevers using thermal imaging. You can be wrong the other way: you could think people have fevers when they don't. And based on that, you're going to let AI keep people out of a grocery store or an airport?
Rebecca Heilweil
What's the worst example of artificial intelligence that's been touted in response to Covid-19? You called some of it snake oil.
Alex Engler
I think the worst examples come not only from claims that are difficult to believe but also from those that have subtle and pernicious side effects. A lot of the time, AI has second-order consequences that can be easy to forget about.
So some people have suggested, in a little bit of news coverage and a little bit of corporate claims, that if you attach various sensors to drones, you can detect all sorts of things. The most ridiculous claims I saw were that drones could not only do thermal imaging to detect whether people have a fever but also get a sense of their respiratory rate and heart rate.
Maybe that's true, but I have a very hard time believing it, for some of the reasons I've already talked about. What's worse is that it also justifies a substantial surveillance opportunity, a mechanism of a surveillance state. That's where you really run into problems: you might justify a new level of surveillance that's imposing in public spaces, that maybe affects people's behavior, and that still can't do the task the AI is claiming it can do.
So that's probably the worst example, the one that makes me the most concerned. But there are different mechanisms, different ways to be concerned about this. The mortality rate predictions make me the most concerned about bias, for instance, and that's a very different perspective on what's potentially harmful.
Rebecca Heilweil
Most people have heard that AI can be biased, and that it can discriminate based on race, gender, or other factors. Can you explain what AI bias might look like in a pandemic?
Alex Engler
One of the most important examples of AI bias that we've seen is the Optum algorithm, a health care algorithm used by Optum, the data analytics subsidiary of UnitedHealthcare, to determine the risk of future health care needs.
What researchers discovered when they got access to the Optum algorithm was that it was very biased against African Americans, for reasons that weren't relevant to health but were more relevant to finances and socioeconomic status. Through both automated decisions and human-made decisions, it basically argued for less care for black Americans.
When you see that type of system, your default should be that there are biases in it until you rigorously evaluate it and show that they're not there. Probably you will find some, and probably you will have to do some mitigation.
So in the case of these early algorithms that people are using to evaluate the mortality risk of Covid-19, the likelihood that there are subtle biases is very high, especially when you look at things like biological characteristics, things called biomarkers. These might help you guess who is more likely to be seriously ill, but they can also be very misleading because of unaccounted-for signals.
Men, for instance, are more likely to be smokers, and they also show higher mortality risk. But if you didn't account for the fact that they were smoking, or that there's smoking in their medical history, your algorithm might show that all men are more at risk, and thus all men would get prioritized for care, hypothetically.
You have to be concerned if you're going to roll out these mortality-risk algorithms, which, for the record, can work and are valuable. But you'd have to be concerned about rolling them out so quickly that they include these sorts of pernicious biases in something as important and high-risk as health care allocation.
Rebecca Heilweil
Is there an application that sticks out to you as the most promising use of artificial intelligence in tackling the Covid-19 pandemic?
Alex Engler
I'm hopeful about two efforts. One is AlphaFold, which is DeepMind's protein-folding initiative. It is possible that those protein structure estimations are helpful for creating vaccines and also treatments. To DeepMind's credit, they did this immediately once they got the genetic makeup of Covid-19. They did estimations, and they publicly released those models. So it is possible, and I think very much worth keeping an eye on, that this effort might speed the development of a vaccine or a therapeutic antibody that helps undermine the damage of the virus. It's a little too early to tell. These are estimations, and the predictions need to be experimentally validated, but this is a new development, and it could be a very meaningful one.
There's another effort, an emerging area of AI, that involves using AI to analyze large numbers of academic papers about Covid-19. Because there are so many papers, the sheer amount of academic research being created, especially in fields like biomedicine, makes it very hard for anyone to read all of them or even to find all the ones that are relevant. So some people are taking a very large database of papers and seeing what they can discover about them.
I am hedged in my optimism here. I think this could do some useful things, like making it easier for researchers to find relevant papers and to categorize papers. I think it's unlikely that our vaccines, our solutions, or our core understanding of Covid-19 will come from that. But it could help in meaningful ways to organize what we know.
For the record, I think that in time, with less of an absurd turnaround period, you can see AI meaningfully help in medical imagery. There's tons of good news around AI and medical imagery. Maybe it can tell the difference between bacterial pneumonia and the pneumonia that's associated with Covid-19. Maybe, with really good thermal imaging, it can get closer to fever detection. It's not that these things are fundamentally impossible tasks. It's that it's worth approaching them with a skeptical, informed take rather than just taking the idea on its face: "Of course AI can do that."
Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.
