Preparing cancer patients for difficult decisions is an oncologist’s job. They don’t always remember to do it, however. At the University of Pennsylvania Health System, doctors are nudged to talk about a patient’s treatment and end-of-life preferences by an artificially intelligent algorithm that predicts the chances of death.
But it’s far from being a set-it-and-forget-it tool. A routine tech checkup revealed the algorithm decayed during the covid-19 pandemic, getting 7 percentage points worse at predicting who would die, according to a 2022 study.
There were likely real-life impacts. Ravi Parikh, an Emory University oncologist who was the study’s lead author, told KFF Health News the tool failed hundreds of times to prompt doctors to initiate that important discussion with patients who needed it, a conversation that can head off unnecessary chemotherapy.
He believes several algorithms designed to enhance medical care weakened during the pandemic, not just the one at Penn Medicine. “Many institutions are not routinely monitoring the performance” of their products, Parikh said.
Algorithm glitches are one facet of a dilemma that computer scientists and doctors have long acknowledged but that is starting to puzzle hospital executives and researchers: Artificial intelligence systems require consistent monitoring and staffing to put in place and keep working well.
In essence: You need people, and more machines, to make sure the ne …