Many studies claiming artificial intelligence performs as well as humans are of poor quality, with a high risk of bias
THURSDAY, March 26, 2020 (HealthDay News) — In the field of medical imaging, there are few prospective studies and randomized trials of deep learning, according to a review published online March 25 in The BMJ.
Myura Nagendran, B.M., B.Ch., from Imperial College London, and colleagues conducted a systematic review of studies that use medical imaging for predicting absolute risk of existing disease or classification into diagnostic groups.
The researchers identified 10 records of deep learning randomized clinical trials; of these, two had been published and eight were ongoing. They also identified 81 nonrandomized clinical trials, of which only nine were prospective and just six were tested in a real-world setting. In the comparator groups, the median number of human experts was four. Full access to all datasets was unavailable in 95 percent of studies, and full access to code was unavailable in 93 percent. In 58 of the 81 studies, the overall risk of bias was high, and adherence to reporting standards was suboptimal. Despite these limitations, 61 of the 81 studies stated in their abstracts that artificial intelligence performance was at least comparable to, or better than, that of clinicians. Only 38 percent of the 81 studies stated that further prospective studies or trials were needed.
“At present, many arguably exaggerated claims exist about equivalence with or superiority over clinicians, which presents a risk for patient safety and population health at the societal level, with artificial intelligence algorithms applied in some cases to millions of patients,” the authors write.
Copyright © 2020 HealthDay. All rights reserved.