In 2019, the FDA proposed a new framework for regulating AI algorithms used for medical purposes, such as analyzing medical images. Traditionally, the agency has regulated algorithms that are considered "locked," meaning they do not change with use.
But the new generation of AI technology consists of machine-learning (ML) algorithms that learn continuously as they are used, such as a mammography algorithm that might gain confidence as it analyzes more data. Such technology may require a different approach than the one used for "locked" algorithms: applying the traditional approach to these adaptive algorithms could mean that cleared algorithms would have to be reviewed again and again, the FDA said at the time.
In the new paper, the research team led by James Andrew Smith, PhD, of the University of Oxford in the U.K., noted that the typical process for developing regulations is to issue a proposal, receive feedback, and modify the proposal based on that feedback. But the agency does not require commenters to disclose financial conflicts of interest.
Therefore, Smith's team reviewed all publicly available comments on the proposed policy submitted from April to August 2019. In all, 63% of commenters had financial ties to companies in the sector, while in 29% of comments the presence or absence of such ties could not be confirmed. Only 9% of commenters were confirmed to have no financial ties to industry.
"What concerns us about these findings is that we don't have a good idea of the impact of these ties and whether they might lead to bias in this specific context," said Smith in a statement from the University of Oxford. "Whether these observations about prevalence of ties hold true in the development of other regulations, we don't yet know, but there is a growing body of evidence showing the influence of industry throughout the medical research enterprise, and this paper adds to that."
What's more, the group found that 86% of the comments didn't cite any scientific literature.
"We were also concerned by the lack of scientific evidence used in comments, and the dominance of industry over academic commenters, despite AI/ML being a very active area of research," said co-author Gary Collins, PhD. "But we hope our findings will bring the FDA proposal to the attention of academics and encourage more of them to participate in the next round of feedback on the framework, and other regulatory frameworks, where academic input could be valuable."
Copyright © 2020 AuntMinnieEurope.com