Keywords: artificial intelligence, bias, fairness, machine learning, medical imaging, radiology

Source: DOI: 10.4274/dir.2024.242854

Abstract:
Although artificial intelligence (AI) methods hold promise for medical imaging-based prediction tasks, their integration into medical practice may present a double-edged sword due to bias (i.e., systematic errors). AI algorithms have the potential to mitigate cognitive biases in human interpretation, but extensive research has highlighted the tendency of AI systems to internalize biases within their models. This tendency, whether intentional or not, may ultimately lead to unintended consequences in the clinical setting, potentially compromising patient outcomes. This concern is particularly important in medical imaging, where AI has been embraced more rapidly and widely than in any other medical field. A comprehensive understanding of bias at each stage of the AI pipeline is therefore essential to the development of AI solutions that are not only less biased but also widely applicable. This international collaborative review effort aims to increase awareness within the medical imaging community of the importance of proactively identifying and addressing AI bias before its negative consequences are realized. The authors begin with the fundamentals of bias, explaining its different definitions and delineating its various potential sources. Strategies for detecting and identifying bias are then outlined, followed by a review of techniques for its avoidance and mitigation. Finally, the ethical dimensions, challenges encountered, and future prospects are discussed.