Keywords: app; application; applications; apps; artificial intelligence; bias; biases; diagnose; diagnosis; ethic; ethical; ethics; law; laws; legal; mHealth; mobile health; mobile phone; professional obligation; professional obligations; regulation; regulations; safety; self-diagnose; self-diagnosis; symptom checker; symptom checkers

MeSH: Humans; Mobile Applications; Artificial Intelligence / ethics; Telemedicine; COVID-19; Bias; SARS-CoV-2; Pandemics; Social Responsibility

Source: DOI: 10.2196/50344; PDF (PubMed)

Abstract:
The growing prominence of artificial intelligence (AI) in mobile health (mHealth) has given rise to a distinct subset of apps that provide users with diagnostic information based on their inputted health status and symptom information: AI-powered symptom checker apps (AISympCheck). While these apps may potentially increase access to health care, they raise consequential ethical and legal questions. This paper highlights 2 notable concerns with AI usage in the health care system: the further entrenchment of preexisting biases and problems of professional accountability. To provide an in-depth analysis of bias and of the complications surrounding professional obligations and liability, we focus on 2 mHealth apps as examples: Babylon and Ada. We selected these 2 apps because both were widely distributed during the COVID-19 pandemic and both make prominent claims about their use of AI for assessing user symptoms. First, bias entrenchment often originates in the data used to train AI systems, causing the AI to replicate those inequalities through a "garbage in, garbage out" phenomenon. Users of these apps are also unlikely to be demographically representative of the larger population, further distorting results. Second, professional accountability poses a substantial challenge given the vast diversity of AISympCheck apps and the lack of regulation surrounding their reliability. It is unclear whether these apps should be subject to safety reviews, who is responsible for app-mediated misdiagnosis, and whether physicians ought to recommend these apps. With the number of apps increasing rapidly, little guidance remains available for health professionals. Professional bodies and advocacy organizations have a particularly important role to play in addressing these ethical and legal gaps. Implementing technical safeguards within these apps could mitigate bias, AIs could be trained primarily on neutral data, and apps could be subject to a system of regulation that allows users to make informed decisions. In our view, it is critical that these legal concerns be considered throughout the design and implementation of these potentially disruptive technologies. Entrenched bias and professional responsibility, while operating in different ways, are ultimately exacerbated by the unregulated nature of mHealth.