Keywords: AI ethics; competence; human-AI trust; human-automation trust; interpersonal trust; trust measurement; trustworthy AI; warmth

Source: DOI:10.3389/fpsyg.2024.1382693   PDF (PubMed)

Abstract:
The rapid advancement of artificial intelligence (AI) has impacted society in many aspects. Alongside this progress, concerns such as privacy violation, discriminatory bias, and safety risks have also surfaced, highlighting the need for the development of ethical, responsible, and socially beneficial AI. In response, the concept of trustworthy AI has gained prominence, and several guidelines for developing trustworthy AI have been proposed. Against this background, we demonstrate the significance of psychological research in identifying factors that contribute to the formation of trust in AI. Specifically, we review research findings on interpersonal, human-automation, and human-AI trust from the perspective of a three-dimension framework (i.e., the trustor, the trustee, and their interactive context). The framework synthesizes common factors related to trust formation and maintenance across different trust types. These factors point out the foundational requirements for building trustworthy AI and provide pivotal guidance for its development that also involves communication, education, and training for users. We conclude by discussing how the insights in trust research can help enhance AI's trustworthiness and foster its adoption and application.