Keywords: acceptance; agency; banking; human-AI interaction; survey; trust calibration; user perception

Source: DOI: 10.3389/frai.2023.1241290 (PDF via PubMed)

Abstract:
Calibrating appropriate trust of non-expert users in artificial intelligence (AI) systems is a challenging yet crucial task. To align subjective levels of trust with the objective trustworthiness of a system, users need information about its strengths and weaknesses. The specific explanations that help individuals avoid over- or under-trust may vary depending on their initial perceptions of the system. In an online study, 127 participants watched a video of a financial AI assistant with varying degrees of decision agency. They generated 358 spontaneous text descriptions of the system and completed standard questionnaires from the Trust in Automation and Technology Acceptance literature (including perceived system competence, understandability, human-likeness, uncanniness, intention of developers, intention to use, and trust). Comparisons between a high trust and a low trust user group revealed significant differences in both open-ended and closed-ended answers. While high trust users characterized the AI assistant as more useful, competent, understandable, and humanlike, low trust users highlighted the system's uncanniness and potential dangers. Manipulating the AI assistant's agency had no influence on trust or intention to use. These findings are relevant for effective communication about AI and trust calibration of users who differ in their initial levels of trust.
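To make the group comparison concrete, the sketch below shows one common way such high- vs. low-trust contrasts on closed-ended questionnaire scores are computed: split participants on the overall trust score and run per-item Welch t-tests. This is a minimal illustrative sketch, not the authors' actual analysis pipeline; the file name, column names, and the median-split/Welch-test choices are assumptions.

```python
# Minimal sketch (assumption: not the study's actual analysis code).
# Compares a high-trust and a low-trust group on several questionnaire items.
import pandas as pd
from scipy import stats

# Hypothetical file: one row per participant, one column per questionnaire scale.
df = pd.read_csv("questionnaire_scores.csv")

# Median split on the overall trust score (one of several possible grouping rules).
median_trust = df["trust"].median()
df["group"] = (df["trust"] > median_trust).map({True: "high_trust", False: "low_trust"})

# Hypothetical item names mirroring the scales listed in the abstract.
items = ["competence", "understandability", "humanlikeness",
         "uncanniness", "intention_to_use"]

for item in items:
    high = df.loc[df["group"] == "high_trust", item]
    low = df.loc[df["group"] == "low_trust", item]
    # Welch's t-test: does not assume equal variances across the two groups.
    t, p = stats.ttest_ind(high, low, equal_var=False)
    print(f"{item}: t = {t:.2f}, p = {p:.4f}")
```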