Keywords: anthropomorphism; blame; human-autonomy teaming; power distance orientation; shared tasks; status; trust

Source: DOI: 10.3389/frai.2024.1273350

Abstract:
If humans are to team with artificial teammates, factors that influence trust and shared accountability must be considered when designing agents. This study investigates the influence of anthropomorphism, rank, decision cost, and task difficulty on trust in human-autonomy teams (HAT) and how blame is apportioned if shared tasks fail. Participants (N = 31) completed repeated trials with an artificial teammate using a low-fidelity variation of an air-traffic control game. We manipulated anthropomorphism (human-like or machine-like), military rank of artificial teammates using three-star (superiors), two-star (peers), or one-star (subordinate) agents, the perceived payload of vehicles with people or supplies onboard, and task difficulty with easy or hard missions using a within-subject design. A behavioural measure of trust was inferred when participants accepted agent recommendations, and a measure of no trust when recommendations were rejected or ignored. We analysed the data for trust using binomial logistic regression. After each trial, blame was apportioned using a 2-item scale and analysed using a one-way repeated measures ANOVA. A post-experiment questionnaire obtained participants' power distance orientation using a seven-item scale. Possible power-related effects on trust and blame apportioning are discussed. Our findings suggest that artificial agents with higher levels of anthropomorphism and lower levels of rank increased trust and shared accountability, with human team members accepting more blame for team failures.
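The analyses named in the abstract (binomial logistic regression for the trust/no-trust decisions, a one-way repeated measures ANOVA for blame) can be illustrated with a minimal sketch. The Python code below is an assumed reconstruction, not the authors' analysis script: the data file name and the column names (trust, anthropomorphism, rank, payload, difficulty, blame, participant) are hypothetical, and the logistic model shown ignores the within-subject correlation structure for simplicity.

```python
# Minimal sketch of the analyses described in the abstract (assumed column
# names and file; not the authors' actual code).
import pandas as pd
import statsmodels.formula.api as smf
import pingouin as pg

# Hypothetical long-format trial data: one row per participant x trial,
# with trust coded 1 when the agent's recommendation was accepted and 0
# when it was rejected or ignored.
df = pd.read_csv("hat_trials.csv")  # assumed file

# Binomial logistic regression: trust (0/1) on the manipulated factors.
# (Simplified: a full analysis would account for repeated measures,
# e.g. via GEE or a mixed-effects model.)
trust_model = smf.logit(
    "trust ~ C(anthropomorphism) + C(rank) + C(payload) + C(difficulty)",
    data=df,
).fit()
print(trust_model.summary())

# One-way repeated measures ANOVA on the 2-item blame score, with one
# within-subject factor (rank is used here purely as an illustration).
blame_anova = pg.rm_anova(
    data=df, dv="blame", within="rank", subject="participant"
)
print(blame_anova)
```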