algorithm aversion

  • Article type: Journal Article
    Inspired by significant technical advancements, a rapidly growing stream of research explores human lay beliefs and reactions surrounding AI tools, which employ algorithms to mimic elements of human intelligence. This literature predominantly documents negative reactions to these tools or the underlying algorithms, often referred to as algorithm aversion or, alternatively, a preference for humans. This article proposes a third interpretation: people may be averse to their labels, but appreciative of their output. This perspective offers three core insights for how we study people's reactions to algorithms. Research would benefit from (1) carefully considering the labeling of AI tools, (2) broadening the scope of study to include interactions with these tools, and (3) accounting for their technical configuration.

  • Article type: Journal Article
    Background: While Large Language Models (LLMs) are considered positively with respect to technological progress and abilities, people are rather opposed to machines making moral decisions. But the circumstances under which algorithm aversion or algorithm appreciation are more likely to occur with respect to LLMs have not yet been sufficiently investigated. Therefore, the aim of this study was to investigate how texts with moral or technological topics, allegedly written either by a human author or by ChatGPT, are perceived.
    Methods: In a randomized controlled experiment, n = 164 participants read six texts, three of which had a moral and three a technological topic (predictor text topic). The alleged author of each text was randomly labeled either "ChatGPT" or "human author" (predictor authorship). We captured three dependent variables: assessment of author competence, assessment of content quality, and participants' intention to submit the text in a hypothetical university course (sharing intention). We hypothesized interaction effects, that is, we expected ChatGPT to score lower than alleged human authors for moral topics and higher than alleged human authors for technological topics, and vice versa.
    Results: We only found a small interaction effect for perceived author competence, p = 0.004, d = 0.40, but not for the other dependent variables. However, ChatGPT was consistently devalued compared to alleged human authors across all dependent variables: there were main effects of authorship for assessment of author competence, p < 0.001, d = 0.95; for assessment of content quality, p < 0.001, d = 0.39; as well as for sharing intention, p < 0.001, d = 0.57. There was also a small main effect of text topic on the assessment of text quality, p = 0.002, d = 0.35.
    Conclusion: These results are more in line with previous findings on algorithm aversion than with algorithm appreciation. We discuss the implications of these findings for the acceptance of the use of LLMs for text composition.
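
    The abstract above reports a 2 × 2 (alleged authorship × text topic) design evaluated through main and interaction effects with Cohen's d effect sizes. As a rough, hypothetical illustration only (not the paper's analysis, which used repeated measures across six texts per participant), the following Python sketch simulates a simplified between-subjects version of such a design and runs a two-way ANOVA plus a pooled-SD Cohen's d; the cell means, standard deviation, and per-cell sample size are assumptions.

    ```python
    # Hypothetical sketch: two-way ANOVA with interaction on simulated 2 x 2 data.
    # Assumed values throughout; this does not reproduce the study's repeated-measures model.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(42)
    n_per_cell = 41  # assumption: ~164 participants spread over the 4 cells

    rows = []
    for author in ("human", "ChatGPT"):
        for topic in ("moral", "technological"):
            mean = 4.0 if author == "human" else 3.4  # assumed devaluation of ChatGPT
            for rating in rng.normal(loc=mean, scale=1.0, size=n_per_cell):
                rows.append({"author": author, "topic": topic, "competence": rating})
    df = pd.DataFrame(rows)

    # Main effects of author and topic plus the author x topic interaction
    model = ols("competence ~ C(author) * C(topic)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # Cohen's d for the authorship main effect, using the pooled standard deviation
    human = df.loc[df["author"] == "human", "competence"]
    chatgpt = df.loc[df["author"] == "ChatGPT", "competence"]
    pooled_sd = np.sqrt((human.var(ddof=1) + chatgpt.var(ddof=1)) / 2)
    print("Cohen's d (authorship):", (human.mean() - chatgpt.mean()) / pooled_sd)
    ```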

  • Article type: Journal Article
    In recent times, ChatGPT has garnered significant interest from the public, sparking a range of reactions that encompass both aversion and appreciation. This paper delves into the paradoxical attitudes of individuals towards ChatGPT, highlighting the simultaneous existence of algorithmic aversion and appreciation. A comprehensive analysis is conducted from the vantage points of psychology and algorithmic decision-making, exploring the underlying causes of these conflicting attitudes from three dimensions: self-performance, task types, and individual factors. Subsequently, strategies to reconcile these opposing psychological stances are proposed, delineated into two categories: flexible coping and inflexible coping. In light of the ongoing advancements in artificial intelligence, this paper posits recommendations for the attitudes and actions that individuals ought to adopt in the face of artificial intelligence. Regardless of whether one exhibits algorithm aversion or appreciation, the paper underscores that coexisting with algorithms is an inescapable reality in the age of artificial intelligence, necessitating the preservation of human advantages.

  • Article type: Journal Article
    Thinking about God promotes greater acceptance of artificial intelligence (AI)-based recommendations. Eight preregistered experiments (n = 2,462) reveal that when God is salient, people are more willing to consider AI-based recommendations than when God is not salient. Studies 1 and 2a to 2d demonstrate across a wide variety of contexts, from choosing entertainment and food to mutual funds and dental procedures, that God salience reduces reliance on human recommenders and heightens willingness to consider AI recommendations. Studies 3 and 4 demonstrate that the reduced reliance on humans is driven by a heightened feeling of smallness when God is salient, followed by a recognition of human fallibility. Study 5 addresses the similarity in mysteriousness between God and AI as an alternative, but unsupported, explanation. Finally, study 6 (n = 53,563) corroborates the experimental results with data from 21 countries on the usage of robo-advisors in financial decision-making.

  • Article type: Published Erratum
    [This corrects the article DOI: 10.3389/fpsyg.2022.779028.].

  • Article type: Journal Article
    People sometimes exhibit a costly preference for humans relative to algorithms, which is often defined as a domain-general algorithm aversion. I propose it is instead driven by biased evaluations of the self and other humans, which occurs more narrowly in domains where identity is threatened and when evaluative criteria are ambiguous.

  • Article type: Journal Article
    Resume screening assisted by decision support systems that incorporate artificial intelligence is currently undergoing strong development in many organizations, raising technical, managerial, legal, and ethical issues. The purpose of the present paper is to better understand the reactions of recruiters when they are offered algorithm-based recommendations during resume screening. Two polarized attitudes have been identified in the literature on users' reactions to algorithm-based recommendations: algorithm aversion, which reflects a general distrust of algorithms and a preference for human recommendations; and automation bias, which corresponds to overconfidence in the decisions or recommendations made by algorithmic decision support systems (ADSS). Drawing on results obtained in the field of automated decision support, we make the general hypothesis that recruiters trust human experts more than ADSS, because they distrust algorithms for subjective decisions such as recruitment. An experiment on resume screening was conducted on a sample of professionals (N = 694) involved in the screening of job applications. They were asked to study a job offer, then evaluate two fictitious resumes in a 2 × 2 factorial design with manipulation of the type of recommendation (no recommendation/algorithmic recommendation/human expert recommendation) and of the consistency of the recommendations (consistent vs. inconsistent recommendation). Our results support the general hypothesis of a preference for human recommendations: recruiters exhibit a higher level of trust toward human expert recommendations compared with algorithmic recommendations. However, we also found that recommendation consistency has a differential and unexpected impact on decisions: in the presence of an inconsistent algorithmic recommendation, recruiters favored the unsuitable over the suitable resume. Our results also show that specific personality traits (extraversion, neuroticism, and self-confidence) are associated with a differential use of algorithmic recommendations. Implications for research and HR policies are finally discussed.

  • Article type: Journal Article
    Over the coming years, AI could increasingly replace humans for making complex decisions because of the promise it holds for standardizing and debiasing decision-making procedures. Despite intense debates regarding algorithmic fairness, little research has examined how laypeople react when resource-allocation decisions are turned over to AI. We address this question by examining the role of perceived impartiality as a factor that can influence the acceptance of AI as a replacement for human decision-makers. We posit that laypeople attribute greater impartiality to AI than human decision-makers. Our investigation shows that people value impartiality in decision procedures that concern the allocation of scarce resources and that people perceive AI as more capable of impartiality than humans. Yet, paradoxically, laypeople prefer human decision-makers in allocation decisions. This preference reverses when potential human biases are made salient. The findings highlight the importance of impartiality in AI and thus hold implications for the design of policy measures.

  • Article type: Journal Article
    Algorithms have become increasingly relevant in supporting human resource (HR) management, but their application may entail psychological biases and unintended side effects on employee behavior. This study examines the effect of the type of HR decision (i.e., promoting or dismissing staff) on the likelihood of delegating these HR decisions to an algorithm-based decision support system. Based on prior research on algorithm aversion and blame avoidance, we conducted a quantitative online experiment using a 2 × 2 randomized controlled design with a sample of N = 288 highly educated young professionals and graduate students in Germany. This study partly replicates and substantially extends the methods and theoretical insights from a 2015 study by Dietvorst and colleagues. While we find that respondents exhibit a tendency to delegate presumably unpleasant HR tasks (i.e., dismissals) to the algorithm, rather than delegating promotions, this effect is highly conditional upon the opportunity to pretest the algorithm, as well as individuals' level of trust in machine-based and human forecasts. Respondents' aversion to algorithms dominates blame avoidance by delegation. This study is the first to provide empirical evidence that the type of HR decision affects algorithm aversion only to a limited extent. Instead, it reveals the counterintuitive effect of algorithm pretesting and the relevance of confidence in forecast models in the context of algorithm-aided HRM, providing theoretical and practical insights.

  • Article type: Journal Article
    Previous research has described physicians' reluctance to use computerized diagnostic aids (CDAs) but has never experimentally examined the effects of not consulting an aid that was readily available. Experiment 1. Participants read about a diagnosis made either by a physician or an auto mechanic (to control for perceived expertise). Half read that a CDA was available but never actually consulted; no mention of a CDA was made for the remaining half. For the physician, failure to consult the CDA had no significant effect on competence ratings for either the positive or negative outcome. For the auto mechanic, failure to consult the CDA actually increased competence ratings following a negative but not a positive outcome. Negligence judgments were greater for the mechanic than for the physician overall. Experiment 2. Using only a negative outcome, we included 2 different reasons for not consulting the aid and provided accuracy information highlighting the superiority of the CDA over the physician. In neither condition was the physician rated lower than when no aid was mentioned. Ratings were lower when the physician did not trust the CDA and, surprisingly, higher when the physician believed he or she already knew what the CDA would say. Finally, consistent with our previous research, ratings were also high when the physician consulted and then followed the advice of a CDA and low when the CDA was consulted but ignored. Individual differences in numeracy did not qualify these results. Implications for the literature on algorithm aversion and clinical practice are discussed.