human-automation interaction

  • Article type: Journal Article
    Carragher and Hancock (2023) investigated how individuals performed in a one-to-one face matching task when assisted by an Automated Facial Recognition System (AFRS). Across five pre-registered experiments, they found evidence of suboptimal aided performance, with AFRS-assisted individuals consistently failing to reach the level of performance the AFRS achieved alone. The current study reanalyses these data (Carragher and Hancock, 2023) to benchmark automation-aided performance against a series of statistical models of collaborative decision making, spanning a range of efficiency levels. Analyses using a Bayesian hierarchical signal detection model revealed that collaborative performance was highly inefficient, falling closest to the most suboptimal models of automation dependence tested. This pattern of results generalises previous reports of suboptimal human-automation interaction across a range of visual search, target detection, sensory discrimination, and numeric estimation decision-making tasks. The current study is the first to provide benchmarks of automation-aided performance in the one-to-one face matching task.
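    The benchmarking logic can be illustrated with a toy equal-variance signal detection sketch. The hit and false-alarm rates below are hypothetical (not values from the study), and the "optimal integration" benchmark shown is the textbook independent-observer bound, not the authors' full Bayesian hierarchical model:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Equal-variance Gaussian sensitivity: d' = z(HR) - z(FAR)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical performance of an unaided human and the AFRS alone.
d_human = d_prime(0.80, 0.20)   # ~1.68
d_afrs = d_prime(0.95, 0.05)    # ~3.29

# Two common benchmarks for aided (collaborative) sensitivity:
# optimal integration of two independent observers, and simply
# matching the better observer on its own.
d_optimal = (d_human**2 + d_afrs**2) ** 0.5
d_best_alone = max(d_human, d_afrs)

print(f"optimal: {d_optimal:.2f}, best alone: {d_best_alone:.2f}")
```

    Aided sensitivity falling below `d_best_alone`, as reported for AFRS-assisted individuals, is then direct evidence of inefficient collaboration.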

  • Article type: Journal Article
    Identifying contacts in a military context can require operators to integrate multiple cues and to adjust response criteria to event base rates. The current experiment tested whether support from a decision aid would improve these processes. Participants performed a signal identification task that required them to integrate cues displayed as visual scale readings. In a static condition, participants saw a single set of readings each trial. In dynamic conditions, readings were updated over time. Base rates of signal categories were unequal, requiring participants to adopt biased response criteria to maximise response accuracy. Participants worked with or without an aid that combined cues and base rate information in an ideal manner. Support from the aid pushed participants' response criteria towards optimal and improved the integration of dynamic cues. Decision aids may be especially useful when task demands require biased response criteria and when cues are sampled over time.
    Applied decision making often requires operators to gather and integrate multiple probabilistic cues. An experiment examined the information processing steps in multiple-cue decision tasks that could be improved by an automated decision aid. Statistically ideal aids improved operators’ response bias and information integration, although operator performance remained suboptimal.
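    The criterion shift an ideal observer should make under unequal base rates has a standard closed form in equal-variance signal detection theory; a minimal sketch with hypothetical sensitivity and base-rate values (the actual task combined several scale-reading cues):

```python
import math

def optimal_criterion(d, p_signal):
    """Ideal-observer criterion for equal payoffs: c = ln(beta) / d',
    where beta = P(noise) / P(signal)."""
    beta = (1 - p_signal) / p_signal
    return math.log(beta) / d

# Hypothetical values: moderate sensitivity, signals on 25% of trials.
c = optimal_criterion(2.0, 0.25)
print(round(c, 3))  # positive c: respond "signal" conservatively
```

    A statistically ideal aid of the kind described would place its criterion at exactly this point, while unaided operators typically shift only part of the way.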

  • Article type: Journal Article
    BACKGROUND: The exponential growth in computing power and the increasing digitization of information have substantially advanced the machine learning (ML) research field. However, ML algorithms are often considered "black boxes," and this fosters distrust. In medical domains, in which mistakes can result in fatal outcomes, practitioners may be especially reluctant to trust ML algorithms.
    OBJECTIVE: The aim of this study is to explore the effect of user-interface design features on intensivists' trust in an ML-based clinical decision support system.
    METHODS: A total of 47 physicians from critical care specialties were presented with 3 patient cases of bacteremia in the setting of an ML-based simulation system. Three conditions of the simulation were tested according to combinations of information relevancy and interactivity. Participants' trust in the system was assessed by their agreement with the system's prediction and a postexperiment questionnaire. Linear regression models were applied to measure the effects.
    RESULTS: Participants' agreement with the system's prediction did not differ according to the experimental conditions. However, in the postexperiment questionnaire, higher information relevancy ratings and interactivity ratings were associated with higher perceived trust in the system (P<.001 for both). The explicit visual presentation of the features of the ML algorithm on the user interface resulted in lower trust among the participants (P=.05).
    CONCLUSIONS: Information relevancy and interactivity features should be considered in the design of the user interface of ML-based clinical decision support systems to enhance intensivists' trust. This study sheds light on the connection between information relevancy, interactivity, and trust in human-ML interaction, specifically in the intensive care unit environment.

  • Article type: Journal Article
    By conducting a mixed-design experiment using simplified accident handling tasks performed by two-person teams, this study examined the effects of automation function and condition (before, during, and after malfunction) on human performance. Five distinct, non-overlapping functions related to the human information processing model were considered, and their malfunctions were introduced in a first-failure manner. The results showed that while automation malfunction impaired task performance, the performance degradation for information analysis was more severe than for response planning. In contrast to the other functions, situation awareness for response planning and response implementation tended to increase during the malfunction and decrease afterwards. In addition, decreased task performance reduced trust in automation, and malfunctions in earlier stages of information processing resulted in lower trust. Suggestions provided for design and training related to automation emphasise the importance of high-level cognitive support and the benefit of involving automation error handling in training.
    The effects of automation function and malfunction on human performance are important for design and training. The experimental results in this study revealed the significance of high-level cognitive support. Also, introducing automation error handling in training can be helpful in improving situation awareness of the teams.

  • Article type: Journal Article
    The concepts of automation trust and dependence have often been viewed as closely related and on occasion, have been conflated in the research community. Yet, trust is a cognitive attitude and dependence is a behavioural measure, so it is unsurprising that different factors can affect the two. Here, we review the literature on the correlation between trust and dependence. On average, this correlation across people was quite low, suggesting that people who are more trusting of automation do not necessarily depend upon it more. Separately, we examined experiments that explicitly manipulated the reliability of automation, finding that higher automation reliability increased trust ratings twice as fast as dependence behaviours. This review provides novel quantitative evidence that the two constructs are not strongly correlated. Implications of this work, including potential moderating variables, contexts where trust is still relevant, and considerations of trust measurement, are discussed.
    Trust in automation is a cognitive attitude, and dependence on automation is a physical behaviour. Therefore, it is important to understand the differences between the two, especially as they have been conflated in the literature. This review highlights the small average correlation in the literature between subjective trust and objective dependence, which suggests that measuring trust as dependence (or vice versa) may not be valid. This suggests, then, that practitioners should carefully consider how trust and dependence are being measured in a given context so as not to incorrectly conflate the two.
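    An "average correlation across studies" of the kind summarised here is commonly computed by Fisher-z averaging of per-study coefficients; a minimal sketch with made-up (r, n) pairs, not the review's actual data:

```python
import math

def mean_correlation(results):
    """Sample-size-weighted mean r via Fisher z (weights n - 3)."""
    num = sum((n - 3) * math.atanh(r) for r, n in results)
    den = sum(n - 3 for r, n in results)
    return math.tanh(num / den)

# Hypothetical per-study correlations between trust and dependence.
studies = [(0.10, 50), (0.25, 120), (0.05, 80)]
print(round(mean_correlation(studies), 3))
```

    The z-transform is used because r is bounded and skewed; averaging raw r values would bias the pooled estimate.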

  • Article type: Journal Article
    Increased levels of digitalisation present major opportunities for efficiency in the oil and gas industry but can also contribute to new risks and vulnerabilities. Based on developments in the industry, the Norwegian Ocean Industry Authority (HAVTIL) has in recent years pursued targeted knowledge development and follow-up of companies' digitalisation initiatives. This paper explores data collected through HAVTIL's audits of the development and use of automated systems within well operations. The analysis of the data resulted in the identification of five main topics related to the implementation of digital technologies: organisational complexity, follow-up and implementation of technology, analysis and documentation, user interface and alarms, and competence and training. Overall, the results support research findings within human factors and technology development, pointing out that there is a lack of focus on human factors in both development projects and in operations. In addition, this paper provides insight into how digitalisation initiatives are followed up and explores the results of the analysis in light of current developments in the industry.
    To investigate automated operations and human performance, three audits were performed by the Norwegian Ocean Industry Authority (HAVTIL). These audits have been used as case studies and the basis for this paper. Results from the analysis support research findings within the field of human factors and technology development, pointing out that there is a lack of focus on human factors in both development projects and in operations.

  • Article type: Journal Article
    OBJECTIVE: The primary purpose was to determine how trust changes over time when automation reliability increases or decreases. A secondary purpose was to determine how task-specific self-confidence is associated with trust and reliability level.
    BACKGROUND: Both overtrust and undertrust can be detrimental to system performance; therefore, the temporal dynamics of trust with changing reliability level need to be explored.
    METHODS: Two experiments used a dominant-color identification task, where automation provided a recommendation to users, with the reliability of the recommendation changing over 300 trials. In Experiment 1, two groups of participants interacted with the system: one group started with a 50% reliable system which increased to 100%, while the other used a system that decreased from 100% to 50%. Experiment 2 included a group where automation reliability increased from 70% to 100%.
    RESULTS: Trust was initially high in the decreasing group and then declined as reliability level decreased; however, trust also declined in the 50% increasing reliability group. Furthermore, when user self-confidence increased, automation reliability had a greater influence on trust. In Experiment 2, the 70% increasing reliability group showed increased trust in the system.
    CONCLUSIONS: Trust does not always track the reliability of automated systems; in particular, it is difficult for trust to recover once the user has interacted with a low reliability system.
    APPLICATION: This study provides initial evidence on the dynamics of trust in automation that gets better over time, suggesting that users should only start interacting with automation once it is sufficiently reliable.

  • Article type: Journal Article
    Vehicle automation is becoming more prevalent. Understanding how drivers use this technology and its safety implications is crucial. In a 6-8 week naturalistic study, we leveraged a hybrid naturalistic driving research design to evaluate driver behavior with Level 2 vehicle automation, incorporating unique naturalistic and experimental control conditions. Our investigation covered four main areas: automation usage, system warnings, driving demand, and driver arousal, as well as secondary task engagement. While on the interstate, drivers were advised to engage Level 2 automation whenever they deemed it safe, and they complied by using it over 70% of the time. Interestingly, the frequency of system warnings increased with prolonged use, suggesting an evolving relationship between drivers and the automation features. Our data also revealed that drivers were discerning in their use of automation, opting for manual control under high driving demand conditions. Contrary to common safety concerns, our data indicated no significant rise in driver fatigue or fidgeting when using automation, compared to a control condition. Additionally, observed patterns of engagement in secondary tasks like radio listening and text messaging challenge existing assumptions about automation leading to dangerous driver distraction. Overall, our findings provide new insights into the conditions under which drivers opt to use automation and reveal a nuanced behavioral profile that emerges when automation is in use.

  • Article type: Journal Article
    Real-world events like the COVID-19 pandemic and wildfires in Australia, Europe, and America remind us that the demands of complex operational settings are met by multiple, distributed teams interwoven with a large array of artefacts and networked technologies, including automation. Yet, current models of human-automation interaction, including those intended for human-machine teaming or collaboration, tend to be dyadic in nature, assuming individual humans interacting with individual machines. Given the opportunities and challenges of emerging artificial intelligence (AI) technologies, and the growing interest of many organisations in utilising these technologies in complex operations, we suggest turning to contemporary perspectives of sociotechnical systems for a way forward. We show how ideas of distributed cognition, joint cognitive systems, and self-organisation lead to specific concepts for designing human-AI systems, and propose that design frameworks informed by contemporary views of complex work performance are needed. We discuss cognitive work analysis as an example.
    Emerging developments in AI will pose challenges for the design of human-machine systems. Contemporary perspectives of sociotechnical systems, namely distributed cognition, joint cognitive systems, and self-organisation, have design implications that are unaccommodated by traditional methods. Cognitive work analysis may provide a way forward. Abbreviation: AI: Artificial intelligence.

  • Article type: Journal Article
    The aim of the current study is to investigate predictors and consequences of driver-initiated take-overs during automated evasion maneuvers. Literature on control transitions in automated driving has mainly focused on system-initiated take-overs. However, drivers may also initiate take-overs without take-over requests. To date, such driver-initiated take-overs have rarely been investigated. Our study addresses this research gap. In a driving simulator study with 61 participants, we investigated whether the criticality of highly dynamic evasion maneuvers and trust in automation affect the probability of driver-initiated take-overs. Criticality was manipulated via time headway (THW) and traction usage (TU). Trust was varied by manipulating automation reliability before the experimental trials. Consequences of driver-initiated take-overs in terms of collisions and lane departures were assessed. The results indicate that THW, TU, and trust affect the probability of driver-initiated take-overs. Moreover, the time it takes the automation to respond to an obstacle ahead by starting an evasion maneuver may be another relevant factor in predicting take-overs. After a take-over, drivers produced a number of unnecessary lane departures and collisions. These were independent of THW and TU. The study demonstrates that drivers are more likely to take over vehicle control during automated evasion maneuvers without take-over requests when criticality increases and trust in automation decreases. Such take-overs may be hazardous for traffic safety. Our findings help to design automated vehicles that avoid unnecessary take-overs in critical driving situations or de-escalate their consequences effectively, thus increasing traffic safety.