Human-agent interaction

  • Article type: Journal Article
    Ingroup favoritism and intergroup discrimination can be mutually reinforcing during social interaction, threatening intergroup cooperation and the sustainability of societies. In two studies (N = 880), we investigated whether promoting prosocial outgroup altruism would weaken the ingroup favoritism cycle of influence. Using novel methods of human-agent interaction via a computer-mediated experimental platform, we introduced outgroup altruism by (i) nonadaptive artificial agents with preprogrammed outgroup altruistic behavior (Study 1; N = 400) and (ii) adaptive artificial agents whose altruistic behavior was informed by the prediction of a machine learning algorithm (Study 2; N = 480). A rating task ensured that the observed behavior did not result from the participant's awareness of the artificial agents. In Study 1, nonadaptive agents prompted ingroup members to withhold cooperation from ingroup agents and reinforced ingroup favoritism among humans. In Study 2, adaptive agents were able to weaken ingroup favoritism over time by maintaining a good reputation with both the ingroup and outgroup members, who perceived agents as being fairer than humans and rated agents as more human than humans. We conclude that a good reputation of the individual exhibiting outgroup altruism is necessary to weaken ingroup favoritism and improve intergroup cooperation. Thus, reputation is important for designing nudge agents.
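    The abstract does not publish the adaptive agents' algorithm, so the following Python sketch is only a hypothetical illustration of the general mechanism it describes: an agent that predicts a partner's cooperation with a learned model and behaves altruistically toward outgroup members to build reputation. The features, toy data, and decision rule are all assumptions.

    ```python
    # Hypothetical sketch: an adaptive agent whose cooperation decision is
    # informed by an ML prediction of the partner's next move. Not the
    # study's actual algorithm.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Simulated interaction history: features = (partner cooperated last
    # round, partner is outgroup, agent cooperated last round);
    # label = partner cooperates next round (toy generative rule).
    X = rng.integers(0, 2, size=(200, 3))
    y = (X[:, 0] & (rng.random(200) > 0.3)).astype(int)

    model = LogisticRegression().fit(X, y)

    def agent_decision(partner_coop_last, partner_outgroup, agent_coop_last):
        """Cooperate when the partner is predicted likely to reciprocate;
        always extend altruism to outgroup members to build reputation."""
        p_reciprocate = model.predict_proba(
            [[partner_coop_last, partner_outgroup, agent_coop_last]])[0, 1]
        if partner_outgroup:
            return True  # unconditional outgroup altruism
        return p_reciprocate > 0.5

    print(agent_decision(1, 0, 1), agent_decision(0, 1, 0))
    ```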

  • Article type: Journal Article
    Virtual reality (VR) environments are increasingly popular for various applications, and the appearance of virtual characters is a critical factor that influences user behaviors. In this study, we aimed to investigate the impact of avatar and agent appearances on pre-touch proxemics in VR. To achieve this goal, we designed experiments utilizing three user avatars (man/woman/robot) and three virtual agents (man/woman/robot). Specifically, we measured the pre-touch reaction distances to the face and body, which are the distances at which a person starts to feel uncomfortable before being touched. We examined how these distances varied based on the appearances of avatars and agents and on user gender. Our results revealed that the appearance of avatars and agents significantly impacted pre-touch reaction distances. Specifically, those using a female avatar tended to maintain larger distances before their face and body were touched, and people also preferred greater distances before being touched by a robot agent. Interestingly, we observed no effects of user gender on pre-touch reaction distances. These findings have implications for the design and implementation of VR systems, as they suggest that avatar and agent appearances play a significant role in shaping users' perceptions of pre-touch proxemics. Our study highlights the importance of considering these factors when creating immersive and socially acceptable VR experiences.
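    As a rough illustration of the measurement described above, the sketch below shows how a pre-touch reaction distance might be logged in a trial loop. The functions get_hand_position(), get_face_position(), and stop_pressed() are hypothetical stand-ins for a VR engine's tracking API; the study's actual implementation is not specified.

    ```python
    # Minimal sketch of logging a pre-touch reaction distance in one trial.
    import math

    def run_trial(get_hand_position, get_face_position, stop_pressed):
        """Advance the agent's hand toward the face until the participant
        signals discomfort; return the distance at that moment (metres)."""
        while not stop_pressed():
            pass  # the engine moves the hand a small step each frame
        return math.dist(get_hand_position(), get_face_position())

    # Toy usage: the participant stops the hand 0.35 m from the face.
    frames = iter([False, False, True])
    d = run_trial(lambda: (0.0, 1.6, 0.35),
                  lambda: (0.0, 1.6, 0.0),
                  lambda: next(frames))
    print(f"pre-touch reaction distance: {d:.2f} m")
    ```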

  • Article type: Journal Article
    One of the possible benefits of robot-mediated education is that the robot becomes a catalyst between people and facilitates learning. In this study, the authors focused on an asynchronous active learning method mediated by robots. Active learning is believed to help students continue learning and develop the ability to think independently. The authors therefore improved the UGA (User Generated Agent) system that they had created for long-term active learning during COVID-19 to create an environment where children introduce books to each other via robots. The authors installed the robot in an elementary school and conducted an experiment lasting more than a year. As a result, it was confirmed that the children continued to use the robot without getting bored, even over a long period of time. The authors also analyzed how the children created content by examining the items that attracted particularly high numbers of views. In particular, the authors observed changes in children's behavior, such as spontaneous advertising activities, guidance from upperclassmen to lowerclassmen, collaboration with multiple people, and increased interest in technology, even under conditions where the new coronavirus was spreading and children's social interaction was inhibited.

  • Article type: Journal Article
    The past two decades have seen exponential growth in demand for wireless access that has been projected to continue for years to come. Meeting the demand would necessarily bring about greater human exposure to microwave and radiofrequency (RF) radiation. Our knowledge regarding its health effects has increased; nevertheless, these effects have become a focal point of current interest and concern. The cellphone and allied wireless communication technologies have demonstrated their direct benefit to people in modern society. However, as for their impact on the radiation health and safety of humans who are unnecessarily subjected to various levels of RF exposure over prolonged durations or even over their lifetime, the jury is still out. Furthermore, there are consistent indications from epidemiological studies and animal investigations that RF exposure is probably carcinogenic to humans. The principle of ALARA (as low as reasonably achievable) ought to be adopted as a strategy for RF health and safety protection.

  • Article type: Journal Article
    Natural and efficient communication with humans requires artificial agents that are able to understand the meaning of natural language. However, understanding natural language is non-trivial and requires proper grounding mechanisms to create links between words and corresponding perceptual information. Since the introduction of the "Symbol Grounding Problem" in 1990, many different grounding approaches have been proposed that employ either supervised or unsupervised learning mechanisms. The latter have the advantage that no other agent is required to learn the correct groundings, while the former are often more sample-efficient and accurate but require the support of another agent, such as a human or another artificial agent. Although combining both paradigms seems natural, it has not received much attention. Therefore, this paper proposes a hybrid grounding framework which combines both learning paradigms so that it is able to utilize support from a tutor, if available, while it can still learn when no support is provided. Additionally, the framework has been designed to learn in a continuous and open-ended manner so that no explicit training phase is required. The proposed framework is evaluated in two different grounding scenarios, its unsupervised grounding component is compared to a state-of-the-art unsupervised Bayesian grounding framework, and the benefit of combining both paradigms is evaluated through the analysis of different feedback rates. The obtained results show that the employed unsupervised grounding mechanism outperforms the baseline in terms of accuracy, transparency, and deployability, and that combining both paradigms increases both the sample efficiency and the accuracy of purely unsupervised grounding, while ensuring that the framework is still able to learn the correct mappings when no supervision is available.
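    The paper's framework is not reproduced in the abstract, so the following is only a minimal sketch of the hybrid idea under stated assumptions: words are grounded by cross-situational co-occurrence counts (unsupervised), and an optional tutor label provides a stronger supervised signal when available. The class, the weighting, and the example percepts are invented for illustration.

    ```python
    # Minimal sketch of hybrid (supervised + unsupervised) word grounding.
    from collections import defaultdict

    class HybridGrounder:
        def __init__(self):
            # counts[word][percept] accumulates grounding evidence.
            self.counts = defaultdict(lambda: defaultdict(float))

        def observe(self, word, percepts, tutor_percept=None):
            if tutor_percept is not None:       # supervised: trust the tutor
                self.counts[word][tutor_percept] += 5.0
            else:                               # unsupervised: spread credit
                for p in percepts:
                    self.counts[word][p] += 1.0 / len(percepts)

        def ground(self, word):
            m = self.counts[word]
            return max(m, key=m.get) if m else None

    g = HybridGrounder()
    g.observe("red", ["red", "cube"])                # ambiguous scene
    g.observe("red", ["red", "ball"])                # cross-situational evidence
    g.observe("cube", ["red", "cube"], tutor_percept="cube")  # tutor feedback
    print(g.ground("red"), g.ground("cube"))         # -> red cube
    ```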

  • Article type: Journal Article
    In Japan, many incidents regarding manga-like virtual agents have happened recently, in which critics have indicated that virtual agents used in public spaces are too sexual. A prior study defined this perception as "moe-phobia." In many cases, critics have pointed to agents' clothes. However, after verifying actual moe-phobia incidents, I hypothesize that these incidents are associated not only with the agents' clothes but also with the situations in which they are used. I conducted a three-factor, two-level experiment to verify this hypothesis. The independent variables were the agents' clothes, the usage scenario, and the gender of the participants. The dependent variables were the agents' trustworthiness, familiarity, likability, sexuality, and suitability as perceived by humans. I conducted the experiment with female and male groups and ran a three-way ANOVA for each dependent variable for each group. As a result, I observed different tendencies between the female and male groups regarding the impression of the agents; however, both groups showed the same tendency regarding perceived suitability. The female and male participants judged the agents' suitability not only from their clothes but also from the scenario.
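    The analysis itself is standard, so a short sketch may help: a three-way ANOVA over two-level factors, here written with statsmodels. The column names, factor levels, and simulated ratings are hypothetical; only the analysis structure mirrors the study.

    ```python
    # Sketch of a three-way ANOVA (clothes x scenario x gender) on a
    # simulated suitability rating.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(1)
    n = 160
    df = pd.DataFrame({
        "clothes":  rng.choice(["plain", "revealing"], n),
        "scenario": rng.choice(["bank", "entertainment"], n),
        "gender":   rng.choice(["female", "male"], n),
    })
    # Toy effect: revealing clothes lower suitability in the bank scenario.
    df["suitability"] = rng.normal(4, 1, n) - (
        (df.clothes == "revealing") & (df.scenario == "bank")) * 1.5

    model = ols("suitability ~ C(clothes) * C(scenario) * C(gender)", df).fit()
    print(sm.stats.anova_lm(model, typ=2))
    ```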

  • Article type: Journal Article
    In this paper, we discuss the development of artificial theory of mind as foundational to an agent's ability to collaborate with human team members. Agents imbued with artificial social intelligence (ASI) will require various capabilities to gather the social data needed to inform an artificial theory of mind of their human counterparts. We draw from social signals theorizing and discuss a framework to guide consideration of core features of artificial social intelligence. We discuss how human social intelligence, and the development of theory of mind, can contribute to the development of artificial social intelligence by forming a foundation on which to help agents model, interpret, and predict the behaviors and mental states of humans to support human-agent interaction. Artificial social intelligence will need the processing capabilities to perceive, interpret, and generate combinations of social cues to operate within a human-agent team. An artificial theory of mind affords a structure by which a socially intelligent agent could be imbued with the ability to model its human counterparts and engage in effective human-agent interaction. Further, modeling an artificial theory of mind can be used by an ASI to support transparent communication with humans, improving trust in agents so that humans may better predict future system behavior based on their understanding of, and trust in, artificial socially intelligent agents.

  • Article type: Journal Article
    In this paper, we present a study aimed at understanding whether the embodiment and humanlikeness of an artificial agent can affect people's spontaneous and instructed mimicry of its facial expressions. The study followed a mixed experimental design and revolved around an emotion recognition task. Participants were randomly assigned to one level of humanlikeness (between-subject variable: humanlike, characterlike, or morph facial texture of the artificial agents) and observed the facial expressions displayed by three artificial agents differing in embodiment (within-subject variable: video-recorded robot, physical robot, and virtual agent) and a human (control). To study both spontaneous and instructed facial mimicry, we divided the experimental sessions into two phases. In the first phase, we asked participants to observe and recognize the emotions displayed by the agents. In the second phase, we asked them to look at the agents' facial expressions, replicate their dynamics as closely as possible, and then identify the observed emotions. In both cases, we assessed participants' facial expressions with an automated Action Unit (AU) intensity detector. Contrary to our hypotheses, our results disclose that the agent that was perceived as the least uncanny, and most anthropomorphic, likable, and co-present, was the one spontaneously mimicked the least. Moreover, they show that instructed facial mimicry negatively predicts spontaneous facial mimicry. Further exploratory analyses revealed that spontaneous facial mimicry appeared when participants were less certain of the emotion they recognized. Hence, we postulate that an emotion recognition goal can flip the social value of facial mimicry as it transforms a likable artificial agent into a distractor. Further work is needed to corroborate this hypothesis. Nevertheless, our findings shed light on the functioning of human-agent and human-robot mimicry in emotion recognition tasks and help us to unravel the relationship between facial mimicry, liking, and rapport.
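    The paper's scoring pipeline is not given in the abstract; the sketch below shows one common, simple way to quantify facial mimicry from automated AU intensity traces (e.g., as produced by a detector such as OpenFace): correlating the agent's AU signal with the participant's signal at a short positive lag. The lag value and the toy signals are assumptions.

    ```python
    # Toy sketch: mimicry as lagged correlation of AU intensity traces.
    import numpy as np

    def mimicry_score(agent_au, participant_au, lag_frames=15):
        """Correlate the agent's AU intensity with the participant's
        response shifted by `lag_frames` (participant follows the agent)."""
        a = agent_au[:-lag_frames]
        p = participant_au[lag_frames:]
        return float(np.corrcoef(a, p)[0, 1])

    t = np.linspace(0, 2 * np.pi, 300)
    agent_smile = np.clip(np.sin(t), 0, None)           # e.g., AU12 intensity
    participant_smile = np.roll(agent_smile, 15) * 0.6  # delayed, weaker copy
    print(f"mimicry score: {mimicry_score(agent_smile, participant_smile):.2f}")
    ```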

  • Article type: Journal Article
    The authors evaluate the extent to which a user's impression of an AI agent can be improved by giving the agent the abilities of self-estimation, thinking time, and coordination of risk tendency. The authors modified the algorithm of an AI agent in the cooperative game Hanabi to have all of these traits and investigated the change in the user's impression through play with the user. The authors used a self-estimation task to evaluate the effect that the ability to read a user's intention had on the impression. They also show that an agent's thinking time influences the impression it makes. The authors further investigated the relationship between the concordance of the risk-taking tendencies of players and agents, the player's impression of agents, and the game experience. The results of the self-estimation task experiment showed that the more accurate the agent's self-estimation, the more likely the partner was to perceive humanity, affinity, intelligence, and communication skills in the agent. The authors also found that an agent that changes the length of its thinking time according to the priority of action gives the impression of being smarter than an agent with a normal thinking time or an agent that randomly changes its thinking time, provided the player notices the difference. The experiment regarding concordance of risk-taking tendencies shows that this concordance influences the player's impression of agents. These results suggest that game agent designers can improve the player's disposition toward an agent and the game experience by adjusting the agent's self-estimation level, thinking time, and risk-taking tendency according to the player's personality and inner state during the game.
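    As a hypothetical illustration of the thinking-time manipulation, the sketch below makes an agent deliberate longer when its top candidate actions score similarly (a proxy for a hard decision) and answer more quickly otherwise. The priority measure and delay values are invented for illustration, not taken from the paper.

    ```python
    # Sketch: scale an agent's visible "thinking time" with decision difficulty.
    import time

    def respond(actions_with_scores, base_delay=0.5, max_extra=2.0):
        """Pick the highest-scored action; wait longer when the top two
        scores are close (a proxy for a hard decision)."""
        ranked = sorted(actions_with_scores, key=lambda a: a[1], reverse=True)
        (best, s1), (_, s2) = ranked[0], ranked[1]
        difficulty = 1.0 - min(1.0, s1 - s2)   # close scores -> harder
        time.sleep(base_delay + max_extra * difficulty)
        return best

    print(respond([("play card 3", 0.9), ("hint red", 0.4)]))    # shorter wait
    print(respond([("play card 3", 0.51), ("hint red", 0.49)]))  # longer wait
    ```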

  • Article type: Journal Article
    The human-agent team, in which humans and autonomous agents collaborate to achieve a task, is a typical setting in human-AI collaboration. For effective collaboration, humans want to have an effective plan, but in realistic situations they might have difficulty calculating the best plan due to cognitive limitations. In this case, guidance from an agent that has many computational resources may be useful. However, if an agent guides human behavior explicitly, the human may feel that they have lost autonomy and are being controlled by the agent. We therefore investigated implicit guidance offered by means of an agent's behavior. With this type of guidance, the agent acts in a way that makes it easy for the human to find an effective plan for a collaborative task, and the human can then improve the plan. Since the human improves their plan voluntarily, he or she maintains autonomy. We modeled a collaborative agent with implicit guidance by integrating the Bayesian Theory of Mind into existing collaborative-planning algorithms and demonstrated through a behavioral experiment that implicit guidance is effective for enabling humans to maintain a balance between improving their plans and retaining autonomy.
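    The Bayesian Theory of Mind ingredient can be sketched compactly: maintain a posterior over the human's intended goal given observed actions, assuming the human acts noisily-rationally (softmax over action values). The goals, actions, value table, and rationality parameter below are illustrative assumptions, not the paper's model.

    ```python
    # Minimal sketch of Bayesian goal inference over observed human actions.
    import math

    GOALS = ["goal_A", "goal_B"]
    # Q[goal][action]: how useful each action is for each goal (toy values).
    Q = {"goal_A": {"left": 1.0, "right": 0.1},
         "goal_B": {"left": 0.2, "right": 0.9}}
    BETA = 3.0  # rationality: higher -> human more reliably picks best action

    def likelihood(action, goal):
        """Softmax (noisily-rational) probability of `action` under `goal`."""
        z = sum(math.exp(BETA * q) for q in Q[goal].values())
        return math.exp(BETA * Q[goal][action]) / z

    def update(posterior, action):
        """One Bayesian update of the goal posterior after observing `action`."""
        post = {g: posterior[g] * likelihood(action, g) for g in GOALS}
        total = sum(post.values())
        return {g: p / total for g, p in post.items()}

    belief = {g: 1 / len(GOALS) for g in GOALS}
    for act in ["left", "left"]:    # observe the human move left twice
        belief = update(belief, act)
    print(belief)                   # posterior shifts toward goal_A
    ```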
