Human-centered computing

  • Article type: Journal Article
    Introduction: In this work we explore a potential approach to improve the human-robot collaboration experience by adapting cobot behavior based on natural cues from the operator. Methods: Inspired by the literature on human-human interactions, we conducted a Wizard-of-Oz study to examine whether a gaze towards the cobot can serve as a trigger for initiating joint activities in collaborative sessions. In this study, 37 participants engaged in an assembly task while their gaze behavior was analyzed. We employed a gaze-based attention recognition model to identify when the participants looked at the cobot. Results: Our results indicate that in most cases (83.74%), the joint activity is preceded by a gaze towards the cobot. Furthermore, across the entire assembly cycle, participants tend to look at the cobot mostly around the time of the joint activity. Given these results, a fully integrated system that triggers the joint action only when the gaze is directed towards the cobot was piloted with 10 volunteers, one of whom was characterized by high-functioning Autism Spectrum Disorder. Even though they had never interacted with the robot and did not know about the gaze-based triggering system, most of them successfully collaborated with the cobot and reported a smooth and natural interaction experience. Discussion: To the best of our knowledge, this is the first study to analyze the natural gaze behavior of participants working on a joint activity with a robot during a collaborative assembly task, and to attempt the full integration of an automated gaze-based triggering system.
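    The triggering logic lends itself to a compact illustration. Below is a minimal Python sketch of a dwell-time gaze trigger, assuming a frame-level attention-recognition model; `estimate_gaze_target`, the `cobot` interface, and the 0.5 s dwell threshold are hypothetical stand-ins for exposition, not the system described in the paper.

```python
import time

DWELL_THRESHOLD_S = 0.5  # assumed dwell time before a gaze counts as a trigger

def gaze_trigger_loop(gaze_frames, estimate_gaze_target, cobot):
    """Start the cobot's joint action only after the operator has looked
    at it continuously for DWELL_THRESHOLD_S seconds."""
    dwell_start = None
    for frame in gaze_frames:
        if estimate_gaze_target(frame) == "cobot":
            dwell_start = dwell_start or time.monotonic()
            if time.monotonic() - dwell_start >= DWELL_THRESHOLD_S:
                cobot.start_joint_action()  # hypothetical robot API call
                dwell_start = None          # re-arm for the next assembly cycle
        else:
            dwell_start = None              # gaze left the cobot; reset the timer
```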

  • Article type: Journal Article
    Parenting practices have a profound effect on children's well-being and are a core target of several psychological interventions for child mental health. However, there is only limited understanding in HCI so far about how to design socio-technical systems that could support positive shifts in parent-child social practices in situ. This paper focuses on parental socialisation of emotion as an exemplar context in which to explore this question. We present a two-step study, combining theory-driven identification of plausible design directions with co-design workshops with 22 parents of children aged 6-10 years. Our data suggest the potential for technology-enabled systems that aim to facilitate positive changes in parent-child social practices in situ, and highlight a number of plausible design directions to explore in future work.

  • Article type: Journal Article
    Goal setting is critical to achieving desired changes in life. Many technologies support defining and tracking progress toward goals, but these are just some parts of the process of setting and achieving goals. People want to set goals that are more complex than the ones supported through technology. Additionally, people use goal-setting technologies longitudinally, yet the understanding of how people's goals evolve is still limited. We study the collaborative practices of mental health therapists and clients for longitudinally setting and working toward goals through semi-structured interviews with 11 clients and 7 therapists who practiced goal setting in their therapy sessions. Based on the results, we create the Longitudinal Goal Setting Model in mental health, a three-stage model. The model describes how clients and therapists select among multiple complex problems, simplify complex problems to specific goals, and adjust goals to help people address complex issues. Our findings show collaboration between clients and therapists can support transformative reflection practices that are difficult to achieve without the therapist, such as seeing problems through new perspectives, questioning and changing practices, or addressing avoided issues.

  • Article type: Journal Article
    There has been a resurgence of applications focused on human activity recognition (HAR) in smart homes, especially in the field of ambient intelligence and assisted-living technologies. However, such applications present numerous significant challenges to any automated analysis system operating in the real world, such as variability, sparsity, and noise in sensor measurements. Although state-of-the-art HAR systems have made considerable strides in addressing some of these challenges, they suffer from a practical limitation: they require successful pre-segmentation of continuous sensor data streams prior to automated recognition, i.e., they assume that an oracle is present during deployment, and that it is capable of identifying time windows of interest across discrete sensor events. To overcome this limitation, we propose a novel graph-guided neural network approach that performs activity recognition by learning explicit co-firing relationships between sensors. We accomplish this by learning a more expressive graph structure representing the sensor network in a smart home in a data-driven manner. Our approach maps discrete input sensor measurements to a feature space through the application of attention mechanisms and hierarchical pooling of node embeddings. We demonstrate the effectiveness of our proposed approach by conducting several experiments on CASAS datasets, showing that the resulting graph-guided neural network outperforms the state-of-the-art method for HAR in smart homes across multiple datasets and by large margins. These results are promising because they push HAR for smart homes closer to real-world applications.
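    The core idea admits a compact illustration: learn a soft adjacency over sensors (the co-firing relationships), bias attention between sensor embeddings by that graph, and hierarchically pool node embeddings by learned scores before classifying. The PyTorch sketch below is a toy under assumed layer sizes and an assumed learned-adjacency parameterization, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GraphGuidedHAR(nn.Module):
    """Toy graph-guided classifier: a learned soft adjacency biases
    attention between sensor embeddings, and score-based top-k pooling
    condenses node embeddings before classification."""

    def __init__(self, n_sensors, d_model=64, n_activities=10):
        super().__init__()
        self.embed = nn.Linear(1, d_model)  # per-sensor reading -> embedding
        # learned "co-firing" graph over sensors, shared across examples
        self.adj_logits = nn.Parameter(torch.zeros(n_sensors, n_sensors))
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.pool_score = nn.Linear(d_model, 1)  # node scores for top-k pooling
        self.classify = nn.Linear(d_model, n_activities)

    def forward(self, x):                    # x: (batch, n_sensors) readings
        h = self.embed(x.unsqueeze(-1))      # (batch, n_sensors, d_model)
        adj = torch.sigmoid(self.adj_logits)
        attn_bias = -(1.0 - adj) * 1e4       # suppress weakly co-firing pairs
        h, _ = self.attn(h, h, h, attn_mask=attn_bias)
        scores = self.pool_score(h).squeeze(-1)       # (batch, n_sensors)
        k = max(1, h.size(1) // 2)                    # keep top half of nodes
        idx = scores.topk(k, dim=1).indices
        h = torch.gather(h, 1, idx.unsqueeze(-1).expand(-1, -1, h.size(-1)))
        return self.classify(h.mean(dim=1))  # (batch, n_activities) logits

# Example: 40 binary sensor events -> one of 10 activity classes
model = GraphGuidedHAR(n_sensors=40)
logits = model(torch.rand(8, 40))
```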

  • Article type: Journal Article
    We propose an interactive visual analytics tool, Vis-SPLIT, for partitioning a population of individuals into groups with similar gene signatures. Vis-SPLIT allows users to interactively explore a dataset and exploit visual separations to build a classification model for specific cancers. The visualization components reveal gene expression and correlation to assist specific partitioning decisions, while also providing overviews for the decision model and clustered genetic signatures. We demonstrate the effectiveness of our framework through a case study and evaluate its usability with domain experts. Our results show that Vis-SPLIT can classify patients based on their genetic signatures to effectively gain insights into RNA sequencing data, as compared to an existing classification system.

  • Article type: Journal Article
    Chronic pain is a leading cause of morbidity among children and adolescents, affecting 35% of the global population. Pediatric chronic pain management requires integrative health methods spanning physical and psychological subsystems through various mind-body interventions. Yoga therapy is one such method, known for its ability to improve the quality of life both physically and psychologically in chronic pain conditions. However, maintaining the clinical outcomes of personalized yoga therapy sessions at home is challenging due to fear of movement, lack of motivation, and boredom. Virtual Reality (VR) has the potential to bridge the gap between the clinic and home by motivating engagement and mitigating pain-related anxiety or fear of movement. We developed a multi-modal algorithmic architecture for fusing real-time 3D human body pose estimation models with custom-developed inverse kinematics models of physical movement to render biomechanically informed 6-DoF whole-body avatars capable of embodying an individual's real-time yoga poses within the VR environment. Experiments conducted among control participants demonstrated superior movement-tracking accuracy over existing commercial off-the-shelf avatar tracking solutions, leading to successful embodiment and engagement. These findings demonstrate the feasibility of rendering virtual avatar movements that embody complex physical poses such as those encountered in yoga therapy. This work moves the field one step closer to an interactive system facilitating at-home individual or group yoga therapy for children with chronic pain conditions.
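    One building block of such a pose-to-avatar pipeline can be shown compactly: an analytic inverse-kinematics solve that converts a tracked wrist position into shoulder and elbow angles for an avatar arm. The planar two-link case below is a deliberate simplification for exposition; the paper's custom IK models and 6-DoF fusion are richer, and the segment lengths here are assumed.

```python
import math

def two_link_ik(x, y, upper_len=0.30, fore_len=0.25):
    """Return (shoulder, elbow) angles in radians placing the wrist of a
    planar two-link arm at (x, y), measured from the shoulder origin."""
    d = math.hypot(x, y)
    # clamp to the reachable annulus so tracking noise never breaks acos
    d = min(max(d, abs(upper_len - fore_len) + 1e-9), upper_len + fore_len)
    cos_elbow = (d * d - upper_len**2 - fore_len**2) / (2 * upper_len * fore_len)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = math.atan2(y, x) - math.atan2(
        fore_len * math.sin(elbow), upper_len + fore_len * math.cos(elbow)
    )
    return shoulder, elbow

# Example: wrist tracked 40 cm forward and 10 cm above the shoulder
print(two_link_ik(0.40, 0.10))
```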

  • Article type: Journal Article
    Over the past century, multichannel fluorescence imaging has been pivotal in myriad scientific breakthroughs by enabling the spatial visualization of proteins within a biological sample. With the shift to digital methods and visualization software, experts can now flexibly pseudocolor and combine image channels, each corresponding to a different protein, to explore their spatial relationships. We thus propose psudo, an interactive system that allows users to create optimal color palettes for multichannel spatial data. In psudo, a novel optimization method generates palettes that maximize the perceptual differences between channels while mitigating confusing color blending in overlapping channels. We integrate this method into a system that allows users to explore multi-channel image data and compare and evaluate color palettes for their data. An interactive lensing approach provides on-demand feedback on channel overlap and a color confusion metric while giving context to the underlying channel values. Color palettes can be applied globally or, using the lens, to local regions of interest. We evaluate our palette optimization approach using three graphical perception tasks in a crowdsourced user study with 150 participants, showing that users are more accurate at discerning and comparing the underlying data using our approach. Additionally, we showcase psudo in a case study exploring the complex immune responses in cancer tissue data with a biologist.
    CCS Concepts: Human-centered computing → Visualization systems and tools.
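    The palette objective can be approximated in a few lines: choose channel colors that maximize the minimum pairwise distance in CIELAB space, a common proxy for perceptual difference. The hill-climbing sketch below is a toy stand-in for psudo's optimizer and omits its blending-confusion penalty for overlapping channels.

```python
import math
import random

def _srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_lab(rgb):
    """sRGB (0-1 floats) -> CIELAB under a D65 white point."""
    r, g, b = (_srgb_to_linear(c) for c in rgb)
    x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.9505
    y = (0.2126 * r + 0.7152 * g + 0.0722 * b) / 1.0000
    z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.0890
    f = lambda t: t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def min_pairwise_delta_e(palette):
    """Smallest CIE76 distance between any two palette colors."""
    labs = [srgb_to_lab(c) for c in palette]
    return min(math.dist(labs[i], labs[j])
               for i in range(len(labs)) for j in range(i + 1, len(labs)))

def optimize_palette(n_channels, iters=5000, seed=0):
    """Hill-climb: repeatedly re-randomize one color, keep improvements."""
    rng = random.Random(seed)
    rand_color = lambda: (rng.random(), rng.random(), rng.random())
    best = [rand_color() for _ in range(n_channels)]
    best_score = min_pairwise_delta_e(best)
    for _ in range(iters):
        cand = list(best)
        cand[rng.randrange(n_channels)] = rand_color()
        score = min_pairwise_delta_e(cand)
        if score > best_score:
            best, best_score = cand, score
    return best

palette = optimize_palette(n_channels=5)
```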

  • Article type: Journal Article
    Accurate depth estimation poses a significant challenge in egocentric Augmented Reality (AR), particularly for precision-dependent tasks in the medical field, such as needle or tool insertions during percutaneous procedures. Augmented Mirrors (AMs) provide a unique solution to this problem by offering additional non-egocentric viewpoints that enhance spatial understanding of an AR scene. Despite the perceptual advantages of using AMs, their practical utility has yet to be thoroughly tested. In this work, we present results from a pilot study involving five participants tasked with simulating epidural injection procedures in an AR environment, both with and without the aid of an AM. Our findings indicate that using AM contributes to reducing mental effort while improving alignment accuracy. These results highlight the potential of AM as a powerful tool for AR-enabled medical procedures, setting the stage for future exploration involving medical professionals.

  • Article type: Journal Article
    We describe a smartphone/smartwatch system to evaluate anomia in individuals with aphasia by using audio-recording-based ecological momentary assessments. The system delivers object-naming assessments to a participant's smartwatch, whereby a prompt signals the availability of images of these objects on the watch screen. Participants attempt to speak the names of the images that appear on the watch display out loud and into the watch as they go about their lives. We conducted a three-week feasibility study with six participants with mild to moderate aphasia. Participants were assigned to either a nine-item (four prompts per day with nine images) or single-item (36 prompts per day with one image each) ecological momentary assessment protocol. Compliance in recording an audio response to a prompt was approximately 80% for both protocols. Qualitative analysis of the participants' interviews suggests that the participants felt capable of completing the protocol, but opinions about using a smartwatch were mixed. We review participant feedback and highlight the importance of considering a population's specific cognitive or motor impairments when designing technology and training protocols.
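    The two protocols are easy to make concrete: both arms deliver 36 images per day, either as four prompts of nine images or 36 prompts of one image. The scheduler below jitters prompt times within equal windows across an assumed 12-hour waking day; the paper does not specify its scheduling algorithm, so this sketch is purely illustrative.

```python
import random

def daily_prompt_schedule(n_prompts, day_start_h=8.0, day_len_h=12.0, seed=None):
    """Jitter one prompt time uniformly within each of n_prompts equal windows."""
    rng = random.Random(seed)
    window = day_len_h / n_prompts
    return [day_start_h + i * window + rng.uniform(0.0, window)
            for i in range(n_prompts)]

# Both arms total 36 images/day: (prompt time in hours, images per prompt)
nine_item   = [(t, 9) for t in daily_prompt_schedule(4,  seed=1)]
single_item = [(t, 1) for t in daily_prompt_schedule(36, seed=1)]
```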

  • Article type: Journal Article
    Smoking is the leading cause of preventable death worldwide. Cigarette smoke includes thousands of chemicals that are harmful and cause tobacco-related diseases. To date, the causality between human exposure to specific compounds and the harmful effects is unknown. A first step in closing this knowledge gap has been measuring smoking topography, or how the smoker smokes the cigarette (puffs, puff volume, and duration). However, current gold-standard approaches to smoking topography involve expensive, bulky, and obtrusive sensor devices, creating unnatural smoking behavior and preventing their potential for real-time interventions in the wild. Although motion-based wearable sensors and their corresponding machine-learned models have shown promise in unobtrusively tracking smoking gestures, they are notorious for confounding smoking with other similar hand-to-mouth gestures such as eating and drinking. In this paper, we present SmokeMon, a chest-worn thermal-sensing wearable system that can capture spatial, temporal, and thermal information around the wearer and the cigarette all day to unobtrusively and passively detect smoking events. We also developed a deep-learning-based framework to extract puffs and smoking topography. We evaluate SmokeMon in both controlled and free-living experiments with a total of 19 participants, more than 110 hours of data, and 115 smoking sessions, achieving an F1-score of 0.9 for puff detection in the laboratory and 0.8 in the wild. By providing SmokeMon as an open platform, we enable measurement of smoking topography in free-living settings and its testing in the real world, with the potential to facilitate timely smoking cessation interventions.
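    For readers wondering how event-level scores such as the reported puff-detection F1 values are typically computed: a predicted puff counts as a true positive when it falls within a tolerance window of an as-yet-unmatched ground-truth puff. The sketch below uses an assumed 2-second tolerance; the paper's exact matching rule may differ.

```python
def event_f1(pred_times, true_times, tol_s=2.0):
    """Event-level F1: each prediction matches at most one ground-truth
    event within +/- tol_s seconds, and each ground-truth event matches once."""
    unmatched = sorted(true_times)
    tp = 0
    for p in sorted(pred_times):
        hit = next((t for t in unmatched if abs(t - p) <= tol_s), None)
        if hit is not None:
            tp += 1
            unmatched.remove(hit)
    precision = tp / len(pred_times) if pred_times else 0.0
    recall = tp / len(true_times) if true_times else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Example: three predicted puffs scored against four annotated puffs
print(event_f1([3.1, 10.0, 42.5], [3.0, 11.5, 20.0, 42.0]))
```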
