Keywords: cobot; human–robot collaboration; projection; virtual reality; visualization techniques

MeSH: Augmented Reality; Caregivers; Humans; Perception

Source: DOI:10.3390/s22030755

Abstract:
Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they "see" the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality. All of these communicate cobot perception by visually indicating which objects in the cobot's surroundings have been identified by its sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants with physical impairments. In a second remote experiment, we validated these findings with a broader, non-specific user base. Our findings show that Line, a lower-complexity visualization, results in significantly faster reaction times compared to Halo, and lower task load compared to both Wedge and Halo. Overall, users prefer Line as a more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area, established off-screen visualizations are not effective at communicating cobot perception, and Line presents an easy-to-understand alternative.