Keywords: cross-modal; skeleton-based action recognition; transformer

MeSH: Humans; Semantics; Linguistics; Movement/physiology; Pattern Recognition, Automated/methods; Algorithms; Learning/physiology

Source: DOI:10.3390/s24154860 — PDF (PubMed)

Abstract:
Skeleton-based action recognition, renowned for its computational efficiency and robustness to lighting variations, has become a focal point in motion analysis. However, most current methods extract only global skeleton features, overlooking potential semantic relationships among partial limb motions. For instance, the subtle differences between actions such as "brush teeth" and "brush hair" are distinguished mainly by specific limb movements. Although combining limb movements provides a more holistic representation of an action, relying solely on skeleton points is inadequate for capturing these nuances. This motivates us to integrate fine-grained linguistic descriptions into the learning of skeleton features so as to capture more discriminative skeleton behavior representations. To this end, we introduce a new Linguistic-Driven Partial Semantic Relevance Learning framework (LPSR). We use state-of-the-art large language models to generate linguistic descriptions of local limb motions, which further constrain the learning of those local motions, and we also aggregate global skeleton point representations with the LLM-generated textual representations to obtain a more generalized cross-modal behavioral representation. On this basis, we propose a cyclic attentional interaction module to model the implicit correlations between partial limb motions. Extensive ablation experiments demonstrate the effectiveness of the proposed method, which also achieves state-of-the-art results.
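The abstract describes two computational ideas without implementation details: fusing partial-limb skeleton embeddings with LLM-generated text embeddings via cross-modal attention, and a "cyclic attentional interaction" among limb parts. The sketch below is purely illustrative and is not the authors' released code; all shapes, the cyclic shift, and the function names are assumptions chosen to make the two ideas concrete.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # scaled dot-product attention: rows of q attend over rows of k/v
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

# Hypothetical sizes: 5 limb parts, 5 description tokens, feature dim 64
rng = np.random.default_rng(0)
parts = rng.normal(size=(5, 64))   # partial-limb skeleton embeddings (assumed)
text  = rng.normal(size=(5, 64))   # embeddings of LLM-generated limb descriptions (assumed)

# Cross-modal step: each limb part attends to the textual descriptions
fused = cross_attention(parts, text, text)

# One possible reading of "cyclic" interaction: each part attends to a
# cyclically shifted ordering of the fused part features (assumption)
shifted = np.roll(fused, shift=1, axis=0)
out = cross_attention(fused, shifted, shifted)
print(out.shape)  # (5, 64)
```

The attention weights in each step form a row-stochastic matrix, so every limb part receives a convex combination of text (then neighboring-part) features; the cyclic shift is only one of several ways the module's part-to-part coupling could be realized.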