EMOKINE is a software package and dataset creation suite for emotional full-body movement research in experimental psychology, affective neuroscience, and computer vision. A computational framework, comprehensive instructions, a pilot dataset, observer ratings, and kinematic feature extraction code are provided to facilitate future dataset creation at scale. In addition, the EMOKINE framework outlines how complex movement sequences may advance emotion research. Traditionally, such research has often used emotional 'action'-based stimuli, like hand-waving or walking motions. Here, instead, a pilot dataset is provided with short dance choreographies, repeated several times by a dancer who expressed a different emotional intention at each repetition: anger, contentment, fear, joy, neutrality, and sadness. The dataset was simultaneously filmed professionally and recorded using XSENS® motion capture technology (17 sensors, 240 frames/second). Thirty-two statistics from 12 kinematic features were extracted offline, for the first time in one single dataset: speed, acceleration, angular speed, angular acceleration, limb contraction, distance to center of mass, quantity of motion, dimensionless jerk (integral), head angle (with regard to the vertical axis and to the back), and space (convex hull 2D and 3D). Average, median absolute deviation (MAD), and maximum value were computed as applicable. The EMOKINE software is applicable to other motion-capture systems and is openly available in the Zenodo Repository. Releases on GitHub include: (i) the code to extract the 32 statistics, (ii) a rigging plugin for Python for MVNX file conversion to Blender format (MVNX = output file of the XSENS® system), and (iii) Python-script-powered custom software to assist with blurring faces; the latter two are under GPLv3 licenses.
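To illustrate the kind of feature extraction described above, the sketch below computes one kinematic feature (speed) from a joint trajectory sampled at 240 frames/second and summarizes it with the three statistics named in the abstract (average, MAD, maximum). This is a minimal illustration under stated assumptions, not the EMOKINE implementation; the function names and the toy trajectory are hypothetical.

```python
import numpy as np

FPS = 240  # XSENS® recording rate reported in the abstract

def speed(positions, fps=FPS):
    """Frame-wise speed (m/s) from a (T, 3) array of joint positions.

    Approximates the derivative with finite differences between
    consecutive frames, scaled by the frame rate.
    """
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps

def summary_stats(series):
    """Average, median absolute deviation (MAD), and maximum value."""
    med = np.median(series)
    return {
        "average": float(np.mean(series)),
        "mad": float(np.median(np.abs(series - med))),
        "max": float(np.max(series)),
    }

# Toy trajectory (hypothetical): one second of motion along x at 1 m/s.
t = np.arange(0, 1, 1 / FPS)
pos = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
stats = summary_stats(speed(pos))
```

In the actual pipeline, a loop over the 12 features and their applicable statistics would yield the 32 values per recording mentioned in the abstract.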