Activity classification

  • Article type: Journal Article
    BACKGROUND: The more accurate we can assess human physical behaviour in free-living conditions the better we can understand its relationship with health and wellbeing. Thigh-worn accelerometry can be used to identify basic activity types as well as different postures with high accuracy. User-friendly software without the need for specialized programming may support the adoption of this method. This study aims to evaluate the classification accuracy of two novel no-code classification methods, namely SENS motion and ActiPASS.
    METHODS: A sample of 38 healthy adults (30.8 ± 9.6 years; 53% female) wore the SENS motion accelerometer (12.5 Hz; ±4 g) on their thigh during various physical activities. Participants completed standardized activities with varying intensities in the laboratory. Activities included walking, running, cycling, sitting, standing, and lying down. Subsequently, participants performed unrestricted free-living activities outside of the laboratory while being video-recorded with a chest-mounted camera. Videos were annotated using a predefined labelling scheme and annotations served as a reference for the free-living condition. Classification output from the SENS motion software and ActiPASS software was compared to reference labels.
    RESULTS: A total of 63.6 h of activity data were analysed. We observed a high level of agreement between the two classification algorithms and their respective references in both conditions. In the free-living condition, Cohen's kappa coefficients were 0.86 for SENS and 0.92 for ActiPASS. The mean balanced accuracy ranged from 0.81 (cycling) to 0.99 (running) for SENS and from 0.92 (walking) to 0.99 (sedentary) for ActiPASS across all activity types.
    CONCLUSIONS: The study shows that two available no-code classification methods can be used to accurately identify basic physical activity types and postures. Our results highlight the accuracy of both methods based on relatively low sampling frequency data. The classification methods showed differences in performance, with lower sensitivity observed in free-living cycling (SENS) and slow treadmill walking (ActiPASS). Both methods use different sets of activity classes with varying definitions, which may explain the observed differences. Our results support the use of the SENS motion system and both no-code classification methods.
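The agreement statistics reported above (Cohen's kappa and mean balanced accuracy) can be sketched in a few lines; the label streams below are illustrative stand-ins, not study data:

```python
# Hedged sketch: the two agreement metrics used in this validation study,
# computed from a predicted label stream against a reference label stream.
from sklearn.metrics import cohen_kappa_score, balanced_accuracy_score

reference = ["sit", "sit", "walk", "run", "cycle", "walk", "stand", "sit"]
predicted = ["sit", "sit", "walk", "run", "walk", "walk", "stand", "sit"]

# Cohen's kappa: chance-corrected agreement between predictions and reference.
kappa = cohen_kappa_score(reference, predicted)

# Balanced accuracy: mean per-class recall, robust to class imbalance.
bal_acc = balanced_accuracy_score(reference, predicted)

print(f"kappa={kappa:.2f}, balanced accuracy={bal_acc:.2f}")
# → kappa=0.83, balanced accuracy=0.80
```

Balanced accuracy is the natural companion metric here because free-living data are dominated by sedentary time, so plain accuracy would overstate performance.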

  • Article type: Journal Article
    BACKGROUND: The accuracy of the movement-determination software in current activity trackers is insufficient for scientific applications, and these trackers are also not open-source.
    OBJECTIVE: To address this issue, we developed an accurate, trainable, and open-source smartphone-based activity-tracking toolbox that consists of an Android app (HumanActivityRecorder) and 2 different deep learning algorithms that can be adapted to new behaviors.
    METHODS: We employed a semisupervised deep learning approach to identify the different classes of activity based on accelerometry and gyroscope data, using both our own data and open competition data.
    RESULTS: Our approach is robust against variation in sampling rate and sensor dimensional input and achieved an accuracy of around 87% in classifying 6 different behaviors on both our own recorded data and the MotionSense data. By contrast, when the dimension-adaptive neural architecture model was tested on our own data, its accuracy dropped to 26%, whereas our algorithm still performed at 63% on the MotionSense data used to train that model, demonstrating the superiority of our algorithm.
    CONCLUSIONS: HumanActivityRecorder is a versatile, retrainable, open-source, and accurate toolbox that is continually tested on new data. This enables researchers to adapt to the behavior being measured and achieve repeatability in scientific studies.
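One practical aspect of being robust to sampling-rate variation is normalizing input windows to a fixed length. The sketch below is a generic illustration of that idea (linear interpolation, assumed rates), not the paper's actual algorithm:

```python
# Hedged sketch: resampling a variable-rate accelerometer window to a
# fixed length so a model sees consistent input dimensions. The 50 Hz
# source rate and 256-sample target are assumptions for illustration.
import numpy as np

def resample_window(window: np.ndarray, target_len: int) -> np.ndarray:
    """Linearly resample (n_samples, 3) accelerometer data to (target_len, 3)."""
    n = window.shape[0]
    old_t = np.linspace(0.0, 1.0, n)
    new_t = np.linspace(0.0, 1.0, target_len)
    # Interpolate each axis independently, then stack back to (target_len, 3).
    return np.stack(
        [np.interp(new_t, old_t, window[:, axis]) for axis in range(3)], axis=1
    )

win_50hz = np.random.randn(100, 3)        # 2 s of x/y/z data at 50 Hz
win_fixed = resample_window(win_50hz, 256)
print(win_fixed.shape)  # (256, 3)
```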

  • Article type: Journal Article
    UNASSIGNED: Hip-worn accelerometers are commonly used, but data processed using the 100 counts per minute cut point do not accurately measure sitting patterns. We developed and validated a model to accurately classify sitting and sitting patterns using hip-worn accelerometer data from a wide age range of older adults.
    UNASSIGNED: Deep learning models were trained with 30-Hz triaxial hip-worn accelerometer data as inputs and activPAL sitting/nonsitting events as ground truth. Data from 981 adults aged 35-99 years from cohorts in two continents were used to train the model, which we call CHAP-Adult (Convolutional Neural Network Hip Accelerometer Posture-Adult). Validation was conducted among 419 randomly selected adults not included in model training.
    UNASSIGNED: Mean errors (activPAL - CHAP-Adult) and 95% limits of agreement were: sedentary time -10.5 (-63.0, 42.0) min/day, breaks in sedentary time 1.9 (-9.2, 12.9) breaks/day, mean bout duration -0.6 (-4.0, 2.7) min, usual bout duration -1.4 (-8.3, 5.4) min, alpha .00 (-.04, .04), and time in ≥30-min bouts -15.1 (-84.3, 54.1) min/day. Respective mean (and absolute) percent errors were: -2.0% (4.0%), -4.7% (12.2%), 4.1% (11.6%), -4.4% (9.6%), 0.0% (1.4%), and 5.4% (9.6%). Pearson's correlations were: .96, .92, .86, .92, .78, and .96. Error was generally consistent across age, gender, and body mass index groups with the largest deviations observed for those with body mass index ≥30 kg/m2.
    UNASSIGNED: Overall, these strong validation results indicate CHAP-Adult represents a significant advancement in the ambulatory measurement of sitting and sitting patterns using hip-worn accelerometers. Pending external validation, it could be widely applied to data from around the world to extend understanding of the epidemiology and health consequences of sitting.
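The error statistics above follow the Bland-Altman pattern (mean error with 95% limits of agreement), which can be sketched as follows; the daily sedentary-time values are invented for illustration:

```python
# Hedged sketch: mean error and 95% limits of agreement (Bland-Altman)
# between a criterion measure (activPAL) and a model estimate (CHAP-Adult).
# The per-participant min/day values below are made up for illustration.
import numpy as np

activpal = np.array([480.0, 510.0, 455.0, 600.0, 530.0])    # min/day
chap_adult = np.array([490.0, 505.0, 470.0, 610.0, 525.0])  # min/day

diff = activpal - chap_adult          # signed per-participant error
mean_error = diff.mean()
sd = diff.std(ddof=1)                 # sample standard deviation
loa_lower = mean_error - 1.96 * sd    # lower 95% limit of agreement
loa_upper = mean_error + 1.96 * sd    # upper 95% limit of agreement

print(f"mean error={mean_error:.1f} min/day, LoA=({loa_lower:.1f}, {loa_upper:.1f})")
```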

  • Article type: Journal Article
    Although machine learning techniques have been repeatedly used for activity prediction from wearable devices, accurate classification of 24-hour activity behaviour categories from accelerometry data remains a challenge. We developed and validated a deep learning-based framework for classifying 24-hour activity behaviours from wrist-worn accelerometers.
    Using an openly available dataset with free-living wrist-based raw accelerometry data from 151 participants (aged 18-91 years), we developed a deep learning framework named AccNet24 to classify 24-hour activity behaviours. First, the acceleration signal (x, y, and z-axes) was segmented into 30-second nonoverlapping windows, and signal-to-image conversion was performed for each segment. Deep features were automatically extracted from the signal images using transfer learning and transformed into a lower-dimensional feature space. These transformed features were then employed to classify the activity behaviours as sleep, sedentary behaviour, and light-intensity (LPA) and moderate-to-vigorous physical activity (MVPA) using a bidirectional long short-term memory (BiLSTM) recurrent neural network. AccNet24 was trained and validated with data from 101 and 25 randomly selected participants and tested with the remaining unseen 25 participants. We also extracted 112 hand-crafted time and frequency domain features from 30-second windows and used them as inputs to five commonly used machine learning classifiers, including random forest, support vector machines, artificial neural networks, decision tree, and naïve Bayes to classify the 24-hour activity behaviour categories.
    Using the same training, validation, and test data and window size, the classification accuracy of AccNet24 outperformed the accuracy of the other five machine learning classification algorithms by 16%-30% on unseen data.
    AccNet24, relying on signal-to-image conversion, deep feature extraction, and BiLSTM, achieved consistently high accuracy (>95%) in classifying the 24-hour activity behaviour categories as sleep, sedentary, LPA, and MVPA. The next generation of accelerometry analytics may rely on deep learning techniques for activity prediction.
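The first AccNet24 step, segmenting the triaxial signal into 30-second non-overlapping windows, can be sketched as follows (the 100 Hz sampling rate is an assumption for illustration):

```python
# Hedged sketch: non-overlapping 30-second windowing of a triaxial
# acceleration signal, the first stage of the pipeline described above.
import numpy as np

fs = 100                                 # samples per second (assumed)
window_s = 30                            # window length in seconds
signal = np.random.randn(fs * 300, 3)    # 5 minutes of x/y/z data

samples_per_window = fs * window_s
n_windows = signal.shape[0] // samples_per_window

# Trim any trailing partial window, then reshape into (windows, samples, axes).
windows = signal[: n_windows * samples_per_window].reshape(
    n_windows, samples_per_window, 3
)
print(windows.shape)  # (10, 3000, 3)
```

Each window would then go through signal-to-image conversion and deep feature extraction before the BiLSTM classifier.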

  • Article type: Journal Article
    The employment of machine learning algorithms on the data provided by wearable movement sensors is one of the most common methods to detect pets' behaviors and monitor their well-being. However, defining features that lead to highly accurate behavior classification is quite challenging. To address this problem, in this study we aim to classify six main dog activities (standing, walking, running, sitting, lying down, and resting) using high-dimensional raw sensor data. Data were received from the accelerometer and gyroscope sensors that are designed to be attached to the dog's smart costume. Once data are received, the module computes a quaternion value for each data point, which provides a handful of features for classification. Next, to perform the classification, we used several supervised machine learning algorithms, such as Gaussian naïve Bayes (GNB), decision tree (DT), K-nearest neighbor (KNN), and support vector machine (SVM). To evaluate performance, we finally compared the F-score accuracies of the proposed approach with the accuracy of the classic approach, where sensors' data are collected without computing the quaternion value and used directly by the model. Overall, 18 dogs equipped with harnesses participated in the experiment. The results of the experiment show a significantly enhanced classification with the proposed approach. Among all the classifiers, the GNB classification model achieved the highest accuracy for dog behavior. The behaviors are classified with F-score accuracies of 0.94, 0.86, 0.94, 0.89, 0.95, and 1, respectively. Moreover, the GNB classifier achieved 93% accuracy on average with the dataset consisting of quaternion values, compared with only 88% when the model used the raw sensor dataset.
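The per-behaviour F-scores reported above can be computed as in this sketch; the labels are illustrative stand-ins, not the study's dog data:

```python
# Hedged sketch: per-class F-scores for a multi-class behaviour classifier,
# the evaluation metric used in the study above. Labels are invented.
from sklearn.metrics import f1_score

classes = ["standing", "walking", "running", "sitting", "lying", "resting"]
y_true = ["standing", "walking", "running", "sitting", "lying", "resting",
          "standing", "walking"]
y_pred = ["standing", "walking", "running", "sitting", "lying", "resting",
          "walking", "walking"]

# average=None returns one F-score per class, in the order given by `labels`.
per_class_f1 = f1_score(y_true, y_pred, labels=classes, average=None)
for name, score in zip(classes, per_class_f1):
    print(f"{name}: {score:.2f}")
```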

  • Article type: Journal Article
    In this work, a novel method is presented for non-contact non-invasive physical activity monitoring, which utilizes a multi-axial inertial measurement unit (IMU) to measure activity-induced structural vibrations in multiple axes. The method is demonstrated in monitoring the activity of a mouse in a husbandry cage, where activity is classified as resting, stationary activity and locomotion. In this setup, the IMU is mounted in the center of the underside of the cage floor where vibrations are measured as accelerations and angular rates in the X-, Y- and Z-axis. The ground truth of activity is provided by a camera mounted in the cage lid. This setup is used to record 27.67 h of IMU data and ground truth activity labels. A classification model is trained with 16.17 h of data which amounts to 3880 data points. Each data point contains eleven features, calculated from the X-, Y- and Z-axis accelerometer data. The method achieves over 90% accuracy in classifying activity versus non-activity. Activity is monitored continuously over more than a day and clearly depicts the nocturnal behavior of the inhabitant. The impact of this work is a powerful method to assess activity which enables automatic health evaluation and optimization of workflows for improved animal wellbeing.
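A generic illustration of computing a small per-window feature vector from X/Y/Z accelerometer data, in the spirit of the eleven features mentioned above (the paper's exact feature set is not specified here, so these particular features are an assumption):

```python
# Hedged sketch: simple time-domain features per window of triaxial
# accelerometer data. The specific eleven features chosen here are
# illustrative, not the paper's actual feature definitions.
import numpy as np

def window_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 3) array of X/Y/Z accelerations -> 11 features."""
    mean = window.mean(axis=0)             # 3 features: per-axis mean
    std = window.std(axis=0)               # 3 features: per-axis std
    ptp = np.ptp(window, axis=0)           # 3 features: per-axis peak-to-peak
    magnitude = np.linalg.norm(window, axis=1)
    extra = np.array([magnitude.mean(), magnitude.std()])  # 2 magnitude features
    return np.concatenate([mean, std, ptp, extra])         # 11 features total

rng = np.random.default_rng(0)
feats = window_features(rng.normal(size=(250, 3)))
print(feats.shape)  # (11,)
```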

  • Article type: Journal Article
    The development of smartphone technologies has made computation abundant and prevalent. An activity recognition system using mobile sensors enables continuous monitoring of human behavior and assisted living. This paper proposes the mobile sensors-based Epidemic Watch System (EWS), leveraging AI models to recognize a new set of activities for effective social distance monitoring, estimation of the probability of infection, and COVID-19 spread prevention. The research focuses on recognition of user activities and behavior concerning risks and effectiveness in the COVID-19 pandemic. The proposed EWS consists of a smartphone application for COVID-19-related activity sensor data collection, feature extraction, activity classification, and alerts for spread prevention. We collect a novel dataset of COVID-19-associated activities such as hand washing, hand sanitizing, nose-eyes touching, and handshaking using the proposed EWS smartphone application. We evaluate several classifiers, such as random forests, decision trees, support vector machines, and long short-term memory (LSTM) networks, on the collected dataset and attain a highest overall classification accuracy of 97.33%. We provide contact tracing of COVID-19-infected persons using GPS sensor data. The EWS activity monitoring, identification, and classification system examines the infection risk posed to another person by a COVID-19-infected person. It detects some everyday activities between a COVID-19-infected person and a healthy person, such as sitting together, standing together, or walking together, to minimize the spread of pandemic diseases.

  • Article type: Journal Article
    OBJECTIVE: Semantic segmentation and activity classification are key components to create intelligent surgical systems able to understand and assist clinical workflow. In the operating room, semantic segmentation is at the core of creating robots aware of clinical surroundings, whereas activity classification aims at understanding OR workflow at a higher level. State-of-the-art semantic segmentation and activity recognition approaches are fully supervised, which is not scalable. Self-supervision can decrease the amount of annotated data needed.
    METHODS: We propose a new 3D self-supervised task for OR scene understanding utilizing OR scene images captured with ToF cameras. Contrary to other self-supervised approaches, where handcrafted pretext tasks are focused on 2D image features, our proposed task consists of predicting relative 3D distance of image patches by exploiting the depth maps. By learning 3D spatial context, it generates discriminative features for our downstream tasks.
    RESULTS: Our approach is evaluated on two tasks and datasets containing multiview data captured from clinical scenarios. We demonstrate a noteworthy improvement in performance on both tasks, specifically on low-regime data where utility of self-supervised learning is the highest.
    CONCLUSIONS: We propose a novel privacy-preserving self-supervised approach utilizing depth maps. Our proposed method shows performance on par with other self-supervised approaches and could be an interesting way to alleviate the burden of full supervision.
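The pretext target, the relative 3D distance between image patches recovered from a depth map, can be sketched with a pinhole back-projection; the intrinsics and depths below are made-up values, not from the paper:

```python
# Hedged sketch: back-project two image-patch centres using depth values
# and pinhole intrinsics, then take their relative 3D distance as a
# self-supervision target. All numeric values here are illustrative.
import numpy as np

fx = fy = 500.0           # focal lengths in pixels (assumed)
cx, cy = 320.0, 240.0     # principal point (assumed)

def backproject(u: float, v: float, depth: float) -> np.ndarray:
    """Pixel (u, v) with depth in metres -> 3D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

p1 = backproject(100, 120, 2.0)   # patch-centre depth read from the depth map
p2 = backproject(400, 300, 3.5)
relative_distance = np.linalg.norm(p1 - p2)
print(f"relative 3D distance: {relative_distance:.2f} m")
```

Because only depth maps (no RGB) are needed to form this target, the approach stays privacy-preserving, which matters in an operating-room setting.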

  • Article type: Journal Article
    Improper manual material handling (MMH) techniques are shown to lead to low back pain, the most common work-related musculoskeletal disorder. Due to the complex nature and variability of MMH and the obtrusiveness and subjectiveness of existing hazard analysis methods, providing systematic, continuous, and automated risk assessment is challenging. We present a machine learning algorithm to detect and classify MMH tasks using minimally intrusive instrumented insoles and chest-mounted accelerometers. Six participants performed standing, walking, lifting/lowering, carrying, side-to-side load transferring (i.e., 5.7 kg and 12.5 kg), and pushing/pulling. Lifting and carrying loads, as well as hazardous behaviors (i.e., stooping, overextending, and jerky lifting), were detected with average accuracies of 85.3%/81.5% with/without the chest accelerometer. The proposed system allows for continuous exposure assessment during MMH and provides objective data for use with analytical risk assessment models that can be used to increase workplace safety through exposure estimation.

  • Article type: Journal Article
    UNASSIGNED: Machine learning has been used for classification of physical behavior bouts from hip-worn accelerometers; however, this research has been limited due to the challenges of directly observing and coding human behavior "in the wild." Deep learning algorithms, such as convolutional neural networks (CNNs), may offer better representation of data than other machine learning algorithms without the need for engineered features and may be better suited to dealing with free-living data. The purpose of this study was to develop a modeling pipeline for evaluation of a CNN model on a free-living data set and compare CNN inputs and results with the commonly used machine learning random forest and logistic regression algorithms.
    UNASSIGNED: Twenty-eight free-living women wore an ActiGraph GT3X+ accelerometer on their right hip for 7 days. A concurrently worn thigh-mounted activPAL device captured ground truth activity labels. The authors evaluated logistic regression, random forest, and CNN models for classifying sitting, standing, and stepping bouts. The authors also assessed the benefit of performing feature engineering for this task.
    UNASSIGNED: The CNN classifier performed best (average balanced accuracy for bout classification of sitting, standing, and stepping was 84%) compared with the other methods (56% for logistic regression and 76% for random forest), even without performing any feature engineering.
    UNASSIGNED: Using the recent advancements in deep neural networks, the authors showed that a CNN model can outperform other methods even without feature engineering. This has important implications for both the model's ability to deal with the complexity of free-living data and its potential transferability to new populations.
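The comparison pattern described above (same features, different classifiers, balanced accuracy) can be sketched for the two classical baselines; synthetic data stands in for the accelerometer features, and no CNN is included in this minimal sketch:

```python
# Hedged sketch: fit logistic regression and random forest on identical
# features and compare balanced accuracy, mirroring the baseline comparison
# in the study above. The data here are synthetic, not accelerometer data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Three classes stand in for sitting / standing / stepping bouts.
X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=0)):
    acc = balanced_accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(type(model).__name__, round(acc, 2))
```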
