Activity Recognition

  • Article type: Journal Article
    Lower urinary tract dysfunction (LUTD) is a debilitating condition that affects millions of individuals worldwide, greatly diminishing their quality of life. The use of wireless, catheter-free implantable devices for long-term ambulatory bladder monitoring, combined with a single-sensor system capable of detecting various bladder events, has the potential to significantly enhance the diagnosis and treatment of LUTD. However, these systems produce large amounts of bladder data that may contain physiological noise in the pressure signals caused by motion artifacts and sudden movements, such as coughing or laughing, potentially leading to false positives during bladder event classification and inaccurate diagnosis/treatment. Integration of activity recognition (AR) can improve classification accuracy, provide context regarding patient activity, and detect motion artifacts by identifying contractions that may result from patient movement. This work investigates the utility of including data from inertial measurement units (IMUs) in the classification pipeline and considers various digital signal processing (DSP) and machine learning (ML) techniques for optimization and activity classification. In a case study, we analyzed simultaneous bladder pressure and IMU data collected from an ambulating female Yucatan minipig. We identified 10 important yet relatively inexpensive-to-compute signal features, with which we achieved an average activity classification accuracy of 91.5%. Moreover, when classified activities were included in the bladder event analysis pipeline, we observed an improvement in classification accuracy from 81% to 89.0%. These results suggest that certain IMU features can improve bladder event classification accuracy with low computational overhead. Clinical Relevance: This work establishes that activity recognition may be used in conjunction with single-channel bladder event detection systems to distinguish between contractions and motion artifacts, reducing the incorrect classification of bladder events. This is relevant for emerging sensors that measure intravesical pressure alone and for the analysis of bladder pressure data from ambulatory subjects, which contain significant abdominal pressure artifacts.
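    To make the feature-extraction step above concrete, the following minimal sketch computes a handful of inexpensive time-domain features from windowed 3-axis IMU data and feeds them to an off-the-shelf classifier. The feature set, window size, and classifier choice are illustrative assumptions, not the authors' published pipeline.

```python
# Sketch: inexpensive time-domain features from windowed 3-axis IMU data,
# fed to an off-the-shelf classifier. Feature set and classifier are
# illustrative assumptions, not the authors' published pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def imu_window_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 3) accelerometer or gyroscope segment."""
    mag = np.linalg.norm(window, axis=1)          # signal magnitude per sample
    feats = [
        window.mean(axis=0), window.std(axis=0),  # per-axis mean / std
        np.ptp(window, axis=0),                   # per-axis peak-to-peak
        [mag.mean(), mag.std(), np.sqrt((mag ** 2).mean())],  # magnitude stats, RMS
    ]
    return np.concatenate([np.atleast_1d(f) for f in feats])

# Toy usage with random data standing in for real recordings.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 128, 3))          # 200 windows of 128 samples
labels = rng.integers(0, 4, size=200)             # 4 hypothetical activities
X = np.array([imu_window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```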

  • Article type: Journal Article
    Activity recognition combined with artificial intelligence is a vital area of research, spanning diverse domains from sports and healthcare to smart homes. In the industrial domain, and on manual assembly lines in particular, the emphasis shifts to human-machine interaction and thus to human activity recognition (HAR) within complex operational environments. Developing models and methods that can reliably and efficiently identify human activities, traditionally categorized only as either simple or complex, remains a key challenge in the field. Limitations of existing methods and approaches include their inability to account for the contextual complexities associated with the performed activities. Our approach to addressing this challenge is to create different levels of activity abstraction, which allow for a more nuanced comprehension of activities and define their underlying patterns. Specifically, we propose a new hierarchical taxonomy for human activity abstraction levels, based on the context of the performed activities, that can be used in HAR. The proposed hierarchy consists of five levels, namely atomic, micro, meso, macro, and mega. We compare this taxonomy with other approaches that divide activities into simple and complex categories, as well as other similar classification schemes, and provide real-world examples in different applications to demonstrate its efficacy. With regard to advanced technologies such as artificial intelligence, our study aims to guide and optimize industrial assembly procedures, particularly in uncontrolled non-laboratory environments, by shaping workflows to enable structured data analysis and by highlighting correlations across the various levels throughout the assembly progression. In addition, it establishes effective communication and a shared understanding between researchers and industry professionals, while also providing them with the essential resources to facilitate the development of systems, sensors, and algorithms for custom industrial use cases that adapt to the level of abstraction.
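    As an illustration of how the five-level taxonomy could be put to work in a HAR codebase, the short sketch below represents the abstraction levels as an ordered enum and tags activity labels with a level and a parent. The example activities are hypothetical and not taken from the paper.

```python
# Sketch: the five proposed abstraction levels as an ordered enum, with
# activity labels tagged by level. Example activities are hypothetical.
from enum import IntEnum
from dataclasses import dataclass
from typing import Optional

class AbstractionLevel(IntEnum):
    ATOMIC = 1   # e.g. a single hand movement
    MICRO = 2    # e.g. grasping a screw
    MESO = 3     # e.g. fastening a bracket
    MACRO = 4    # e.g. assembling a sub-component
    MEGA = 5     # e.g. completing an assembly order

@dataclass
class ActivityLabel:
    name: str
    level: AbstractionLevel
    parent: Optional[str] = None   # enclosing higher-level activity, if any

labels = [
    ActivityLabel("reach", AbstractionLevel.ATOMIC, parent="pick screw"),
    ActivityLabel("pick screw", AbstractionLevel.MICRO, parent="fasten bracket"),
    ActivityLabel("fasten bracket", AbstractionLevel.MESO, parent="assemble housing"),
]

# Group annotations by level, e.g. to train one recognizer per granularity.
by_level = {lvl.name: [a.name for a in labels if a.level == lvl]
            for lvl in AbstractionLevel}
print(by_level)
```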

  • Article type: Journal Article
    BACKGROUND: Physical activity is emerging as an outcome measure. Accelerometers have become an important tool for monitoring physical behavior, and newer analytical approaches based on activity recognition increase the level of detail that can be captured. Many studies have achieved high performance in the classification of physical behaviors through the use of multiple wearable sensors; however, multiple wearables can be impractical and reduce compliance.
    OBJECTIVE: The aim of this study was to develop and validate an algorithm for classifying several daily physical behaviors using a single thigh-mounted accelerometer and a supervised machine-learning scheme.
    METHODS: We collected training data by adding the behavior classes running, cycling, stair climbing, wheelchair ambulation, and vehicle driving to an existing algorithm with the classes of sitting, lying, standing, walking, and transitioning. After combining the training data, we used a random forest learning scheme for model development. We validated the algorithm through a simulated free-living procedure, using chest-mounted cameras to establish the ground truth. Furthermore, we adjusted our algorithm and compared its performance with that of an existing algorithm based on vector thresholds.
    RESULTS: We developed an algorithm to classify 11 physical behaviors relevant for rehabilitation. In the simulated free-living validation, the performance of the algorithm decreased to 57% as an average for the 11 classes (F-measure). After merging classes into sedentary behavior, standing, walking, running, and cycling, the result revealed high performance in comparison to both the ground truth and the existing algorithm.
    CONCLUSIONS: Using a single thigh-mounted accelerometer, we obtained high classification levels within specific behaviors. The behaviors classified with high levels of performance mostly occur in populations with higher levels of functioning. Further development should aim at describing behaviors within populations with lower levels of functioning.
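    The class-merging step reported in the results can be sketched as follows, assuming per-window predictions from a thigh-worn accelerometer model. The class names and merge map are illustrative stand-ins, not the study's exact label set.

```python
# Sketch: merging fine-grained behavior classes into coarser groups before
# re-scoring, assuming per-window string labels. Classes and merge map are
# illustrative, not the study's exact label set.
import numpy as np
from sklearn.metrics import f1_score

MERGE = {
    "sitting": "sedentary", "lying": "sedentary", "vehicle": "sedentary",
    "standing": "standing", "walking": "walking", "stairs": "walking",
    "running": "running", "cycling": "cycling",
}

def merge_labels(labels):
    return np.array([MERGE.get(l, l) for l in labels])

# Toy ground truth and predictions standing in for the free-living validation.
y_true = np.array(["sitting", "walking", "stairs", "running", "cycling", "lying"])
y_pred = np.array(["vehicle", "walking", "walking", "running", "cycling", "sitting"])

print("fine-grained F1:", f1_score(y_true, y_pred, average="macro", zero_division=0))
print("merged F1:      ", f1_score(merge_labels(y_true), merge_labels(y_pred),
                                   average="macro", zero_division=0))
```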

  • Article type: Journal Article
    BACKGROUND: The movement-determination software in current activity trackers is not accurate enough for scientific applications, and it is also not open source.
    OBJECTIVE: To address this issue, we developed an accurate, trainable, and open-source smartphone-based activity-tracking toolbox that consists of an Android app (HumanActivityRecorder) and 2 different deep learning algorithms that can be adapted to new behaviors.
    METHODS: We employed a semisupervised deep learning approach to identify the different classes of activity based on accelerometry and gyroscope data, using both our own data and open competition data.
    RESULTS: Our approach is robust against variation in sampling rate and sensor dimensional input, achieving an accuracy of around 87% in classifying 6 different behaviors on both our own recorded data and the MotionSense data. However, when the dimension-adaptive neural architecture model is tested on our own data, its accuracy drops to 26%, whereas our algorithm performs at 63% on the MotionSense data used to train the dimension-adaptive neural architecture model, demonstrating the superiority of our algorithm.
    CONCLUSIONS: HumanActivityRecorder is a versatile, retrainable, open-source, and accurate toolbox that is continually tested on new data. This enables researchers to adapt to the behavior being measured and achieve repeatability in scientific studies.
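    As a rough sketch of the kind of model such a toolbox might train, the code below defines a small 1-D CNN over stacked accelerometer and gyroscope windows (6 channels) for 6 activity classes. The architecture and hyperparameters are assumptions for illustration, not the toolbox's actual models.

```python
# Sketch: a small 1-D CNN over stacked accelerometer + gyroscope windows
# (6 channels) for 6 activity classes. Architecture and hyperparameters are
# illustrative assumptions, not the toolbox's actual models.
import torch
import torch.nn as nn

class TinyHARNet(nn.Module):
    def __init__(self, n_channels=6, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                # robust to window length
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                           # x: (batch, 6, window_len)
        return self.head(self.features(x).squeeze(-1))

model = TinyHARNet()
dummy = torch.randn(8, 6, 128)                      # 8 windows, 128 samples each
logits = model(dummy)
print(logits.shape)                                 # torch.Size([8, 6])
```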

  • Article type: Journal Article
    This article presents a dataset of activities associated with stress and boredom obtained through wearable sensors. Data were collected from 40 right-handed participants aged 20 to 25, evenly split between males and females. Each individual wore a smart device on the wrist of their dominant arm to facilitate data capture. The dataset covers five activities associated with stress and boredom, namely smoking, eating, nail biting, face touching, and staying still. These activities were selected for their potential psychological implications and were captured in an uncontrolled environment to mimic real-life scenarios. The data provide a unique resource for developing machine learning models aimed at recognizing these behaviors, which could lead to real-time analysis of and interventions for stress. A custom holder was used to keep the device on the wrist and ensure consistent orientation and placement across all participants. This holder was situated just above the wrist joint, a location typically associated with the placement of smartwatches. The dataset thus provides a unique opportunity for developing machine learning models for recognizing stress- and boredom-associated activities, as well as for real-time symptomatic analysis of stress and boredom.
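    Before model training, a recording like this would typically be segmented into fixed-length windows with a majority label. The sketch below shows one way to do that; the sampling rate, window length, and label names are assumptions rather than values from the dataset documentation.

```python
# Sketch: sliding-window segmentation of a labeled wrist-IMU recording.
# Sampling rate, window length, and label names are illustrative assumptions.
import numpy as np

def sliding_windows(signal: np.ndarray, labels: np.ndarray,
                    win: int = 100, step: int = 50):
    """signal: (n_samples, n_axes); labels: (n_samples,) activity per sample."""
    X, y = [], []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        seg_labels = labels[start:start + win]
        X.append(seg)
        # the majority label in the window becomes the window label
        vals, counts = np.unique(seg_labels, return_counts=True)
        y.append(vals[np.argmax(counts)])
    return np.stack(X), np.array(y)

# Toy recording: 10 s at 100 Hz of 3-axis accelerometer data.
acc = np.random.randn(1000, 3)
lab = np.repeat(["still", "smoking"], 500)
X, y = sliding_windows(acc, lab)
print(X.shape, y[:3])
```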

  • Article type: Journal Article
    Shoe-based wearable sensor systems are a growing research area in health monitoring, disease diagnosis, rehabilitation, and sports training. These systems, equipped with one or more sensors of the same or different types, capture information related to foot movement or pressure maps beneath the foot. This captured information offers an overview of the subject's overall movement, known as the human gait. Beyond sensing, these systems also provide a platform for hosting ambient energy harvesters: they hold the potential to harvest energy from foot movements and to operate related low-power devices sustainably. This article proposes two strategies (Strategy 1 and Strategy 2) for an energy-autonomous shoe-based system. Strategy 1 uses an accelerometer as the sensor for gait acquisition, which reflects the classical choice. Strategy 2 uses a piezoelectric element for the same purpose, which opens up a new perspective on its implementation. In both strategies, piezoelectric elements are used to harvest energy from foot activities and operate the system. The article presents a fair comparison between the two strategies in terms of power consumption, accuracy, and the extent to which piezoelectric energy harvesters can contribute to overall power management. Moreover, Strategy 2, which uses piezoelectric elements for simultaneous sensing and energy harvesting, is a power-optimized approach for an energy-autonomous shoe system.
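    A step-counting stage common to both strategies can be sketched with simple peak detection on a single-channel gait signal (accelerometer magnitude for Strategy 1, piezoelectric voltage for Strategy 2). The threshold and minimum step interval below are illustrative, not values from the article.

```python
# Sketch: peak-based step detection on a single-channel gait signal,
# applicable in spirit to either strategy. Threshold and spacing are illustrative.
import numpy as np
from scipy.signal import find_peaks

def count_steps(signal: np.ndarray, fs: float, min_height: float = 1.0,
                min_interval_s: float = 0.4) -> int:
    """Count peaks that are tall and far enough apart to be heel strikes."""
    peaks, _ = find_peaks(signal, height=min_height,
                          distance=int(min_interval_s * fs))
    return len(peaks)

# Toy signal: 10 s at 100 Hz with ~1.5 Hz cadence plus noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
sig = np.abs(np.sin(2 * np.pi * 1.5 * t)) + 0.1 * np.random.randn(t.size)
print("estimated steps:", count_steps(sig, fs))
```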

  • Article type: Journal Article
    Surgical phase recognition (SPR) is a crucial element in the digital transformation of the modern operating theater. While SPR based on video sources is well established, the incorporation of interventional X-ray sequences has not yet been explored. This paper presents Pelphix, a first approach to SPR for X-ray-guided percutaneous pelvic fracture fixation, which models the procedure at four levels of granularity (corridor, activity, view, and frame value), simulating the pelvic fracture fixation workflow as a Markov process to provide fully annotated training data. Using added supervision from the detection of bony corridors, tools, and anatomy, we learn image representations that are fed into a transformer model to regress surgical phases at the four granularity levels. Our approach demonstrates the feasibility of X-ray-based SPR, achieving an average accuracy of 99.2% on simulated sequences and 71.7% on cadaver data across all granularity levels, with up to 84% accuracy for the target corridor in real data. This work constitutes the first step toward SPR for the X-ray domain, establishing an approach to categorizing phases in X-ray-guided surgery, simulating realistic image sequences to enable machine learning model development, and demonstrating that this approach is feasible for the analysis of real procedures. As X-ray-based SPR continues to mature, it will benefit procedures in orthopedic surgery, angiography, and interventional radiology by equipping intelligent surgical systems with situational awareness in the operating room.
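    The workflow simulation described above can be approximated by sampling phase sequences from a Markov chain, as in the sketch below. The state names and transition probabilities are invented for illustration and do not reproduce Pelphix's simulation.

```python
# Sketch: sampling phase sequences from a Markov chain, in the spirit of the
# workflow simulation used to generate annotated training data. States and
# transition probabilities are invented for illustration.
import numpy as np

STATES = ["position_wire", "acquire_view", "insert_screw", "verify", "done"]
TRANSITIONS = np.array([
    [0.6, 0.3, 0.1, 0.0, 0.0],
    [0.2, 0.5, 0.3, 0.0, 0.0],
    [0.0, 0.1, 0.6, 0.3, 0.0],
    [0.0, 0.0, 0.1, 0.6, 0.3],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])

def simulate_workflow(max_steps: int = 50, seed: int = 0):
    rng = np.random.default_rng(seed)
    state = 0
    sequence = [STATES[state]]
    for _ in range(max_steps):
        state = rng.choice(len(STATES), p=TRANSITIONS[state])
        sequence.append(STATES[state])
        if STATES[state] == "done":
            break
    return sequence

print(simulate_workflow())
```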

  • Article type: Journal Article
    Automatic detection of activities in indoor spaces has been, and remains, a matter of great interest. In the field of health surveillance, one of the most frequently studied spaces is the bathroom of a home, and specifically the behaviour of users in that space, since certain pathologies can sometimes be deduced from it. The objective of this study is therefore to determine whether it is possible to automatically classify the main activities that occur within the bathroom using a methodology that is innovative with respect to the methods used to date, based on environmental parameters and the application of machine learning algorithms, thereby preserving privacy, which is a notable improvement over other methods. To this end, the methodology followed is based on the novel application of a pre-trained convolutional network for classifying graphs resulting from the monitoring of the environmental parameters of a bathroom. The results obtained allow us to conclude that, in addition to being able to check whether environmental conditions are adequate for health, it is possible to detect a high rate of true positives (around 80%) for some of the most frequent and important activities, thus facilitating their automation in a very simple and economical way.
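    The core idea, re-using a pre-trained image CNN to classify rendered graphs of environmental parameters, can be sketched as follows. The backbone, input size, and number of activity classes are assumptions, not the study's configuration.

```python
# Sketch: reusing a pre-trained CNN to classify images of environmental-sensor
# plots (e.g. temperature / humidity / CO2 curves rendered as graphs). The
# backbone choice and class count are assumptions, not the study's setup.
import torch
import torch.nn as nn
from torchvision import models

N_ACTIVITIES = 4                                   # e.g. shower, toilet, sink, none
backbone = models.resnet18(weights=None)           # load pretrained weights in practice
backbone.fc = nn.Linear(backbone.fc.in_features, N_ACTIVITIES)

# A rendered plot of the bathroom's environmental parameters, resized to 224x224.
plot_image = torch.randn(1, 3, 224, 224)           # stand-in for the real graph image
logits = backbone(plot_image)
print(logits.shape)                                # torch.Size([1, 4])
```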

  • Article type: Journal Article
    End-to-end deep learning models are increasingly applied to safety-critical human activity recognition (HAR) applications, e.g., healthcare monitoring and smart home control, to reduce developer burden and to increase the performance and robustness of prediction models. However, integrating HAR models in safety-critical applications requires trust, and recent approaches have aimed to balance the performance of deep learning models with explainable decision-making for complex activity recognition. Prior works have exploited the compositionality of complex HAR (i.e., higher-level activities composed of lower-level activities) to form models with symbolic interfaces, such as concept-bottleneck architectures, that facilitate inherently interpretable models. However, feature engineering for symbolic concepts, as well as for the relationships between the concepts, requires precise annotation of lower-level activities by domain experts, usually with fixed time windows, all of which imposes a heavy and error-prone workload on the domain expert. In this paper, we introduce X-CHAR, an eXplainable Complex Human Activity Recognition model that does not require precise annotation of low-level activities and offers explanations in the form of human-understandable, high-level concepts, while maintaining the robust performance of end-to-end deep learning models for time-series data. X-CHAR learns to model complex activity recognition as a sequence of concepts. For each classification, X-CHAR outputs a sequence of concepts and a counterfactual example as the explanation. We show that the sequence information of the concepts can be modeled using Connectionist Temporal Classification (CTC) loss without accurate start and end times for low-level annotations in the training dataset, significantly reducing developer burden. We evaluate our model on several complex activity datasets and demonstrate that it offers explanations without compromising prediction accuracy in comparison to baseline models. Finally, we conducted a Mechanical Turk study to show that the explanations provided by our model are more understandable than those from existing methods for complex activity recognition.
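    The CTC-based training objective mentioned above can be exercised directly with PyTorch's nn.CTCLoss, as in the sketch below. The tensor shapes and concept vocabulary size are illustrative and do not reflect X-CHAR's actual configuration.

```python
# Sketch: training a concept-sequence objective with CTC loss, which needs no
# aligned start/end times. Shapes and vocabulary size are illustrative only.
import torch
import torch.nn as nn

T, N, C = 50, 4, 11            # time steps, batch size, concepts (+1 blank at index 0)
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)
targets = torch.randint(1, C, (N, 7))                 # 7 unaligned concepts per sample
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 7, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)      # blank token lets the model idle between concepts
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(float(loss))
```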

  • Article type: Journal Article
    Computer vision (CV)-based recognition approaches have accelerated the automation of safety and progress monitoring on construction sites. However, few studies have explored their application in process-based quality control of construction works, especially for concealed work. In this study, a framework is developed to facilitate process-based quality control using Spatial-Temporal Graph Convolutional Networks (ST-GCNs). To test the framework experimentally, we used a plastering-work video dataset collected on site to recognize construction activities. An ST-GCN model was constructed to identify the four primary activities in plastering works and attained 99.48% accuracy on the validation set. The ST-GCN model was then employed to recognize the activities in three additional videos, which represented a process with four activities in the correct order, a process without the fiberglass-mesh covering activity, and a process with four activities but in the wrong order, respectively. The results indicated that the activity order could be clearly recovered from the model's activity recognition output. Hence, it was straightforward to judge whether key activities were missing or performed in the wrong order. This study has identified a promising framework with the potential to support the development of active, real-time, process-based quality control at construction sites.
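    The order-checking step can be sketched by collapsing consecutive per-clip predictions into an ordered activity list and comparing it against the expected sequence. The activity names below are paraphrased from the abstract, and the real label set may differ.

```python
# Sketch: turning per-clip activity predictions into an ordered activity list
# and checking it against an expected plastering sequence. Names are paraphrased.
from itertools import groupby

EXPECTED = ["base_coat", "fiberglass_mesh", "leveling", "finishing"]

def collapse(predictions):
    """Merge consecutive identical window-level predictions into one activity."""
    return [label for label, _ in groupby(predictions)]

def check_process(predictions):
    observed = collapse(predictions)
    missing = [a for a in EXPECTED if a not in observed]
    order_ok = ([a for a in observed if a in EXPECTED]
                == [a for a in EXPECTED if a in observed])
    return {"observed": observed, "missing": missing, "order_ok": order_ok}

# Example: all four activities present but in the wrong order.
preds = ["base_coat"] * 5 + ["leveling"] * 4 + ["fiberglass_mesh"] * 3 + ["finishing"] * 6
print(check_process(preds))
```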
