graph convolutional networks

  • Article type: Journal Article
    BACKGROUND: Prolonged improper posture can lead to forward head posture (FHP), causing headaches, impaired respiratory function, and fatigue. This is especially relevant in sedentary scenarios, where individuals often maintain static postures for extended periods, a significant part of daily life for many. The development of a system capable of detecting FHP is crucial, as it would not only alert users to correct their posture but also serve the broader goal of contributing to public health by preventing the progression of chronic injuries associated with this condition. However, despite significant advancements in estimating human poses from standard 2D images, most computational pose models do not include measurement of the craniovertebral angle, which involves the C7 vertebra and is crucial for diagnosing FHP.
    OBJECTIVE: Accurate diagnosis of FHP typically requires dedicated tools, such as clinical postural assessments or specialized imaging equipment, but their use is impractical for continuous, real-time monitoring in everyday settings. Therefore, it is necessary to develop an accessible, efficient method for regular posture assessment that can be easily integrated into daily activities, provide real-time feedback, and promote corrective action.
    METHODS: The system sequentially estimates 2D and 3D human anatomical key points from a provided 2D image, using the Detectron2D and VideoPose3D algorithms, respectively. It then uses a graph convolutional network (GCN), explicitly crafted to analyze the spatial configuration and alignment of the upper body's anatomical key points in 3D space. This GCN aims to implicitly learn the intricate relationship between the estimated 3D key points and the correct posture, specifically to identify FHP.
    RESULTS: The test accuracy was 78.27% when inputs included all joints corresponding to the upper body key points. The GCN model demonstrated slightly superior balanced performance across classes with an F1-score (macro) of 77.54%, compared to the baseline feedforward neural network (FFNN) model's 75.88%. Specifically, the GCN model showed a more balanced precision and recall between the classes, suggesting its potential for better generalization in FHP detection across diverse postures. Meanwhile, the baseline FFNN model demonstrated higher precision for FHP cases but at the cost of lower recall, indicating that while it is more accurate in confirming FHP when detected, it misses a significant number of actual FHP instances. This assertion is further substantiated by the examination of the latent feature space using t-distributed stochastic neighbor embedding, where the GCN model presented an isotropic distribution, unlike the FFNN model, which showed an anisotropic distribution.
    CONCLUSIONS: Using 3D human pose estimation joints derived from 2D image input, it was found that the proposed GCN-based network can learn FHP-related features for developing a posture correction system. We conclude the paper by addressing the limitations of our current system and proposing potential avenues for future work in this area.
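The abstract above describes feeding estimated 3D keypoints into a GCN that learns the spatial configuration of upper-body joints. As a minimal sketch of the core GCN propagation rule, H' = ReLU(Â H W) with symmetric normalization, the pure-Python toy below runs one graph-convolution layer over a hypothetical 4-joint upper-body skeleton; the joint names, coordinates, edges, and weights are illustrative assumptions, not the paper's actual model or data.

```python
import math

def normalize_adjacency(adj):
    """Symmetrically normalize A + I: D^{-1/2} (A + I) D^{-1/2}."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    return [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]

def gcn_layer(a_norm, feats, weight):
    """One graph-convolution layer: ReLU(A_hat @ H @ W)."""
    n, f_in, f_out = len(feats), len(weight), len(weight[0])
    # Aggregate neighbor features: A_hat @ H
    agg = [[sum(a_norm[i][k] * feats[k][j] for k in range(n))
            for j in range(f_in)] for i in range(n)]
    # Linear transform followed by ReLU
    return [[max(0.0, sum(agg[i][k] * weight[k][j] for k in range(f_in)))
             for j in range(f_out)] for i in range(n)]

# Toy skeleton: head, neck, left shoulder, right shoulder (star around the neck).
adj = [
    [0, 1, 0, 0],  # head - neck
    [1, 0, 1, 1],  # neck - head and both shoulders
    [0, 1, 0, 0],
    [0, 1, 0, 0],
]
# Hypothetical 3D coordinates (x, y, z) per keypoint.
feats = [[0.0, 1.70, 0.10], [0.0, 1.50, 0.0],
         [-0.2, 1.45, 0.0], [0.2, 1.45, 0.0]]
weight = [[1.0, 0.0], [0.0, 1.0], [0.5, -0.5]]  # maps 3 -> 2 features

hidden = gcn_layer(normalize_adjacency(adj), feats, weight)
print(len(hidden), len(hidden[0]))  # 4 nodes, 2 features each
```

In the paper's setting, stacking such layers and pooling the node features would feed a classifier head that outputs the FHP / non-FHP decision.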

  • Article type: Journal Article
    BACKGROUND: Accurate and timely assessment of children's developmental status is crucial for early diagnosis and intervention. More accurate and automated developmental assessments are essential due to the lack of trained health care providers and imprecise parental reporting. In various areas of development, gross motor development in toddlers is known to be predictive of subsequent childhood developments.
    OBJECTIVE: The purpose of this study was to develop a model to assess gross motor behavior and integrate the results to determine the overall gross motor status of toddlers. This study also aimed to identify behaviors that are important in the assessment of overall gross motor skills and detect critical moments and important body parts for the assessment of each behavior.
    METHODS: We used behavioral videos of toddlers aged 18-35 months. To assess gross motor development, we selected 4 behaviors (climb up the stairs, go down the stairs, throw the ball, and stand on 1 foot) that have been validated with the Korean Developmental Screening Test for Infants and Children. In the child behavior videos, we estimated each child's position as a bounding box and extracted human keypoints within the box. In the first stage, the videos with the extracted human keypoints of each behavior were evaluated separately using a graph convolutional network (GCN)-based algorithm. The probability values obtained for each label in the first-stage model were used as input for the second-stage model, the extreme gradient boosting (XGBoost) algorithm, to predict the overall gross motor status. For interpretability, we used gradient-weighted class activation mapping (Grad-CAM) to identify important moments and relevant body parts during the movements. The Shapley additive explanations method was used to assess variable importance and determine the movements that contributed the most to the overall developmental assessment.
    RESULTS: Behavioral videos of 4 gross motor skills were collected from 147 children, resulting in a total of 2395 videos. The stage-1 GCN model to evaluate each behavior had an area under the receiver operating characteristic curve (AUROC) of 0.79 to 0.90. Keypoint-mapping Grad-CAM visualization identified important moments in each behavior and differences in important body parts. The stage-2 XGBoost model to assess the overall gross motor status had an AUROC of 0.90. Among the 4 behaviors, "go down the stairs" contributed the most to the overall developmental assessment.
    CONCLUSIONS: Using movement videos of toddlers aged 18-35 months, we developed objective and automated models to evaluate each behavior and assess each child\'s overall gross motor performance. We identified the important behaviors for assessing gross motor performance and developed methods to recognize important moments and body parts while evaluating gross motor performance.
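The two-stage design above feeds the stage-1 per-label probabilities for each behavior into the stage-2 XGBoost classifier. The sketch below illustrates only that hand-off: softmax over hypothetical stage-1 logits, concatenated into one stage-2 feature row. The behavior names match the abstract, but the two-label (pass/fail) scheme and the logit values are made-up assumptions, and the real stage 2 would be an XGBoost model rather than this plain feature assembly.

```python
import math

def softmax(logits):
    """Convert one stage-1 model's logits into label probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def stage2_features(stage1_logits_by_behavior):
    """Concatenate per-behavior probability vectors into one stage-2 input row."""
    feats = []
    for behavior in ["climb_up_stairs", "go_down_stairs",
                     "throw_ball", "stand_on_1_foot"]:
        feats.extend(softmax(stage1_logits_by_behavior[behavior]))
    return feats

# Dummy stage-1 logits for a single child, 2 labels per behavior (assumed).
logits = {
    "climb_up_stairs": [2.0, -1.0],
    "go_down_stairs": [0.5, 0.5],
    "throw_ball": [-0.3, 1.2],
    "stand_on_1_foot": [1.0, 0.0],
}
row = stage2_features(logits)
print(len(row))  # 4 behaviors x 2 labels = 8 features
```

Each group of probabilities sums to 1, so the stage-2 model sees a calibrated summary of every behavior rather than raw video features.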

  • Article type: Journal Article
    BACKGROUND: Lung cancer is the leading cause of cancer-related mortality, and accurate prediction of patient survival can aid treatment planning and potentially improve outcomes. In this study, we proposed an automated system capable of lung segmentation and survival prediction using a graph convolutional network (GCN) with CT data in non-small cell lung cancer (NSCLC) patients.
    METHODS: In this retrospective study, we segmented 10 parts of the lung CT images and built individual lung graphs as inputs to train a GCN model to predict 5-year overall survival. A Cox proportional-hazards model, a set of machine learning (ML) models, a convolutional neural network based on tumor (Tumor-CNN), and the current TNM staging system were used for comparison.
    RESULTS: A total of 1,705 patients (main cohort) and 125 patients (external validation cohort) with lung cancer (stages I and II) were included. The GCN model was significantly predictive of 5-year overall survival with an AUC of 0.732 (p < 0.0001). The model stratified patients into low- and high-risk groups, which were associated with overall survival (HR = 5.41; 95% CI: 2.32-10.14; p < 0.0001). On the external validation dataset, our GCN model achieved an AUC of 0.678 (95% CI: 0.564-0.792; p < 0.0001).
    CONCLUSIONS: The proposed GCN model outperformed all ML, Tumor-CNN, and TNM staging models. This study demonstrated the value of utilizing graph-structured data from medical imaging, resulting in a robust and effective model for the prediction of survival in early-stage lung cancer.
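The survival model above is reported in terms of AUC (0.732 on the main cohort, 0.678 on external validation). As a self-contained illustration of that metric, the function below computes AUROC from predicted risk scores and binary outcomes via the rank-sum (Mann-Whitney) formulation, AUROC = P(score_pos > score_neg) with ties counted as 0.5; the scores and labels are toy data, not values from the study.

```python
def auroc(scores, labels):
    """AUROC as the probability a positive outranks a negative (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: higher score should indicate the positive (event) class.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(round(auroc(scores, labels), 3))  # → 0.889
```

A perfect risk score would give 1.0 and a random one about 0.5, which is why the reported 0.732 indicates meaningful, if imperfect, discrimination.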