Keywords: Deep learning; Fall risk; Gait analysis; Object detection; Visual attention

MeSH: Deep Learning; Accidental Falls / prevention & control; Humans; Risk Assessment / methods; Walking / physiology; Male; Female; Adult; Eye-Tracking Technology; Eye Movements / physiology; Gait / physiology; Video Recording; Young Adult

Source: DOI:10.1186/s12984-024-01400-2   PDF (PubMed)

Abstract:
BACKGROUND: Falls are common in a range of clinical cohorts, where routine risk assessment often comprises subjective visual observation only. Typically, observational assessment involves evaluation of an individual's gait during scripted walking protocols within a lab to identify deficits that potentially increase fall risk, but subtle deficits may not be (readily) observable. Therefore, objective approaches (e.g., inertial measurement units, IMUs) are useful for quantifying high-resolution gait characteristics, enabling more informed fall risk assessment by capturing subtle deficits. However, IMU-based gait instrumentation alone is limited, failing to consider participant behaviour and details within the environment (e.g., obstacles). Video-based eye-tracking glasses may provide additional insight into fall risk, clarifying how people traverse environments based on head and eye movements. Recording head and eye movements can provide insights into how the allocation of visual attention to environmental stimuli influences successful navigation around obstacles. Yet, manual review of video data to evaluate head and eye movements is time-consuming and subjective. An automated approach is needed but none currently exists. This paper proposes a deep learning-based object detection algorithm (VARFA) to instrument vision and video data during walks, complementing instrumented gait.
METHODS: The approach automatically labels video data captured in a gait lab to assess visual attention and details of the environment. The proposed algorithm uses a YoloV8 model trained on a novel lab-based dataset.
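As a minimal sketch of this kind of pipeline, the following assumes the common `ultralytics` YOLOv8 implementation and hypothetical file names (`varfa_lab.yaml` dataset config, `walk.mp4` eye-tracker video); it illustrates the general technique, not the authors' released code:

```python
# Sketch: fine-tune a pretrained YOLOv8 detector on a custom lab dataset,
# then run it over an egocentric walking video. File names are hypothetical.
from ultralytics import YOLO

# Start from COCO-pretrained weights and fine-tune on lab classes
# (e.g., obstacles, track markings) described in a YOLO dataset YAML.
model = YOLO("yolov8n.pt")
model.train(data="varfa_lab.yaml", epochs=100, imgsz=640)

# Stream inference frame by frame, as would be needed for real-time use.
for result in model("walk.mp4", stream=True):
    for box in result.boxes:
        cls_name = result.names[int(box.cls)]
        conf = float(box.conf)
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{cls_name} {conf:.2f} at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
```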
RESULTS: VARFA achieved excellent evaluation metrics (0.93 mAP50), identifying and localizing static objects (e.g., obstacles in the walking path) with an average accuracy of 93%. Similarly, a U-Net based track/path segmentation model achieved good metrics (IoU 0.82), suggesting that the predicted tracks (i.e., walking paths) align closely with the actual track, with an overlap of 82%. Notably, both models achieved these metrics while processing at real-time speeds, demonstrating efficiency and effectiveness for pragmatic applications.
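For context, the reported IoU is computable directly from binary masks; here is a minimal NumPy sketch of the standard metric (not the authors' evaluation code):

```python
import numpy as np

def mask_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:  # both masks empty: define IoU as 1
        return 1.0
    return float(np.logical_and(pred, target).sum() / union)

# Toy example: a predicted track mask overlapping the ground-truth mask.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 0:4] = True      # 8 px
target = np.zeros((4, 4), dtype=bool); target[1:3, 1:4] = True  # 6 px
print(mask_iou(pred, target))  # intersection 6 / union 8 = 0.75
```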
CONCLUSIONS: The instrumented approach improves the efficiency and accuracy of fall risk assessment by evaluating the allocation of visual attention (i.e., information about when and where a person is attending) during navigation, broadening the scope of instrumentation in this area. Using VARFA to instrument vision could better inform fall risk assessment by providing behaviour and context data to complement instrumented (e.g., IMU) data during gait tasks. That may have notable implications for (e.g., personalized) rehabilitation across a wide range of clinical cohorts where poor gait and increased fall risk are common.