METHODS: The approach automatically labels video data captured in a gait lab to assess visual attention and details of the environment. The proposed algorithm uses a YoloV8 model trained on a novel lab-based dataset.
RESULTS: VARFA achieved excellent evaluation metrics (0.93 mAP50), identifying and localizing static objects (e.g., obstacles in the walking path) with an average accuracy of 93%. Similarly, a U-NET based track/path segmentation model achieved good metrics (IoU 0.82), indicating that the predicted tracks (i.e., walking paths) align closely with the actual track, with an overlap of 82%. Notably, both models achieved these metrics while processing at real-time speeds, demonstrating their efficiency and effectiveness for pragmatic applications.
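For readers unfamiliar with the segmentation metric reported above, Intersection over Union (IoU) is the ratio of the overlap between predicted and ground-truth masks to their combined area. The following is a minimal illustrative sketch (not the authors' evaluation code) using small toy masks:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union for two binary segmentation masks."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / union if union else 0.0

# Toy 4x4 masks: predicted walking-path region vs. annotated region.
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True   # 6 predicted pixels
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 0:3] = True     # 6 ground-truth pixels

# Overlap is 4 pixels, union is 8 pixels, so IoU = 0.5.
print(iou(pred, gt))
```

An IoU of 0.82, as reported for the U-NET model, means the predicted and annotated walking paths share 82% of their combined pixel area.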
CONCLUSIONS: The instrumented approach improves the efficiency and accuracy of fall risk assessment by evaluating the visual allocation of attention (i.e., when and where a person is attending) during navigation, broadening the scope of instrumentation in this area. VARFA could better inform fall risk assessment by providing behaviour and context data to complement other instrumented measures (e.g., IMU data) during gait tasks. This may have notable implications for personalized rehabilitation across the wide range of clinical cohorts in which impaired gait and increased fall risk are common.