MeSH terms: Animals; Mice; Behavior, Animal / physiology; Algorithms; Machine Learning; Video Recording / methods; Movement / physiology; Drosophila melanogaster / physiology; Humans; Male

Source: DOI:10.1038/s41592-024-02318-2

Abstract:
Keypoint tracking algorithms can flexibly quantify animal movement from videos obtained in a wide variety of settings. However, it remains unclear how to parse continuous keypoint data into discrete actions. This challenge is particularly acute because keypoint data are susceptible to high-frequency jitter that clustering algorithms can mistake for transitions between actions. Here we present keypoint-MoSeq, a machine learning-based platform for identifying behavioral modules ('syllables') from keypoint data without human supervision. Keypoint-MoSeq uses a generative model to distinguish keypoint noise from behavior, enabling it to identify syllables whose boundaries correspond to natural sub-second discontinuities in pose dynamics. Keypoint-MoSeq outperforms commonly used alternative clustering methods at identifying these transitions, at capturing correlations between neural activity and behavior and at classifying either solitary or social behaviors in accordance with human annotations. Keypoint-MoSeq also works in multiple species and generalizes beyond the syllable timescale, identifying fast sniff-aligned movements in mice and a spectrum of oscillatory behaviors in fruit flies. Keypoint-MoSeq, therefore, renders accessible the modular structure of behavior through standard video recordings.
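The central point above, that a model with explicit temporal structure can keep tracking jitter from being read as transitions between actions, can be illustrated with a minimal sketch. The code below is not keypoint-MoSeq's generative model; it is a simplified stand-in that segments a simulated, jittery keypoint trajectory with a Gaussian hidden Markov model (hmmlearn), and all data, parameter values, and variable names are illustrative assumptions.

```python
# Illustrative sketch only, not the keypoint-MoSeq model. A Gaussian HMM's
# learned transition matrix favors persistent states, so per-frame tracking
# jitter is not segmented as rapid action switches, which is the failure
# mode of frame-wise clustering described in the abstract.
import numpy as np
from hmmlearn import hmm  # assumption: hmmlearn is installed

rng = np.random.default_rng(0)

# Simulate a single 2-D keypoint that alternates between a slow "action" and a
# fast one every `block` frames, then add frame-to-frame tracking jitter.
n_frames, block = 3000, 300
true_states = (np.arange(n_frames) // block) % 2
step = np.where(true_states == 0, 0.05, 0.5)          # per-frame displacement
heading = np.cumsum(rng.normal(0.0, 0.1, n_frames))   # slowly drifting heading
clean = np.cumsum(np.stack([step * np.cos(heading),
                            step * np.sin(heading)], axis=1), axis=0)
keypoints = clean + rng.normal(0.0, 0.05, clean.shape)  # keypoint jitter

# Crude pose-dynamics feature: frame-to-frame displacement magnitude.
speed = np.linalg.norm(np.diff(keypoints, axis=0), axis=1, keepdims=True)

# Fit a 2-state Gaussian HMM on the feature and decode the state sequence.
model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(speed)
states = model.predict(speed)

# The HMM should recover roughly one switch per block boundary rather than a
# switch on every jittery frame.
print("inferred switches:", int(np.sum(np.diff(states) != 0)))
print("true switches:    ", int(np.sum(np.diff(true_states) != 0)))
```

The published method goes further than this sketch: its generative model explicitly separates keypoint observation noise from latent pose dynamics, which is what allows syllable boundaries to line up with sub-second discontinuities in pose rather than with tracking artifacts.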