BACKGROUND: Computational psychiatry has the potential to advance the diagnosis, mechanistic understanding, and treatment of mental health conditions. Promising results from clinical samples have led to calls to extend these methods to mental health risk assessment in the general public; however, data typically used with clinical samples are neither available nor scalable for research in the general population. Digital phenotyping addresses this by capitalizing on the multimodal and widely available data created by sensors embedded in personal digital devices (eg, smartphones) and is a promising approach to extending computational psychiatry methods to improve mental health risk assessment in the general population.
OBJECTIVE: Building on recommendations from existing computational psychiatry and digital phenotyping work, we aim to create the first computational psychiatry data set that is tailored to studying mental health risk in the general population; includes multimodal, sensor-based behavioral features; and is designed to be widely shared across academia, industry, and government using gold standard methods for privacy, confidentiality, and data integrity.
METHODS: We are using a stratified, random sampling design with 2 crossed factors (difficulties with emotion regulation and perceived life stress) to recruit a sample of 400 community-dwelling adults balanced across high and low risk for episodic mental health conditions. Participants first complete self-report questionnaires assessing current and lifetime psychiatric and medical diagnoses and treatment, and current psychosocial functioning. Participants then complete a 7-day in situ data collection phase that includes providing daily audio recordings, passive sensor data collected from smartphones, self-reports of daily mood and significant events, and a verbal description of the significant daily events during a nightly phone call. Participants complete the same baseline questionnaires 6 and 12 months after this phase. Self-report questionnaires will be scored using standard methods. Raw audio and passive sensor data will be processed to create a suite of daily summary features (eg, time spent at home).
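To illustrate the kind of daily summary feature described above, the following is a minimal sketch of computing "time spent at home" from timestamped GPS samples. All names, thresholds, and the assumption that the home location is already known are illustrative; they are not the study's actual processing pipeline.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class GpsSample:
    t: float    # Unix timestamp, seconds
    lat: float  # latitude, degrees
    lon: float  # longitude, degrees


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # Earth radius, meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * r * asin(sqrt(a))


def time_at_home_hours(samples, home_lat, home_lon, radius_m=100.0):
    """Sum the time between consecutive GPS samples that both fall
    within radius_m of an (assumed known) home location.

    Real pipelines would first infer the home location (eg, the most
    common nighttime location cluster) and handle sampling gaps; this
    sketch skips both steps.
    """
    samples = sorted(samples, key=lambda s: s.t)
    total_s = 0.0
    for prev, cur in zip(samples, samples[1:]):
        if (haversine_m(prev.lat, prev.lon, home_lat, home_lon) <= radius_m
                and haversine_m(cur.lat, cur.lon, home_lat, home_lon) <= radius_m):
            total_s += cur.t - prev.t
    return total_s / 3600.0
```

Computing such features server-side from raw coordinates, then sharing only the daily summaries, is one common way to reduce the privacy risk of releasing location data.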
RESULTS: Data collection began in June 2022 and is expected to conclude by July 2024. To date, 310 participants have consented to the study; 149 have completed the baseline questionnaire and 7-day intensive data collection phase; and 61 and 31 have completed the 6- and 12-month follow-up questionnaires, respectively. Once completed, the proposed data set will be made available to academic researchers, industry, and government using a stepped approach to maximize data privacy.
CONCLUSIONS: This data set is designed as a complementary approach to current computational psychiatry and digital phenotyping research, with the goal of advancing mental health risk assessment within the general population. This data set aims to support the field's move away from siloed research laboratories collecting proprietary data and toward interdisciplinary collaborations that incorporate clinical, technical, and quantitative expertise at all stages of the research process.
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/53857.