Keywords: LLMs; artificial intelligence; generative pre-trained transformer (GPT); large language model; psychiatry

Source: DOI:10.3389/fpsyt.2024.1422807   PDF (PubMed)

Abstract:
Background: With their unmatched ability to interpret and engage with human language and context, large language models (LLMs) hint at the potential to bridge AI and human cognitive processes. This review explores current applications of LLMs, such as ChatGPT, in the field of psychiatry.
Methods: Following PRISMA guidelines, we searched PubMed, Embase, Web of Science, and Scopus for articles published up to March 2024.
Results: From 771 retrieved articles, we included 16 that directly examine LLMs' use in psychiatry. LLMs, particularly ChatGPT and GPT-4, showed diverse applications in clinical reasoning, social media, and education within psychiatry. They can assist in diagnosing mental health issues, managing depression, evaluating suicide risk, and supporting education in the field. However, our review also highlights their limitations, such as difficulty with complex cases and potential underestimation of suicide risk.
Conclusion: Early research in psychiatry reveals LLMs' versatile applications, from diagnostic support to educational roles. Given the rapid pace of advancement, future investigations are poised to explore the extent to which these models might redefine traditional roles in mental health care.