Keywords: AI; LLM; LLMs; ML; NLP; artificial intelligence; deep learning; depression; digital health; digital intervention; digital interventions; digital technology; ethics; generative AI; large language model; large language models; machine learning; mental disease; mental diseases; mental health; mental illness; mental illnesses; natural language processing

MeSH: Humans; Artificial Intelligence; Depression / psychology / therapy; Language; Communication; Humanism

Source: DOI: 10.2196/56569

Abstract:
Large language model (LLM)-powered services are gaining popularity in various applications due to their exceptional performance in many tasks, such as sentiment analysis and answering questions. Recently, research has been exploring their potential use in digital health contexts, particularly in the mental health domain. However, implementing LLM-enhanced conversational artificial intelligence (CAI) presents significant ethical, technical, and clinical challenges. In this viewpoint paper, we discuss 2 challenges that affect the use of LLM-enhanced CAI for individuals with mental health issues, focusing on the use case of patients with depression: the tendency to humanize LLM-enhanced CAI and their lack of contextualized robustness. Our approach is interdisciplinary, relying on considerations from philosophy, psychology, and computer science. We argue that the humanization of LLM-enhanced CAI hinges on the reflection of what it means to simulate "human-like" features with LLMs and what role these systems should play in interactions with humans. Further, ensuring the contextualization of the robustness of LLMs requires considering the specificities of language production in individuals with depression, as well as its evolution over time. Finally, we provide a series of recommendations to foster the responsible design and deployment of LLM-enhanced CAI for the therapeutic support of individuals with depression.