Keywords: art of nursing; nursing role; quality of care; self-harm; therapeutic relationships

MeSH: Humans; Artificial Intelligence; Psychiatric Nursing; Writing; Mental Health; Self-Injurious Behavior

Source: DOI:10.1111/jpm.12965

Abstract:
WHAT IS KNOWN ON THE SUBJECT?: Artificial intelligence (AI) is freely available, responds to very basic text input (such as a question) and can now create a wide range of outputs, communicating in many languages or art forms. AI platforms like OpenAI's ChatGPT can now create passages of text that could be used to create plans of care for people with mental health needs. As such, AI output can be difficult to distinguish from human output, and there is a risk that its use could go unnoticed.

WHAT THIS PAPER ADDS TO EXISTING KNOWLEDGE?: Whilst it is known that AI can produce text or pass pre-registration health-profession exams, it is not known whether AI can produce meaningful results for care delivery. We asked ChatGPT basic questions about a fictitious person who presents with self-harm and then evaluated the quality of the output. We found that the output could look reasonable to laypersons, but it contained significant errors and ethical issues. There are potential harms to people in care if AI is used without an expert correcting or removing these errors.

WHAT ARE THE IMPLICATIONS FOR PRACTICE?: We suggest that there is a risk that AI use could cause harm if it were used in direct care delivery. There is a lack of policy and research to safeguard people receiving care, and this needs to be in place before AI is used in this way. Key aspects of the mental health nurse's role are relational, and AI use in its current form may diminish mental health nurses' ability to provide safe care. Many aspects of mental health recovery are linked to relationships and social engagement; AI cannot provide these and may push the people most in need of help further away from the services that assist recovery.

ABSTRACT:
Background: Artificial intelligence (AI) is being increasingly used and discussed in care contexts. ChatGPT has gained significant attention in popular and scientific literature, although how ChatGPT can be used in care delivery is not yet known.
Aims: To use artificial intelligence (ChatGPT) to create a mental health nursing care plan and evaluate the quality of the output against the authors' clinical experience and existing guidance.
Materials & Methods: Basic text commands were input into ChatGPT about a fictitious person called 'Emily' who presents with self-injurious behaviour. The output from ChatGPT was then evaluated against the authors' clinical experience and current (national) care guidance.
Results: ChatGPT was able to provide a care plan that incorporated some principles of dialectical behaviour therapy, but the output had significant errors and limitations, and thus there is a reasonable likelihood of harm if it is used in this way.
Discussion: AI use is increasing in direct-care contexts through chatbots or other means. However, AI can inhibit clinician-to-care-recipient engagement, 'recycle' existing stigma and introduce error, which may diminish the ability of care to uphold personhood and therefore lead to significant avoidable harms.
Conclusion: Use of AI in this context should be avoided until policy and guidance can safeguard the wellbeing of care recipients and the sophistication of AI output has increased. Given ChatGPT's ability to provide superficially reasonable outputs, there is a risk that errors may go unnoticed and thus increase the likelihood of patient harms. Further research evaluating AI output is needed to consider how AI may be used safely in care delivery.
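The Materials & Methods describe entering basic text prompts into ChatGPT about the fictitious 'Emily'. A minimal sketch of how a comparable query could be issued programmatically is shown below; the paper describes using the ChatGPT chat interface, so the OpenAI Python client, the model name and the prompt wording here are illustrative assumptions rather than the authors' actual method.

```python
# Illustrative sketch only: the study typed prompts into the ChatGPT web
# interface; this uses the OpenAI Python client as an assumed equivalent.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical prompt paraphrasing the study's scenario of a fictitious
# person ("Emily") who presents with self-injurious behaviour.
prompt = (
    "Emily is a fictitious person who presents with self-injurious "
    "behaviour. Write a mental health nursing care plan for Emily."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name, chosen for illustration
    messages=[{"role": "user", "content": prompt}],
)

# The generated care plan would then be evaluated against clinical
# experience and national care guidance, as the authors describe.
print(response.choices[0].message.content)
```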