Keywords: artificial intelligence; chatbots; countertransference; empathy; mental health; quality of care

MeSH: Humans; Emotions; Empathy; Psychotherapy / ethics; Psychotherapists; Countertransference; Mental Disorders / therapy; Mental Health; Adaptation, Psychological

Source: DOI:10.1111/bioe.13299

Abstract:
Mental health chatbots (MHCBs) designed to support individuals in coping with mental health issues are rapidly advancing. Currently, these MHCBs are predominantly used in commercial rather than clinical contexts, but this might change soon. The question is whether this use is ethically desirable. This paper addresses a critical yet understudied concern: assuming that MHCBs cannot have genuine emotions, how might this affect psychotherapy, and consequently the quality of treatment outcomes? We argue that if MHCBs lack emotions, they cannot have genuine (affective) empathy or utilise countertransference. Consequently, this gives reason to worry that MHCBs are (a) more liable to harm patients and (b) less likely to benefit them than human therapists. We discuss some responses to this worry and conclude that further empirical research is necessary to determine whether these worries are valid. We conclude that, even if these worries are valid, it does not mean that we should never use MHCBs. By discussing the broader ethical debate on the clinical use of chatbots, we point towards how further research can help us establish ethical boundaries for how we should use mental health chatbots.