Chatbot

  • Article Type: Journal Article
    BACKGROUND: Patients find technology tools to be more approachable for seeking sensitive health-related information, such as reproductive health information. The inventive conversational ability of artificial intelligence (AI) chatbots, such as ChatGPT (OpenAI Inc), offers a potential means for patients to effectively locate answers to their health-related questions digitally.
    OBJECTIVE: A pilot study was conducted to compare the novel ChatGPT with the existing Google Search technology on their ability to offer accurate, effective, and current information regarding the actions to take after missing a dose of an oral contraceptive pill (OCP).
    METHODS: A sequence of 11 questions, mimicking a patient inquiring about the action to take after missing a dose of an OCP, was input into ChatGPT as a cascade, given the conversational ability of ChatGPT. The questions were input into 4 different ChatGPT accounts, with the account holders being of various demographics, to evaluate potential differences and biases in the responses given to different account holders. The leading question, "What should I do if I missed a day of my oral contraception birth control?", alone was then input into Google Search, given its nonconversational nature. The results from the ChatGPT questions and the Google Search results for the leading question were evaluated on their readability, accuracy, and effective delivery of information.
    RESULTS: The ChatGPT results were determined to be at an overall higher grade reading level and to require a longer reading duration; they were also less accurate, less current, and delivered information less effectively. In contrast, the Google Search answer box and snippets were at a lower grade reading level, required a shorter reading duration, were more current, referenced the origin of the information (transparent), and provided the information in various formats in addition to text.
    CONCLUSIONS: ChatGPT has room for improvement in accuracy, transparency, recency, and reliability before it can be equitably implemented into health care information delivery and provide the potential benefits it offers. However, AI may be used as a tool for providers to educate their patients in preferred, creative, and efficient ways, such as using AI to generate accessible short educational videos from health care provider-vetted information. Larger studies representing a diverse group of users are needed.
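    The readability screening described above can be approximated programmatically. Below is a minimal sketch, assuming the Python textstat package as the grade-level scorer and an average reading speed of 200 words per minute; neither the tool nor the speed is named by the study, and the sample strings are hypothetical.

```python
# Readability profile for a chatbot or search-snippet response.
# Assumptions (not from the paper): textstat for the Flesch-Kincaid grade
# and a 200-words-per-minute reading speed for the duration estimate.
import textstat

def readability_profile(text: str, wpm: int = 200) -> dict:
    """Return the estimated grade reading level and reading duration."""
    words = len(text.split())
    return {
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "reading_minutes": round(words / wpm, 2),
    }

# Hypothetical sample responses for comparison.
chatgpt_reply = ("If you miss one active pill, take it as soon as you remember "
                 "and continue taking the remaining pills at the usual time.")
google_snippet = "Take the missed pill as soon as you remember."
print(readability_profile(chatgpt_reply))
print(readability_profile(google_snippet))
```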

  • Article Type: Journal Article
    Artificial intelligence use is increasing exponentially, including by patients in medical decision-making. Because of the limitations of chatbots and the possibility of receiving erroneous or incomplete information, patient.

  • Article Type: Journal Article
    BACKGROUND: Mobile health (mHealth) apps offer unique opportunities to support self-care and behavior change, but poor user engagement limits their effectiveness. This is particularly true for fully automated mHealth apps without any human support. Human support in mHealth apps is associated with better engagement but at the cost of reduced scalability.
    OBJECTIVE: This work aimed to (1) describe the theory-informed development of a fully automated relaxation and mindfulness app to reduce distress in people with cancer (CanRelax app 2.0), (2) describe engagement with the app on multiple levels within a fully automated randomized controlled trial over 10 weeks, and (3) examine whether engagement was related to user characteristics.
    METHODS: The CanRelax app 2.0 was developed in iterative processes involving input from people with cancer and relevant experts. The app includes evidence-based relaxation exercises, personalized weekly coaching sessions with a rule-based conversational agent, 39 self-enactable behavior change techniques, a self-monitoring dashboard with gamification elements, highly tailored reminder notifications, an educational video clip, and personalized in-app letters. For the larger study, German-speaking adults diagnosed with cancer within the last 5 years were recruited via the web in Switzerland, Austria, and Germany. Engagement was analyzed in a sample of 100 study participants with multiple measures on a micro level (completed coaching sessions, relaxation exercises practiced with the app, and feedback on the app) and a macro level (relaxation exercises practiced without the app and self-efficacy toward self-set weekly relaxation goals).
    RESULTS: In week 10, a total of 62% (62/100) of the participants were actively using the CanRelax app 2.0. No associations were identified between engagement and level of distress at baseline, sex assigned at birth, educational attainment, or age. At the micro level, 71.88% (3520/4897) of all relaxation exercises and 714 coaching sessions were completed in the app, and all participants who provided feedback (52/100, 52%) expressed positive app experiences. At the macro level, 28.12% (1377/4897) of relaxation exercises were completed without the app, and participants' self-efficacy remained stable at a high level. At the same time, participants raised their weekly relaxation goals, which indicates a potential relative increase in self-efficacy.
    CONCLUSIONS: The CanRelax app 2.0 achieved promising engagement even though it provided no human support. Fully automated social components might have compensated for the lack of human involvement and should be investigated further. More than one-quarter (1377/4897, 28.12%) of all relaxation exercises were practiced without the app, highlighting the importance of assessing engagement on multiple levels.
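    For illustration, the micro/macro engagement split reported above (3520/4897 exercises in the app vs. 1377/4897 without it) reduces to a tally over an exercise log. The sketch below assumes a hypothetical record format with a with_app flag; it is not the study's actual analysis code.

```python
# Tally micro-level (in-app) vs. macro-level (without-app) relaxation
# exercises. The record format and field names are hypothetical.
from collections import Counter

exercise_log = [
    {"participant": 1, "with_app": True},
    {"participant": 1, "with_app": False},
    {"participant": 2, "with_app": True},
]  # the study's full log would hold 4897 such records

def engagement_split(log: list[dict]) -> dict:
    counts = Counter("in_app" if e["with_app"] else "without_app" for e in log)
    total = sum(counts.values())
    return {k: f"{v}/{total} ({100 * v / total:.2f}%)" for k, v in counts.items()}

print(engagement_split(exercise_log))
```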

  • Article Type: Journal Article
    BACKGROUND: Large language models (LLMs), such as ChatGPT (OpenAI), are increasingly used in medicine and supplement standard search engines as information sources. This leads to more "consultations" of LLMs about personal medical symptoms.
    OBJECTIVE: This study aims to evaluate ChatGPT's performance in answering clinical case-based questions in otorhinolaryngology (ORL) in comparison to ORL consultants' answers.
    METHODS: We used 41 case-based questions from established ORL study books and past German state examinations for doctors. The questions were answered by both ORL consultants and ChatGPT 3. ORL consultants rated all responses, except their own, on medical adequacy, conciseness, coherence, and comprehensibility using a 6-point Likert scale. They also identified (in a blinded setting) if the answer was created by an ORL consultant or ChatGPT. Additionally, the character count was compared. Due to the rapidly evolving pace of technology, a comparison between responses generated by ChatGPT 3 and ChatGPT 4 was included to give an insight into the evolving potential of LLMs.
    RESULTS: Ratings in all categories were significantly higher for the ORL consultants (P<.001). Although inferior to the consultants' scores, ChatGPT's scores were relatively higher in the semantic categories (conciseness, coherence, and comprehensibility) than in medical adequacy. ORL consultants correctly identified ChatGPT as the source in 98.4% (121/123) of cases. ChatGPT's answers had a significantly higher character count compared to the ORL consultants' (P<.001). A comparison between responses generated by ChatGPT 3 and ChatGPT 4 showed a slight improvement in medical accuracy as well as better coherence of the answers provided. In contrast, neither conciseness (P=.06) nor comprehensibility (P=.08) improved significantly, despite a significant 52.5% increase in the mean number of characters ((1470-964)/964; P<.001).
    CONCLUSIONS: While ChatGPT provided longer answers to medical problems, medical adequacy and conciseness were significantly lower compared to ORL consultants\' answers. LLMs have potential as augmentative tools for medical care, but their \"consultation\" for medical problems carries a high risk of misinformation as their high semantic quality may mask contextual deficits.
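    The abstract does not name the statistical test behind these P values. One plausible reading for ordinal 6-point Likert ratings is a Mann-Whitney U test; the sketch below uses hypothetical rating vectors and also reproduces the reported 52.5% character-count increase.

```python
# Assumed reconstruction: the study does not name its test; a Mann-Whitney U
# test is a common choice for ordinal Likert ratings. Scores are illustrative.
from scipy.stats import mannwhitneyu

consultant_ratings = [6, 5, 6, 6, 5, 6, 5, 6]  # hypothetical medical-adequacy scores
chatgpt_ratings = [4, 3, 5, 4, 3, 4, 4, 3]     # hypothetical medical-adequacy scores

stat, p = mannwhitneyu(consultant_ratings, chatgpt_ratings, alternative="two-sided")
print(f"U = {stat}, P = {p:.4f}")

# Check of the reported relative character increase from ChatGPT 3 to ChatGPT 4:
print((1470 - 964) / 964)  # ≈ 0.525, i.e., the 52.5% cited above
```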

  • Article Type: Journal Article
    BACKGROUND: ChatGPT has gained global attention recently owing to its high performance in generating a wide range of information and retrieving any kind of data instantaneously. ChatGPT has also been tested for the United States Medical Licensing Examination (USMLE) and has successfully cleared it. Thus, its usability in medical education is now one of the key discussions worldwide.
    OBJECTIVE: The objective of this study is to evaluate the performance of ChatGPT in medical biochemistry using clinical case vignettes.
    METHODS: The performance of ChatGPT was evaluated in medical biochemistry using 10 clinical case vignettes. Clinical case vignettes were randomly selected and inputted in ChatGPT along with the response options. We tested the responses for each clinical case twice. The answers generated by ChatGPT were saved and checked using our reference material.
    RESULTS: ChatGPT generated correct answers for 4 questions on the first attempt. For the other cases, there were differences in responses generated by ChatGPT in the first and second attempts. In the second attempt, ChatGPT provided correct answers for 6 questions and incorrect answers for 4 questions out of the 10 cases that were used. But, to our surprise, for case 3, different answers were obtained with multiple attempts. We believe this to have happened owing to the complexity of the case, which involved addressing various critical medical aspects related to amino acid metabolism in a balanced approach.
    CONCLUSIONS: According to the findings of our study, ChatGPT may not be considered an accurate information provider for application in medical education to improve learning and assessment. However, our study was limited by its small sample size (10 clinical case vignettes) and the use of the publicly available version of ChatGPT (version 3.5). Although artificial intelligence (AI) has the capability to transform medical education, we emphasize that data produced by such AI systems must be validated for correctness and dependability before being implemented in practice.
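    The two-attempt scoring described above reduces to comparing each attempt against an answer key and against the other attempt. Below is a minimal sketch; the key and answer vectors are hypothetical, chosen only to mirror the reported 4/10 and 6/10 correct counts.

```python
# Score two ChatGPT attempts against an answer key and measure test-retest
# consistency. All answer vectors are hypothetical ("X" marks a wrong option).
def score_attempts(key, attempt1, attempt2):
    correct1 = sum(k == a for k, a in zip(key, attempt1))
    correct2 = sum(k == a for k, a in zip(key, attempt2))
    consistent = sum(a == b for a, b in zip(attempt1, attempt2))
    n = len(key)
    return {"attempt1_correct": f"{correct1}/{n}",
            "attempt2_correct": f"{correct2}/{n}",
            "same_answer_both_attempts": f"{consistent}/{n}"}

key = list("ABCDABCDAB")           # hypothetical correct options for 10 vignettes
attempt1 = list("ABXXAXXXXB")      # 4/10 correct, as reported for attempt 1
attempt2 = list("ABCDAXCXXX")      # 6/10 correct, as reported for attempt 2
print(score_attempts(key, attempt1, attempt2))
```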

  • Article Type: Journal Article
    No abstract available.

  • Article Type: Journal Article
    BACKGROUND: With the increasing integration of artificial intelligence (AI) in health care, AI chatbots like ChatGPT-4 are being used to deliver health information.
    OBJECTIVE: This study aimed to assess the capability of ChatGPT-4 in answering common questions related to abdominoplasty, evaluating its potential as an adjunctive tool in patient education and preoperative consultation.
    METHODS: A variety of common questions about abdominoplasty were submitted to ChatGPT-4. These questions were sourced from a question list provided by the American Society of Plastic Surgery to ensure their relevance and comprehensiveness. An experienced plastic surgeon meticulously evaluated the responses generated by ChatGPT-4 in terms of informational depth, response articulation, and competency to determine the proficiency of the AI in providing patient-centered information.
    RESULTS: The study showed that ChatGPT-4 can give clear answers, making it useful for answering common queries. However, it struggled with personalized advice and sometimes provided incorrect or outdated references. Overall, ChatGPT-4 can effectively share abdominoplasty information, which may help patients better understand the procedure. Despite these positive findings, the AI needs more refinement, especially in providing personalized and accurate information, to fully meet patient education needs in plastic surgery.
    CONCLUSIONS: Although ChatGPT-4 shows promise as a resource for patient education, continuous improvements and rigorous checks are essential for its beneficial integration into healthcare settings. The study emphasizes the need for further research, particularly focused on improving the personalization and accuracy of AI responses.
    METHODS: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.

  • Article Type: Journal Article
    We evaluated whether the artificial intelligence chatbot ChatGPT can adequately answer patient questions related to [18F]FDG PET/CT in common clinical indications before and after scanning. Methods: Thirteen questions regarding [18F]FDG PET/CT were submitted to ChatGPT. ChatGPT was also asked to explain 6 PET/CT reports (lung cancer, Hodgkin lymphoma) and answer 6 follow-up questions (e.g., on tumor stage or recommended treatment). To be rated "useful" or "appropriate," a response had to be adequate by the standards of the nuclear medicine staff. Inconsistency was assessed by regenerating responses. Results: Responses were rated "appropriate" for 92% of 25 tasks and "useful" for 96%. Considerable inconsistencies were found between regenerated responses for 16% of tasks. Responses to 83% of sensitive questions (e.g., staging/treatment options) were rated "empathetic." Conclusion: ChatGPT might adequately substitute for advice given to patients by nuclear medicine staff in the investigated settings. Improving the consistency of ChatGPT would further increase reliability.
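    The inconsistency assessment described above (regenerating responses and comparing them) can be sketched as follows. generate_response is a hypothetical stand-in for a chat-model API call, and the difflib-based similarity measure and threshold are assumptions, not the study's method.

```python
# Flag "considerable inconsistency" by regenerating a response several times
# and checking pairwise similarity. generate_response is a hypothetical
# stand-in for a chat-model API call; the 0.8 threshold is an assumption.
from difflib import SequenceMatcher
from itertools import combinations

def generate_response(prompt: str) -> str:
    raise NotImplementedError("stand-in for a chat-model API call")

def is_consistent(prompt: str, n: int = 3, threshold: float = 0.8) -> bool:
    responses = [generate_response(prompt) for _ in range(n)]
    ratios = [SequenceMatcher(None, a, b).ratio()
              for a, b in combinations(responses, 2)]
    return min(ratios) >= threshold  # False suggests considerable inconsistency
```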

  • Article Type: Journal Article
    BACKGROUND: The prevalence of child and adolescent mental health issues is increasing faster than the number of services available, leading to a shortfall. Mental health chatbots are a highly scalable method to address this gap. Manage Your Life Online (MYLO) is an artificially intelligent chatbot that emulates the method of levels therapy. Method of levels is a therapy that uses curious questioning to support the sustained awareness and exploration of current problems.
    OBJECTIVE: This study aimed to assess the feasibility and acceptability of a co-designed interface for MYLO in young people aged 16 to 24 years with mental health problems.
    METHODS: An iterative co-design phase occurred over 4 months, in which feedback was elicited from a group of young people (n=7) with lived experiences of mental health issues. This resulted in the development of a progressive web application version of MYLO that could be used on mobile phones. We conducted a case series to assess the feasibility and acceptability of MYLO in 13 young people over 2 weeks. During this time, the participants tested MYLO and completed surveys including clinical outcomes and acceptability measures. We then conducted focus groups and interviews and used thematic analysis to obtain feedback on MYLO and identify recommendations for further improvements.
    RESULTS: Most participants were positive about their experience of using MYLO and would recommend MYLO to others. The participants enjoyed the simplicity of the interface, found it easy to use, and rated it as acceptable using the System Usability Scale. Inspection of the use data found evidence that MYLO can learn and adapt its questioning in response to user input. We found a large effect size for the decrease in participants' problem-related distress and a medium effect size for the increase in their self-reported tendency to resolve goal conflicts (the proposed mechanism of change) in the testing phase. Some patients also experienced a reliable change in their clinical outcome measures over the 2 weeks.
    CONCLUSIONS: We established the feasibility and acceptability of MYLO. The initial outcomes suggest that MYLO has the potential to support the mental health of young people and help them resolve their own problems. We aim to establish whether the use of MYLO leads to a meaningful reduction in participants\' symptoms of depression and anxiety and whether these are maintained over time by conducting a randomized controlled evaluation trial.
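    The abstract reports effect sizes without naming the statistic; Cohen's d on paired pre/post scores is one standard reading. A minimal sketch with illustrative (not actual) distress scores:

```python
# Cohen's d for paired pre/post scores: an assumed reconstruction of the
# effect-size calculation; the distress values below are illustrative.
import statistics

def cohens_d_paired(pre: list[float], post: list[float]) -> float:
    diffs = [b - a for a, b in zip(pre, post)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

pre_distress = [7, 8, 6, 9, 7, 8]    # hypothetical problem-related distress
post_distress = [4, 5, 4, 6, 5, 4]
print(f"d = {cohens_d_paired(pre_distress, post_distress):.2f}")  # negative d = decrease
```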

  • Article Type: Journal Article
    The world is moving fast toward digital transformation as we live in the artificial intelligence (AI) era. The COVID-19 pandemic accelerates this movement. Chatbots were used successfully to help researchers collect data for research purposes.
    To implement a chatbot on the Facebook platform to establish connections with health care professionals who had subscribed to the chatbot, provide medical and pharmaceutical educational content, and collect data for online pharmacy research projects. Facebook was chosen because it has billions of daily active users, which offers a massive potential audience for research projects.
    The chatbot was successfully implemented on the Facebook platform following 3 consecutive steps. Firstly, the ChatPion script was installed on the Pharmind website to establish the chatbot system. Secondly, the PharmindBot application was developed on Facebook. Finally, the PharmindBot app was integrated with the chatbot system.
    The chatbot responds automatically to public comments and sends subscribers private responses using AI. The chatbot collected quantitative and qualitative data with minimal costs.
    The chatbot's auto-reply function was tested using a post published on a specific page on Facebook. Testers were asked to leave predefined keywords to test its functionality. The chatbot's ability to collect and save data was tested by asking testers to fill out an online survey within Facebook Messenger for quantitative data and answer predefined questions for qualitative data.
    The chatbot was tested on 1000 subscribers who interacted with it. Almost all testers (n = 990, 99%) obtained a successful private reply from the chatbot after sending a predefined keyword. Also, the chatbot replied privately to almost all public comments (n = 985, 98.5%) which helped to increase the organic reach and to establish a connection with the chatbot subscribers. No missing data were found when the chatbot was used to collect quantitative and qualitative data.
    The chatbot reached thousands of health care professionals and provided them with automated responses. At a low cost, the chatbot was able to gather both qualitative and quantitative data without relying on Facebook ads to reach the intended audience. The data collection was efficient and effective. The use of chatbots by pharmacy and medical researchers will help make online studies more feasible, using AI to advance health care research.
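    The keyword-triggered flow in the methods (a public comment containing a predefined keyword triggers an automatic private reply) can be sketched as a simple dispatcher. This is a generic illustration, not the actual ChatPion/PharmindBot code; the keywords and reply texts are hypothetical.

```python
# Generic keyword-to-private-reply dispatcher, illustrating the auto-reply
# flow described above. Keywords and reply texts are hypothetical.
KEYWORD_REPLIES = {
    "survey": "Thanks for commenting! Here is the private survey link: ...",
    "education": "Here is this week's medical and pharmacy education content: ...",
}

def handle_public_comment(comment: str) -> str | None:
    """Return the private reply for the first matching keyword, else None."""
    text = comment.lower()
    for keyword, reply in KEYWORD_REPLIES.items():
        if keyword in text:
            return reply  # would be sent as a private message to the commenter
    return None           # no keyword matched; no automatic reply

print(handle_public_comment("Please send me the survey"))
```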