Patient Education

  • Article type: Journal Article
    BACKGROUND: Childhood mental health issues concern a large number of children worldwide and represent a major public health challenge. The lack of knowledge among parents and caregivers in this area hinders effective management. Empowering families enhances their ability to address their children's difficulties, boosts health literacy, and promotes positive changes. However, seeking reliable mental health information remains challenging due to fear, stigma, and mistrust of the sources of information.
    OBJECTIVE: This study evaluates the acceptance of a website, CléPsy, designed to provide reliable information and practical tools for families concerned about child mental health and parenting.
    METHODS: This study examines user characteristics and assesses ease of use, usefulness, trustworthiness, and attitude toward using the website. Platform users were given access to a self-administered questionnaire by means of mailing lists, social networks, and posters between May and July 2022.
    RESULTS: Findings indicate that the vast majority of the 317 respondents agreed or somewhat agreed that the website made discussions about mental health easier with professionals (n=264, 83.3%) or with their relatives (n=260, 82.1%). According to the ANOVA, there was a significant association between educational level and perceived trust (F6=3.03; P=.007) and between frequency of use and perceived usefulness (F2=4.85; P=.008).
    CONCLUSIONS: The study underlines the importance of user experience and design in web-based health information dissemination and emphasizes the need for accessible and evidence-based information. Although the study has limitations, it provides preliminary support for the acceptability and usefulness of the website. Future efforts should focus on inclusive co-construction with users and addressing the information needs of families from diverse cultural and educational backgrounds.
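    As a note on the analysis: the reported effects rest on one-way ANOVA over Likert-type scores grouped by respondent characteristics. A minimal sketch of that kind of test with SciPy, using invented group labels and scores rather than the study's data:

```python
# Minimal sketch of a one-way ANOVA like the one reported above (educational
# level vs. perceived trust). All group labels and Likert scores below are
# invented for illustration; they are not the CléPsy survey data.
from scipy.stats import f_oneway

trust_by_education = {
    "secondary": [3, 4, 2, 3, 3, 4],
    "bachelor":  [4, 4, 5, 3, 4, 4],
    "master":    [5, 4, 4, 5, 5, 4],
}

f_stat, p_value = f_oneway(*trust_by_education.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```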

  • Article type: Journal Article
    Purpose: To evaluate the readability, accountability, accessibility, and source of online patient education materials for treatment of age-related macular degeneration (AMD) and to quantify public interest in Syfovre and geographic atrophy after US Food and Drug Administration (FDA) approval. Methods: Websites were classified into 4 categories by information source. Readability was assessed using 5 validated readability indices. Accountability was assessed using 4 benchmarks of the Journal of the American Medical Association (JAMA). Accessibility was evaluated using 3 established criteria. The Google Trends tool was used to evaluate temporal trends in public interest in "Syfovre" and "geographic atrophy" in the months after FDA approval. Results: Of 100 websites analyzed, 22% were written below the recommended sixth-grade reading level. The mean (±SD) grade level of analyzed articles was 9.76 ± 3.35. Websites averaged 1.40 ± 1.39 (of 4) JAMA accountability metrics. The majority of articles (67%) were from private practice/independent organizations. A significant increase in the public interest in the terms "Syfovre" and "geographic atrophy" after FDA approval was found with the Google Trends tool (P < .001). Conclusions: Patient education materials related to AMD treatment are often written at inappropriate reading levels and lack established accountability and accessibility metrics. Articles from national organizations ranked highest on accessibility metrics but were less visible on a Google search, suggesting the need for visibility-enhancing measures. Patient education materials related to the term "Syfovre" had the highest average reading level and low accountability, suggesting the need to modify resources to best address the needs of an increasingly curious public.
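    The accountability assessment counts how many of the four JAMA benchmarks (authorship, attribution, disclosure, and currency) each website meets, yielding the 0-4 score reported above. A small checklist-style sketch of that tally, with a hypothetical site record:

```python
# Checklist-style tally of the four JAMA accountability benchmarks
# (authorship, attribution, disclosure, currency). The example site record
# is hypothetical, not one of the 100 websites analyzed in the study.
JAMA_BENCHMARKS = ("authorship", "attribution", "disclosure", "currency")

def jama_score(site: dict) -> int:
    """Return how many of the four benchmarks a website satisfies (0-4)."""
    return sum(1 for benchmark in JAMA_BENCHMARKS if site.get(benchmark, False))

example_site = {
    "authorship": True,    # authors and their credentials are listed
    "attribution": False,  # no references or sources are cited
    "disclosure": True,    # ownership/sponsorship is disclosed
    "currency": False,     # no posting or last-updated date
}
print(f"JAMA accountability score: {jama_score(example_site)} of 4")
```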

  • Article type: Journal Article
    BACKGROUND: We queried ChatGPT (OpenAI) and Google Assistant about amblyopia and compared their answers with the keywords found on the American Association for Pediatric Ophthalmology and Strabismus (AAPOS) website, specifically the section on amblyopia. Out of the 26 keywords chosen from the website, ChatGPT included 11 (42%) in its responses, while Google included 8 (31%).
    OBJECTIVE: Our study investigated the adherence of ChatGPT-3.5 and Google Assistant to the guidelines of the AAPOS for patient education on amblyopia.
    METHODS: ChatGPT-3.5 was used. The four questions taken from the AAPOS website, specifically its glossary section for amblyopia, are as follows: (1) What is amblyopia? (2) What causes amblyopia? (3) How is amblyopia treated? (4) What happens if amblyopia is untreated? Approved and selected by ophthalmologists (GW and DL), the keywords from AAPOS were words or phrases that were deemed significant for the education of patients with amblyopia. The "Flesch-Kincaid Grade Level" formula, approved by the US Department of Education, was used to evaluate the reading comprehension level for the responses from ChatGPT, Google Assistant, and AAPOS.
    RESULTS: In their responses, ChatGPT did not mention the term "ophthalmologist," whereas Google Assistant and AAPOS mentioned the term once and twice, respectively. ChatGPT did, however, use the term "eye doctors" once. According to the Flesch-Kincaid test, the average reading level of AAPOS was 11.4 (SD 2.1; the lowest level) while that of Google was 13.1 (SD 4.8; the highest required reading level), also showing the greatest variation in grade level in its responses. ChatGPT's answers, on average, scored at a 12.4 (SD 1.1) grade level. They were all similar in terms of reading difficulty. For the keywords, out of the 4 responses, ChatGPT used 42% (11/26) of the keywords, whereas Google Assistant used 31% (8/26).
    CONCLUSIONS: ChatGPT trains on texts and phrases and generates new sentences, while Google Assistant automatically copies website links. As ophthalmologists, we should consider including \"see an ophthalmologist\" on our websites and journals. While ChatGPT is here to stay, we, as physicians, need to monitor its answers.
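    The keyword analysis reduces to a simple coverage ratio: the share of the curated AAPOS keywords that appear in a chatbot's combined answers. A hedged sketch of that calculation with placeholder keywords and response text (not the study's materials):

```python
# Sketch of the keyword-coverage ratio described above: the share of curated
# AAPOS keywords that appear in a chatbot's combined responses. The keyword
# list and response text here are placeholders, not the study's materials.
def keyword_coverage(keywords, response_text):
    text = response_text.lower()
    hits = [kw for kw in keywords if kw.lower() in text]
    return len(hits) / len(keywords)

aapos_keywords = ["lazy eye", "patching", "ophthalmologist", "visual acuity"]
chatbot_answer = (
    "Amblyopia, sometimes called lazy eye, develops in childhood and is "
    "often treated with glasses or patching of the stronger eye."
)
print(f"Coverage: {keyword_coverage(aapos_keywords, chatbot_answer):.0%}")  # 50%
```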

  • Article type: Journal Article
    BACKGROUND: Artificial intelligence (AI) chatbots, such as ChatGPT, have made significant progress. These chatbots, particularly popular among health care professionals and patients, are transforming patient education and disease experience with personalized information. Accurate, timely patient education is crucial for informed decision-making, especially regarding prostate-specific antigen screening and treatment options. However, the accuracy and reliability of AI chatbots' medical information must be rigorously evaluated. Studies testing ChatGPT's knowledge of prostate cancer are emerging, but there is a need for ongoing evaluation to ensure the quality and safety of information provided to patients.
    OBJECTIVE: This study aims to evaluate the quality, accuracy, and readability of ChatGPT-4's responses to common prostate cancer questions posed by patients.
    METHODS: Overall, 8 questions were formulated with an inductive approach based on information topics in peer-reviewed literature and Google Trends data. Adapted versions of the Patient Education Materials Assessment Tool for AI (PEMAT-AI), Global Quality Score, and DISCERN-AI tools were used by 4 independent reviewers to assess the quality of the AI responses. The 8 AI outputs were judged by 7 expert urologists, using an assessment framework developed to assess accuracy, safety, appropriateness, actionability, and effectiveness. The AI responses' readability was assessed using established algorithms (Flesch Reading Ease score, Gunning Fog Index, Flesch-Kincaid Grade Level, Coleman-Liau Index, and Simple Measure of Gobbledygook [SMOG] Index). A brief tool (Reference Assessment AI [REF-AI]) was developed to analyze the references provided by AI outputs, assessing for reference hallucination, relevance, and quality of references.
    RESULTS: The PEMAT-AI understandability score was very good (mean 79.44%, SD 10.44%), the DISCERN-AI rating was scored as "good" quality (mean 13.88, SD 0.93), and the Global Quality Score was high (mean 4.46/5, SD 0.50). The Natural Language Assessment Tool for AI had a pooled mean accuracy of 3.96 (SD 0.91), safety of 4.32 (SD 0.86), appropriateness of 4.45 (SD 0.81), actionability of 4.05 (SD 1.15), and effectiveness of 4.09 (SD 0.98). The readability algorithm consensus was "difficult to read" (Flesch Reading Ease score mean 45.97, SD 8.69; Gunning Fog Index mean 14.55, SD 4.79), averaging an 11th-grade reading level, equivalent to 15- to 17-year-olds (Flesch-Kincaid Grade Level mean 12.12, SD 4.34; Coleman-Liau Index mean 12.75, SD 1.98; SMOG Index mean 11.06, SD 3.20). REF-AI identified 2 reference hallucinations, while the majority (28/30, 93%) of references appropriately supplemented the text. Most references (26/30, 86%) were from reputable government organizations, while a handful were direct citations from scientific literature.
    CONCLUSIONS: Our analysis found that ChatGPT-4 provides generally good responses to common prostate cancer queries, making it a potentially valuable tool for patient education in prostate cancer care. Objective quality assessment tools indicated that the natural language processing outputs were generally reliable and appropriate, but there is room for improvement.
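    A rough sketch of how the five readability indices listed in the Results can be computed, assuming the third-party Python textstat package (the study does not state which implementation it used); the response text is a placeholder:

```python
# Rough sketch of computing the five readability indices named above with
# the third-party textstat package (an assumption; the study does not say
# which implementation was used). The response text is a placeholder.
import textstat

response = (
    "Prostate-specific antigen (PSA) screening should be discussed with your "
    "doctor. Together you can weigh the potential benefits of early detection "
    "against the possible harms of overdiagnosis and unnecessary treatment."
)

scores = {
    "Flesch Reading Ease": textstat.flesch_reading_ease(response),
    "Gunning Fog Index": textstat.gunning_fog(response),
    "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade(response),
    "Coleman-Liau Index": textstat.coleman_liau_index(response),
    "SMOG Index": textstat.smog_index(response),
}
for name, value in scores.items():
    print(f"{name}: {value:.1f}")
```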

  • Article type: Journal Article
    OBJECTIVE: The purpose of this study was to compare the knowledge and practices of specialist and experienced nonspecialist physical therapists in providing patient education about physical activity to patients with heart failure (HF).
    METHODS: Responses to a nationwide anonymous online survey were used to compare specialist and experienced nonspecialist physical therapists on knowledge and frequency of providing physical activity-related education to patients hospitalized with acutely decompensated HF. Responses to survey items were scored on 5-point scales ranging from "Strongly agree" to "Strongly disagree" or "Always" to "Never." Mann-Whitney U statistics were used to compare specialist and experienced nonspecialist responses, and Wilcoxon signed-ranks tests were used to examine the gap between knowledge and practice.
    RESULTS: Twenty-seven specialists and 43 experienced nonspecialists completed the survey. Both groups were similar in age and in experience treating patients hospitalized with acutely decompensated HF. Both groups "strongly agreed" that they had the required knowledge and skills to educate patients with HF on the physical activity topics. However, specialists more often than experienced nonspecialists provided education on topics, such as how to monitor vital signs during physical activity ("most of the time" vs. "about half of the time"), that promoted patient confidence and safety during exercise. Specialists demonstrated a smaller gap between knowledge and frequency of providing patient education than experienced nonspecialists on three of the four patient education topics.
    CONCLUSIONS: Specialist physical therapists treating patients with HF in the inpatient hospital setting provided patient education on physical activity at a level more closely matching their skills and the clinical practice guideline than did experienced nonspecialists. Physical therapy clinical specialists practicing in the inpatient hospital setting may improve patient outcomes and lower costs to the health care system by improving physical activity adherence and thereby may reduce avoidable hospital readmissions.
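    A brief illustration of the two nonparametric tests named in the Methods, using SciPy and invented Likert responses rather than the survey data:

```python
# Illustration of the two tests described above, with invented Likert-style
# responses (1 = "Never" ... 5 = "Always"); these are not the survey data.
from scipy.stats import mannwhitneyu, wilcoxon

# Between-group comparison: frequency of providing a given education topic
specialists_freq    = [5, 4, 5, 5, 4, 5, 4]
nonspecialists_freq = [3, 4, 3, 2, 4, 3, 3]
u_stat, p_between = mannwhitneyu(specialists_freq, nonspecialists_freq)

# Within-group knowledge-practice gap: paired knowledge vs. practice ratings
knowledge = [5, 5, 4, 5, 5, 4, 5]
practice  = [4, 3, 3, 4, 2, 3, 4]
w_stat, p_gap = wilcoxon(knowledge, practice)

print(f"Mann-Whitney U: p = {p_between:.3f}")
print(f"Wilcoxon signed-rank: p = {p_gap:.3f}")
```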

  • Article type: Journal Article
    BACKGROUND: Surgical site infection (SSI) is a common and costly complication in spinal surgery. Identifying risk factors and preventive strategies is crucial for reducing SSIs. GPT-4 has evolved from a simple text-based tool to a sophisticated multimodal data expert, invaluable for clinicians. This study explored GPT-4's applications in SSI management across various clinical scenarios.
    METHODS: GPT-4 was employed in various clinical scenarios related to SSIs in spinal surgery. Researchers designed specific questions for GPT-4 to generate tailored responses. Six evaluators assessed these responses for logic and accuracy using a 5-point Likert scale. Inter-rater consistency was measured with Fleiss' kappa, and radar charts visualized GPT-4's performance.
    RESULTS: The inter-rater consistency, measured by Fleiss' kappa, ranged from 0.62 to 0.83. The overall average scores for logic and accuracy were 24.27±0.4 and 24.46±0.25 on a 5-point Likert scale. Radar charts showed GPT-4's consistently high performance across various criteria. GPT-4 demonstrated high proficiency in creating personalized treatment plans tailored to diverse clinical patient records and offered interactive patient education. It significantly improved SSI management strategies and infection prediction models, and it identified emerging research trends. However, it had limitations in fine-tuning antibiotic treatments and customizing patient education materials.
    CONCLUSIONS: GPT-4 represents a significant advancement in managing SSIs in spinal surgery, promoting patient-centered care and precision medicine. Despite some limitations in antibiotic customization and patient education, GPT-4's continuous learning, attention to data privacy and security, collaboration with healthcare professionals, and patient acceptance of AI recommendations suggest its potential to revolutionize SSI management, requiring further development and clinical integration.
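    A minimal sketch of the inter-rater agreement statistic named above, Fleiss' kappa, computed with statsmodels on a fabricated rating matrix (not the evaluators' data):

```python
# Minimal sketch of the agreement statistic named above (Fleiss' kappa),
# using statsmodels. The 4 x 6 matrix of Likert ratings (rows = GPT-4
# responses, columns = the six evaluators) is fabricated for illustration.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([
    [5, 5, 4, 5, 5, 4],
    [4, 4, 4, 5, 4, 4],
    [5, 4, 5, 5, 5, 5],
    [3, 4, 4, 3, 4, 4],
])

# aggregate_raters turns rater-by-subject codes into per-subject category counts
table, _categories = aggregate_raters(ratings)
print(f"Fleiss' kappa = {fleiss_kappa(table):.2f}")
```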

  • Article type: Journal Article
    BACKGROUND: Patients access relevant information concerning their orthopaedic surgery resources through multiple information channels before presenting for clinical treatment. Recently, artificial intelligence (AI)-powered chatbots have become another source of information for patients. The currently developed AI chat technology ChatGPT (OpenAI LP) is an application for such purposes, and it has been rapidly gaining popularity, including for patient education. This study sought to evaluate whether ChatGPT can correctly answer frequently asked questions (FAQ) regarding periprosthetic joint infection (PJI).
    METHODS: Twelve FAQs about PJI after hip and knee arthroplasty were identified from the websites of fifteen international clinical expert centres. ChatGPT was confronted with these questions, and its responses were analysed for their accuracy using an evidence-based approach by a multidisciplinary team. Responses were categorised in four groups: (1) Excellent response that did not require additional improvement; (2) Satisfactory responses that required a small amount of improvement; (3) Satisfactory responses that required moderate improvement; and (4) Unsatisfactory responses that required a large amount of improvement.
    RESULTS: From the analysis of the responses given by the chatbot, no reply received an 'unsatisfactory' rating; one did not require any correction; and the majority of the responses required low (7 out of 12) or moderate (4 out of 12) clarification. Although a few responses required minimal clarification, the chatbot responses were generally unbiased and evidence-based, even when asked controversial questions.
    CONCLUSIONS: The AI chatbot ChatGPT was able to effectively answer the FAQs of patients seeking information around PJI diagnosis and treatment. The given information was also written in a manner that can be assumed to be understandable by patients. The chatbot could be a valuable clinical tool for patient education and understanding around PJI treatment in the future. Further studies should evaluate its use and acceptance by patients with PJI.

  • Article type: Journal Article
    BACKGROUND: End-of-life care (EOLC) is a critical aspect of healthcare, yet accessing reliable information remains challenging, particularly in culturally diverse contexts like India.
    OBJECTIVE: This study investigates the potential of artificial intelligence (AI) in addressing the informational gap by analyzing patient information leaflets (PILs) generated by AI chatbots on EOLC.
    METHODS: Using a comparative research design, PILs generated by ChatGPT and Google Gemini were evaluated for readability, sentiment, accuracy, completeness, and suitability. Readability was assessed using established metrics, sentiment analysis determined emotional tone, accuracy and completeness were rated by subject experts, and suitability was evaluated using the Patient Education Materials Assessment Tool (PEMAT).
    RESULTS: Google Gemini PILs exhibited superior readability and actionability compared to ChatGPT PILs. Both conveyed positive sentiments and high levels of accuracy and completeness, with Google Gemini PILs showing slightly lower accuracy scores.
    CONCLUSIONS: The findings highlight the promising role of AI in enhancing patient education in EOLC, with implications for improving care outcomes and promoting informed decision-making in diverse cultural settings. Ongoing refinement and innovation in AI-driven patient education strategies are needed to ensure compassionate and culturally sensitive EOLC.
    CITATION: Gondode PG, Khanna P, Sharma P, Duggal S, Garg N. End-of-life Care Patient Information Leaflets-A Comparative Evaluation of Artificial Intelligence-generated Content for Readability, Sentiment, Accuracy, Completeness, and Suitability: ChatGPT vs Google Gemini. Indian J Crit Care Med 2024;28(6):561-568.
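    PEMAT suitability is typically reported as a percentage: the number of items rated "Agree" divided by the number of applicable items, times 100, computed separately for understandability and actionability. A small sketch of that arithmetic with hypothetical item ratings (not the study's scoring sheets):

```python
# Sketch of how a PEMAT percentage score is typically derived: items rated
# "Agree" divided by applicable items, times 100, computed separately for
# understandability and actionability. The item ratings are hypothetical.
def pemat_score(item_ratings):
    """item_ratings: 1 = Agree, 0 = Disagree, None = Not applicable."""
    applicable = [r for r in item_ratings if r is not None]
    return 100 * sum(applicable) / len(applicable)

understandability_items = [1, 1, 0, 1, None, 1, 1, 0, 1, 1]  # 7/9 -> ~78%
actionability_items = [1, 0, 1, None, 1]                     # 3/4 -> 75%

print(f"Understandability: {pemat_score(understandability_items):.0f}%")
print(f"Actionability: {pemat_score(actionability_items):.0f}%")
```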

  • Article type: Journal Article
    BACKGROUND: Smartphone applications aimed at patients with gastroesophageal reflux disease (GERD) have been downloaded more than 100,000 times, yet no systematic assessment of their quality has been completed. This study aimed to objectively assess the quality of GERD smartphone applications for patient education and disease management.
    METHODS: The Apple App Store and Google Play Store were systematically searched for relevant applications. Two independent reviewers performed the application screening and eligibility assessment. Included applications were graded using the validated Mobile Application Rating Scale, which encompasses 4 domains (engagement, functionality, aesthetics, and information) as well as an overall application quality score. The associations between overall application quality, user ratings and download numbers were evaluated.
    RESULTS: Of the 4816 unique applications identified, 46 met inclusion criteria (patient education = 37, disease management = 9). Mean overall application quality score was 3.02 ± 0.40 out of 5 ("acceptable"), with 61% (28/46) rated as "poor" (score 2.0-2.9). Applications scored highest for aesthetics (3.24 ± 0.48) and functionality (3.88 ± 0.37) and lowest for information (2.58 ± 0.64) and engagement (2.39 ± 0.65). Disease management applications were of significantly higher quality than education-focused applications (3.59 ± 0.38 vs 2.88 ± 0.26, P < .001). There was no correlation between graded quality and either user ratings or the number of downloads.
    CONCLUSIONS: While numerous smartphone applications exist to support patients with GERD, their quality is variable. Patient education applications are of particularly low quality. Our findings can help to inform the selection of applications by patients and guide clinicians' recommendations. This study also highlights the need for higher-quality, evidence-informed applications aimed at GERD patient education.
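    The reported lack of association between graded quality and user ratings or downloads is the kind of result a rank correlation would quantify. An illustrative sketch with invented app data (an assumption; the study does not state which correlation statistic was used):

```python
# Illustrative check of the association reported above (graded MARS quality
# vs. user ratings and downloads) using a rank correlation. The app data are
# invented placeholders; the study does not state which statistic it used.
from scipy.stats import spearmanr

mars_quality = [3.6, 2.8, 3.1, 2.5, 3.9, 2.9]
user_rating  = [4.5, 4.7, 3.9, 4.6, 4.2, 4.8]
downloads    = [50_000, 10_000, 5_000, 100_000, 1_000, 20_000]

rho_rating, p_rating = spearmanr(mars_quality, user_rating)
rho_downloads, p_downloads = spearmanr(mars_quality, downloads)
print(f"Quality vs user rating: rho = {rho_rating:.2f} (p = {p_rating:.2f})")
print(f"Quality vs downloads:  rho = {rho_downloads:.2f} (p = {p_downloads:.2f})")
```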

  • Article type: Journal Article
    BACKGROUND: Artificial intelligence (AI) and large language models (LLMs) transform how patients inform themselves. LLMs offer potential as educational tools, but their quality depends upon the information generated. Current literature examining AI as an informational tool in dermatology has been limited in evaluating AI's multifaceted roles and diversity of opinions. Here, we evaluate LLMs as a patient-educational tool for Mohs micrographic surgery (MMS) in and out of the clinic utilizing an international expert panel.
    METHODS: The most common patient MMS questions were extracted from Google and transposed into two LLMs and Google's search engine. Fifteen MMS surgeons evaluated the generated responses, examining their appropriateness as a patient-facing informational platform, sufficiency of response in a clinical environment, and accuracy of content generated. Validated scales were employed to assess the comprehensibility of each response.
    RESULTS: The majority of reviewers deemed all LLM responses appropriate. Of the responses, 75% were rated as mostly accurate or higher. ChatGPT had the highest mean accuracy. The majority of the panel deemed 33% of responses sufficient for clinical practice. The mean comprehensibility scores for all platforms indicated a required 10th-grade reading level.
    CONCLUSIONS: LLM-generated responses were rated as appropriate patient informational sources and mostly accurate in their content. However, these platforms may not provide sufficient information to function in a clinical environment, and complex comprehensibility may represent a barrier to utilization. As the popularity of these platforms increases, it is important for dermatologists to be aware of these limitations.