patient education material

  • Article Type: Journal Article
    OBJECTIVE: Readability of patient education materials is of utmost importance to ensure understandability and dissemination of health care information in uro-oncology. We aimed to investigate the readability of the official patient education materials of the European Association of Urology (EAU) and the American Urological Association (AUA).
    METHODS: Patient education materials for prostate, bladder, kidney, testicular, penile, and urethral cancers were retrieved from the respective organizations. Readability was assessed via the WebFX online tool for the Flesch Kincaid Reading Ease Score (FRES) and for reading grade levels by the Flesch Kincaid Grade Level (FKGL), Gunning Fog Score (GFS), Smog Index (SI), Coleman Liau Index (CLI), and Automated Readability Index (ARI). Layperson readability was defined as a FRES of ≥70 with the other readability indexes <7, according to European Union recommendations. This study assessed only objective readability and no other metrics such as understandability.
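The two Flesch measures named above are closed-form functions of word, sentence, and syllable counts; a minimal Python sketch of them and of the study's layperson threshold (syllable counting itself is assumed to be done elsewhere):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease Score: higher means easier (>= 70 suits laypersons)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: US school grade needed to follow the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def meets_layperson_threshold(words: int, sentences: int, syllables: int) -> bool:
    # The EU-based definition used in the study: FRES >= 70, grade level < 7.
    return (flesch_reading_ease(words, sentences, syllables) >= 70
            and flesch_kincaid_grade(words, sentences, syllables) < 7)
```

Short sentences and few syllables per word raise the FRES: a text averaging 10 words per sentence and 1.3 syllables per word scores roughly 87, well inside the layperson range.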
    RESULTS: Most patient education materials failed to meet the recommended threshold for laypersons. The mean readability for EAU patient education material was as follows: FRES 50.9 (standard error [SE]: 3.0), with FKGL, GFS, SI, CLI, and ARI all scoring ≥7. The mean readability for AUA patient material was as follows: FRES 64.0 (SE: 1.4), with FKGL, GFS, SI, and ARI all scoring ≥7. Only 13 out of 70 (18.6%) patient education materials' paragraphs met the readability requirements. The mean readability of bladder cancer patient education materials was the lowest, with a FRES of 36.7 (SE: 4.1).
    CONCLUSIONS: Patient education materials from leading urological associations reveal readability levels beyond the recommended thresholds for laypersons and may not be understood easily by patients. There is a future need for more patient-friendly reading materials.
    LAY SUMMARY: This study checked whether health information about different cancers was easy to read. Most of it was too hard for patients to understand.

  • Article Type: Journal Article
    BACKGROUND: Dermatologic patient education materials (PEMs) are often written above the national average seventh- to eighth-grade reading level. ChatGPT-3.5, GPT-4, DermGPT, and DocsGPT are large language models (LLMs) that are responsive to user prompts. Our project assesses their use in generating dermatologic PEMs at specified reading levels.
    OBJECTIVE: This study aims to assess the ability of select LLMs to generate PEMs for common and rare dermatologic conditions at unspecified and specified reading levels. Further, the study aims to assess the preservation of meaning across such LLM-generated PEMs, as assessed by dermatology resident trainees.
    METHODS: The Flesch-Kincaid reading level (FKRL) of current American Academy of Dermatology PEMs was evaluated for 4 common (atopic dermatitis, acne vulgaris, psoriasis, and herpes zoster) and 4 rare (epidermolysis bullosa, bullous pemphigoid, lamellar ichthyosis, and lichen planus) dermatologic conditions. We prompted ChatGPT-3.5, GPT-4, DermGPT, and DocsGPT to "Create a patient education handout about [condition] at a [FKRL]" to iteratively generate 10 PEMs per condition at unspecified, fifth-grade, and seventh-grade FKRLs, evaluated with Microsoft Word readability statistics. The preservation of meaning across LLMs was assessed by 2 dermatology resident trainees.
    RESULTS: The current American Academy of Dermatology PEMs had an average (SD) FKRL of 9.35 (1.26) and 9.50 (2.3) for common and rare diseases, respectively. For common diseases, the FKRLs of LLM-produced PEMs ranged between 9.8 and 11.21 (unspecified prompt), between 4.22 and 7.43 (fifth-grade prompt), and between 5.98 and 7.28 (seventh-grade prompt). For rare diseases, the FKRLs of LLM-produced PEMs ranged between 9.85 and 11.45 (unspecified prompt), between 4.22 and 7.43 (fifth-grade prompt), and between 5.98 and 7.28 (seventh-grade prompt). At the fifth-grade reading level, GPT-4 was better at producing PEMs for both common and rare conditions than ChatGPT-3.5 (P=.001 and P=.01, respectively), DermGPT (P<.001 and P=.03, respectively), and DocsGPT (P<.001 and P=.02, respectively). At the seventh-grade reading level, no significant difference was found between ChatGPT-3.5, GPT-4, DocsGPT, or DermGPT in producing PEMs for common conditions (all P>.05); however, for rare conditions, ChatGPT-3.5 and DocsGPT outperformed GPT-4 (P=.003 and P<.001, respectively). The preservation of meaning analysis revealed that for common conditions, DermGPT ranked the highest for overall ease of reading, patient understandability, and accuracy (14.75/15, 98%); for rare conditions, handouts generated by GPT-4 ranked the highest (14.5/15, 97%).
    CONCLUSIONS: GPT-4 appeared to outperform ChatGPT-3.5, DocsGPT, and DermGPT at the fifth-grade FKRL for both common and rare conditions, although both ChatGPT-3.5 and DocsGPT performed better than GPT-4 at the seventh-grade FKRL for rare conditions. LLM-produced PEMs may reliably meet seventh-grade FKRLs for select common and rare dermatologic conditions and are easy to read, understandable for patients, and mostly accurate. LLMs may play a role in enhancing health literacy and disseminating accessible, understandable PEMs in dermatology.

  • Article Type: Journal Article
    There is a need for innovative teaching practices in nursing education due to many factors, such as global changes, the rapid development of technology, the increasing number of students, and the recent pandemic.
    This research was conducted using standardized patients to evaluate the attitudes and skills of senior nursing students toward patient education practices following the implementation of a patient education training program.
    Mixed-methods design.
    The study was conducted at a nursing faculty.
    The sample of the study consisted of 47 senior nursing students.
    The students participating in the study were given a four-hour patient education training that included the preparation of patient education, preparation of materials, and effective presentation.
    A descriptive information form prepared by the researchers, the Patient Education Implementation Scale (PEIS), the Turkish version of the Patient Education Materials Assessment Tool for Printable Materials (PEMATTR-P), and the presentation skill evaluation form (PSEF) were used to collect quantitative data. Semi-structured interview forms were utilized to collect qualitative data. SPSS for Windows v. 25.0 and MAXQDA20 were used for the data analyses. Results with a p value of <0.05 were considered statistically significant.
    The post-test mean PEIS scores of the students increased in the total scale and in all subdimensions. A significant difference was found in the understandability and actionability of patient education materials evaluated with PEMATTR-P (p < 0.05). The mean PSEF score of the students was 85.14 ± 9.25 points. Within the scope of the research, two main themes, namely emotions and efficacy, were determined.
    This study confirms that structured patient education training, including the use of standardized patients, is important for supporting and developing senior nursing students' attitudes and skills toward patient education.

  • Article Type: Journal Article
    OBJECTIVE: This study aimed to review the quality and content of phosphate educational materials used in pediatric chronic kidney disease.
    METHODS: The quality of text-based (TB) pediatric phosphate educational materials was assessed using validated instruments for health literacy demands (Suitability Assessment of Materials [SAM], Patient Education Material Assessment Tool [PEMAT-P]) and readability (Flesch Reading Ease and Flesch-Kincaid Grade Level). Codes were inductively derived to analyse format, appearance, target audience, resource type, and content, aiming for intercoder reliability > 80%. The content was compared to Pediatric Renal Nutrition Taskforce (PRNT) recommendations.
    RESULTS: Sixty-five phosphate educational materials were obtained; 37 were pediatric-focused, including 28 TB. Thirty-two percent of TB materials were directed at caregivers, 25% at children, and 43% were unspecified. Most (75%) included a production date, with 75% produced >2 years ago. The median Flesch Reading Ease test score was 68.2 (interquartile range [IQR] 61.1-75.3) and the Flesch-Kincaid Grade Level was 5.6 (IQR 4.5-7.7). Using the Suitability Assessment of Materials, 54% rated "superior" (≥70), 38% rated "adequate" (40-69), and 8% rated "not suitable" (≤39). Low-scoring materials lacked a summary (12%), lacked cover graphics (35%), or included irrelevant illustrations (50%). PEMAT-P scores were 70% (IQR 50-82) for understandability and 50% (IQR 33-67) for actionability. An intercoder reliability of 87% was achieved. Over half of the limited foods were in agreement with the PRNT (including 89% suggesting avoiding phosphate additives). Recommendations conflicting with the PRNT included reducing legumes and whole grains. Over a third contained inaccuracies, and over two-thirds included no practical advice.
    CONCLUSIONS: TB pediatric phosphate educational materials are pitched at an appropriate level for caregivers, but this may be too high for children under 10 years. The inclusion of relevant illustrations may improve this. Three-quarters of materials scored low for actionability. The advice does not always align with the PRNT, which (together with the inaccuracies reported) could result in conflicting messages to patients and their families.

  • Article Type: Journal Article
    BACKGROUND: The advent of generative artificial intelligence (AI) dialogue platforms and large language models (LLMs) may help facilitate ongoing efforts to improve health literacy. Additionally, recent studies have highlighted inadequate health literacy among patients with cardiac disease. The aim of the present study was to ascertain whether two freely available generative AI dialogue platforms could rewrite online aortic stenosis (AS) patient education materials (PEMs) to meet recommended reading skill levels for the public.
    METHODS: Online PEMs were gathered from a professional cardiothoracic surgical society and academic institutions in the USA. PEMs were then inputted into two AI-powered LLMs, ChatGPT-3.5 and Bard, with the prompt "translate to 5th-grade reading level". Readability of PEMs before and after AI conversion was measured using the validated Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook Index (SMOGI), and Gunning-Fog Index (GFI) scores.
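Of the four measures above, SMOGI and GFI are computed from sentence counts and counts of polysyllabic (3+ syllable) words; a minimal sketch of the standard formulas, assuming those counts are already available:

```python
import math

def gunning_fog(words: int, sentences: int, complex_words: int) -> float:
    """Gunning-Fog Index: grade level from sentence length and 3+-syllable words."""
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))

def smog_index(polysyllables: int, sentences: int) -> float:
    """SMOG grade from the polysyllable count, normalized to a 30-sentence sample."""
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291
```

Both scales read directly as US school grades, which is what makes the "6th-grade target" comparisons in these studies possible.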
    RESULTS: Overall, 21 PEMs on AS were gathered. Original readability measures indicated difficult readability at the 10th-12th grade reading level. ChatGPT-3.5 successfully improved readability across all four measures (p < 0.001) to the approximately 6th-7th grade reading level. Bard successfully improved readability across all measures (p < 0.001) except for SMOGI (p = 0.729) to the approximately 8th-9th grade level. Neither platform generated PEMs written below the recommended 6th-grade reading level. ChatGPT-3.5 demonstrated significantly more favorable post-conversion readability scores, percentage change in readability scores, and conversion time compared to Bard (all p < 0.001).
    CONCLUSIONS: AI dialogue platforms can enhance the readability of PEMs for patients with AS but may not fully meet recommended reading skill levels, highlighting potential tools to help strengthen cardiac health literacy in the future.

  • Article Type: Journal Article
    OBJECTIVE: To perform the first head-to-head comparative evaluation of patient education material for obstructive sleep apnoea generated by two artificial intelligence chatbots, ChatGPT and its primary rival Google Bard.
    METHODS: Fifty frequently asked questions on obstructive sleep apnoea in English were extracted from the patient information webpages of four major sleep organizations and categorized as input prompts. ChatGPT and Google Bard responses were selected and independently rated using the Patient Education Materials Assessment Tool-Printable (PEMAT-P) Auto-Scoring Form by two otolaryngologists, each with a Fellowship of the Royal College of Surgeons (FRCS) and a special interest in sleep medicine and surgery. Responses were subjectively screened for any incorrect or dangerous information as a secondary outcome. The Flesch-Kincaid Calculator was used to evaluate the readability of responses for both ChatGPT and Google Bard.
    RESULTS: A total of 46 questions were curated and categorized into three domains: condition (n = 14), investigation (n = 9) and treatment (n = 23). Understandability scores for ChatGPT versus Google Bard on the various domains were as follows: condition 90.86% vs. 76.32% (p < 0.001); investigation 89.94% vs. 71.67% (p < 0.001); treatment 90.78% vs. 73.74% (p < 0.001). Actionability scores for ChatGPT versus Google Bard on the various domains were as follows: condition 77.14% vs. 51.43% (p < 0.001); investigation 72.22% vs. 54.44% (p = 0.05); treatment 73.04% vs. 54.78% (p = 0.002). The mean Flesch-Kincaid Grade Level for ChatGPT was 9.0 and for Google Bard was 5.9. No incorrect or dangerous information was identified in any of the generated responses from both ChatGPT and Google Bard.
    CONCLUSIONS: Evaluation of ChatGPT and Google Bard patient education material for OSA indicates the former to offer superior information across several domains.

  • Article Type: Systematic Review
    BACKGROUND: Stroke education materials are crucial for the recovery of stroke patients, but their effectiveness depends on their readability. The American Medical Association (AMA) recommends patient education materials be written at a sixth-grade level. Studies show existing paper and online materials exceed patients' reading levels and undermine their health literacy. Low health literacy among stroke patients is associated with worse health outcomes and decreased efficacy of stroke rehabilitation.
    OBJECTIVE: We reviewed the readability of paper (i.e., brochures, factsheets, posters) and online (i.e., American Stroke Association, Google, Yahoo!) stroke patient education materials, the reading level of stroke patients, the accessibility of online health information, and patients' perceptions of gaps in stroke information, and we provide recommendations for improving readability.
    METHODS: A PRISMA-guided systematic literature review was conducted using the PubMed, Google Scholar, and EbscoHost databases with the search terms "stroke", "readability of stroke patient education", and "stroke readability" to discover English-language articles. A total of 12 articles were reviewed.
    RESULTS: SMOG scores for paper and online materials ranged from grade level 11.0-12.0 and 7.8-13.95, respectively. The reading level of stroke patients ranged from 3rd grade to 9th grade or above. Accessibility of online stroke information was high. Structured patient interviews illustrated gaps in patient education materials and difficulty with comprehension.
    CONCLUSIONS: Paper and online patient education materials exceed the reading level of stroke patients and the AMA-recommended 6th-grade level. Due to limitations in readability, stroke patients are not being adequately educated about their condition.

  • Article Type: Journal Article
    OBJECTIVE: To assess the alignment of YouTube® videos providing dietary recommendations for gout with evidence-based guidelines targeted at the United Kingdom (UK) population and to establish their quality.
    METHODS: A content analysis of YouTube® videos providing dietary recommendations for gout was undertaken. Videos were categorised by video source. Each video's dietary recommendations for gout were compared with three evidence-based guidelines for gout, producing a compliance score. The presence of non-guideline advice was assessed. Understandability and actionability were evaluated using the Patient Education Material Assessment Tool for Audio-Visual Materials. Reliability was assessed using an adapted DISCERN tool, and educational quality using the Global Quality Score Five-Point Scale. Differences between video source and continuous variables were assessed using one-way Kruskal-Wallis H tests. For categorical variables, associations were investigated using Fisher-Freeman-Halton tests.
    SETTING: Online, May-June 2020.
    SAMPLE: One hundred thirty-one videos.
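The per-video compliance score and its median/IQR summary can be sketched as follows; the set-overlap definition of `compliance_score` is an illustrative assumption, not the paper's exact rubric:

```python
from statistics import median, quantiles

def compliance_score(video_advice: set[str], guideline_advice: set[str]) -> float:
    # Hypothetical scoring: share of guideline recommendations the video covers.
    return 100 * len(video_advice & guideline_advice) / len(guideline_advice)

def summarize(scores: list[float]) -> tuple[float, float, float]:
    """Median with the first and third quartiles (the IQR endpoints)."""
    q1, _, q3 = quantiles(scores, n=4)  # stdlib default: 'exclusive' method
    return median(scores), q1, q3
```

Reporting median and IQR rather than mean and SD is the natural choice here, since compliance scores across heterogeneous video sources are unlikely to be normally distributed.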
    RESULTS: Alignment of videos with evidence-based guidelines was poor (median compliance score 27% (interquartile range 17-37%)). Additionally, 57% of videos contained non-guideline advice. The health professional source group had the fewest videos containing non-guideline advice, but this was only significantly lower than the naturopath group (31% vs. 81%, P = 0.009). Almost 70% of videos were considered poorly actionable and 50% poorly understandable. Most videos were rated poor for reliability (79%) and poor to generally poor for educational quality (49%).
    CONCLUSIONS: YouTube® videos providing dietary recommendations for gout frequently fail to conform to evidence-based guidelines, and their educational quality, reliability, understandability and actionability are often poor. More high-quality, comprehensive, evidence-based YouTube® videos are required for UK gout patients.

  • Article Type: Journal Article
    OBJECTIVE: The present study aimed at investigating the readability of online sources on hereditary hearing impairment (HHI).
    METHODS: In August 2022, the search terms "hereditary hearing impairment", "genetic deafness", "hereditary hearing loss", and "sensorineural hearing loss of genetic origin" were entered into the Google search engine and educational materials were identified. The first 50 websites were examined for each search. Duplicate hits were removed, and websites with only graphics or tables were excluded. Websites were categorized as either a professional society, a clinical practice, or a general health information website. The readability tests used to evaluate the websites included: Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning-Fog Index, Simple Measure of Gobbledygook, Coleman-Liau Index, and Automated Readability Index.
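Unlike the syllable-based tests, the Coleman-Liau and Automated Readability indices in this battery are computed from letter/character counts; a minimal sketch of the standard formulas:

```python
def coleman_liau(letters: int, words: int, sentences: int) -> float:
    """Coleman-Liau Index: grade level from letters rather than syllables."""
    L = 100 * letters / words      # average letters per 100 words
    S = 100 * sentences / words    # average sentences per 100 words
    return 0.0588 * L - 0.296 * S - 15.8

def automated_readability(characters: int, words: int, sentences: int) -> float:
    """Automated Readability Index: grade level from character and word lengths."""
    return 4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43
```

Because letters are trivial to count programmatically, these two indices avoid the syllable-counting ambiguity that affects Flesch-based measures.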
    RESULTS: Twenty-nine websites were included, categorized as 4 from professional societies, 11 from clinical practices, and 14 providing general information. All analyzed websites required reading levels higher than sixth grade. On average, 12-16 years of education are required to read and understand the websites focused on HHI. Although general health information websites had better readability, the difference was not statistically significant.
    CONCLUSIONS: The readability scores of every type of online educational material on HHI are above the recommended level, indicating that not all patients and parents can comprehend the information they seek on these websites.

  • Article Type: Journal Article
    Introduction With revolutions in Information Technology, information and misinformation are easier to find online. YouTube is the largest and most commonly searched video content website in the world. It is assumed that, due to the coronavirus pandemic, most patients try to learn about diseases through the internet and to reduce hospital visits unless one is otherwise required. This study was planned to assess the understandability and actionability of YouTube videos freely available online about hemolytic disease of the newborn (HDN). Methods This is a cross-sectional study conducted with the first 160 videos available on May 14, 2021, retrieved with the search keyword "HDN", a relevance filter, and a duration of 4 to 20 minutes. The videos were further screened regarding information content and language. These videos were assessed by three independent assessors using the Patient Education Materials Assessment Tool for audio-visual content. Results Of the first 160 videos selected for screening, 58 videos were excluded due to a lack of content about the searched disease "HDN". Another 63 videos were excluded because the language of instruction was not English. Finally, 39 videos were assessed by three assessors. The understandability and actionability responses were checked for reliability, and a Cronbach's alpha of 93.6% was found, indicating good data reliability. To reduce subjectivity, average understandability and actionability scores were taken across the three assessors. There were eight and 34 videos with average understandability and actionability scores of <70%, respectively. The median average understandability and actionability scores were 84.4% and 50%, respectively. There was a statistically significant difference between understandability and actionability scores, with considerably lower actionability scores for YouTube videos on HDN (p<0.001).
Conclusion There is a great need for content developers to include actionable information in videos. Most of the available information has adequate understandable content, making it easier for the general public to learn about these diseases. YouTube and similar social sites are thus possibly helping disseminate information and promote awareness among the public in general and patients in particular.
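The Cronbach's alpha used above for inter-assessor reliability follows the standard formula alpha = k/(k-1) * (1 - sum of per-rater variances / variance of totals); a minimal stdlib sketch, treating each assessor as an "item" and each video as an observation:

```python
from statistics import pvariance

def cronbach_alpha(ratings: list[list[float]]) -> float:
    """Cronbach's alpha across raters; ratings[r][v] is rater r's score for video v."""
    k = len(ratings)                                    # number of raters
    totals = [sum(scores) for scores in zip(*ratings)]  # per-video total score
    rater_vars = sum(pvariance(r) for r in ratings)
    return k / (k - 1) * (1 - rater_vars / pvariance(totals))
```

When raters agree perfectly the per-rater variances sum to exactly half the variance of the totals (for two raters), so alpha reaches its ceiling of 1.0; disagreement pushes it down.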