patient education materials

  • Article type: Journal Article
    OBJECTIVE: To use an artificial intelligence (AI)-powered large language model (LLM) to improve readability of patient handouts.
    STUDY DESIGN: Review of online material modified by AI.
    SETTING: Academic center.
    METHODS: Five handout materials obtained from the American Rhinologic Society (ARS) and the American Academy of Facial Plastic and Reconstructive Surgery websites were assessed using validated readability metrics. The handouts were then input into OpenAI's ChatGPT-4 with the prompt: "Rewrite the following at a 6th-grade reading level." The understandability and actionability of both native and LLM-revised versions were evaluated using the Patient Education Materials Assessment Tool (PEMAT). Results were compared using Wilcoxon rank-sum tests.
    RESULTS: The mean readability scores of the standard (ARS, American Academy of Facial Plastic and Reconstructive Surgery) materials corresponded to "difficult," with reading categories ranging between high school and university grade levels. Conversely, the LLM-revised handouts had an average seventh-grade reading level. LLM-revised handouts had better readability in nearly all metrics tested: Flesch-Kincaid Reading Ease (70.8 vs 43.9; P < .05), Gunning Fog Score (10.2 vs 14.42; P < .05), Simple Measure of Gobbledygook (9.9 vs 13.1; P < .05), Coleman-Liau (8.8 vs 12.6; P < .05), and Automated Readability Index (8.2 vs 10.7; P = .06). PEMAT scores were significantly higher in the LLM-revised handouts for understandability (91% vs 74%; P < .05), with similar actionability (42% vs 34%; P = .15), when compared to the standard materials.
    CONCLUSIONS: Patient-facing handouts can be revised by ChatGPT with simple prompting to improve the readability of the information. This study demonstrates the utility of LLMs in rewriting patient handouts, and they may serve as a tool to help optimize education materials.
    LEVEL OF EVIDENCE: Level VI.
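
    The analysis pipeline described in this entry can be approximated in a few lines of Python. The sketch below is illustrative only: it assumes the open-source textstat and scipy packages (the study does not name its software), and the handout lists are placeholders rather than the study's data.

```python
# Illustrative sketch of the readability comparison (assumed tooling: textstat, scipy).
import textstat
from scipy.stats import ranksums

# Readability metrics reported in the study that textstat also implements.
METRICS = {
    "Flesch-Kincaid Reading Ease": textstat.flesch_reading_ease,
    "Gunning Fog Score": textstat.gunning_fog,
    "Simple Measure of Gobbledygook": textstat.smog_index,
    "Coleman-Liau": textstat.coleman_liau_index,
    "Automated Readability Index": textstat.automated_readability_index,
}

def score_handouts(texts):
    """Return {metric name: [score for each handout text]}."""
    return {name: [fn(t) for t in texts] for name, fn in METRICS.items()}

def compare_versions(original_texts, revised_texts):
    """Wilcoxon rank-sum test per metric, mirroring the study's comparison."""
    orig, rev = score_handouts(original_texts), score_handouts(revised_texts)
    for name in METRICS:
        stat, p = ranksums(rev[name], orig[name])
        print(f"{name}: revised mean = {sum(rev[name]) / len(rev[name]):.1f}, "
              f"original mean = {sum(orig[name]) / len(orig[name]):.1f}, P = {p:.3f}")
```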

  • Article type: Journal Article
    BACKGROUND: Artificial intelligence (AI) is a burgeoning new field that has increased in popularity over the past couple of years, coinciding with the public release of large language model (LLM)-driven chatbots. These chatbots, such as ChatGPT, can be engaged directly in conversation, allowing users to ask them questions or issue other commands. Since LLMs are trained on large amounts of text data, they can also answer questions reliably and factually, an ability that has allowed them to serve as a source for medical inquiries. This study seeks to assess the readability of patient education materials on cardiac catheterization across four of the most common chatbots: ChatGPT, Microsoft Copilot, Google Gemini, and Meta AI.
    METHODS: A set of 10 questions regarding cardiac catheterization was developed using website-based patient education materials on the topic. We then asked these questions in consecutive order to four of the most common chatbots: ChatGPT, Microsoft Copilot, Google Gemini, and Meta AI. The Flesch Reading Ease Score (FRES) was used to assess the readability score. Readability grade levels were assessed using six tools: Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), Simple Measure of Gobbledygook (SMOG) Index, Automated Readability Index (ARI), and FORCAST Grade Level.
    RESULTS: The mean FRES across all four chatbots was 40.2, while overall mean grade levels for the four chatbots were 11.2, 13.7, 13.7, 13.3, 11.2, and 11.6 across the FKGL, GFI, CLI, SMOG, ARI, and FORCAST indices, respectively. Mean reading grade levels across the six tools were 14.8 for ChatGPT, 12.3 for Microsoft Copilot, 13.1 for Google Gemini, and 9.6 for Meta AI. Further, FRES values for the four chatbots were 31, 35.8, 36.4, and 57.7, respectively.
    CONCLUSIONS: This study shows that AI chatbots are capable of providing answers to medical questions regarding cardiac catheterization. However, the responses across the four chatbots had overall mean reading grade levels at the 11th-13th-grade level, depending on the tool used. This means that the materials were at the high school and even college reading level, which far exceeds the recommended sixth-grade level for patient education materials. Further, there is significant variability in the readability levels provided by different chatbots as, across all six grade-level assessments, Meta AI had the lowest scores and ChatGPT generally had the highest.
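
    A minimal sketch of the scoring step follows, assuming the answers to the 10 questions have already been collected from each chatbot's own interface and that the textstat package supplies the metrics; textstat does not include FORCAST, which would need a separate implementation. Names and inputs are placeholders.

```python
# Illustrative scoring of collected chatbot responses (assumed tooling: textstat).
import textstat

def readability_profile(text):
    """FRES plus the grade-level indices textstat provides (FORCAST omitted)."""
    return {
        "FRES": textstat.flesch_reading_ease(text),
        "FKGL": textstat.flesch_kincaid_grade(text),
        "GFI": textstat.gunning_fog(text),
        "CLI": textstat.coleman_liau_index(text),
        "SMOG": textstat.smog_index(text),
        "ARI": textstat.automated_readability_index(text),
    }

def profile_all(responses):
    """responses: {chatbot name: concatenated answers to the 10 questions}."""
    return {bot: readability_profile(text) for bot, text in responses.items()}
```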

  • Article type: Journal Article
    BACKGROUND: Understanding of health-related materials, termed health literacy, affects decision making and outcomes in the treatment of bladder cancer. The National Institutes of Health recommend writing education materials at a sixth- to seventh-grade reading level. The goal of this study is to assess the readability of bladder cancer materials available online.
    OBJECTIVE: The goal of this study is to characterize available information about bladder cancer online and evaluate readability.
    METHODS: Materials on bladder cancer were collected from the American Urological Association's Urology Care Foundation (AUA-UCF) and compared to the top 50 websites from search engine results. Resources were analyzed using four different validated readability assessment scales. The mean and standard deviation of the materials were calculated, and a two-tailed t test was used to assess significance between the two sets of patient education materials.
    RESULTS: The average readability of AUA materials was 8.5 (8th-9th grade reading level). For the top 50 websites, the average readability was 11.7 (11th-12th grade reading level). A two-tailed t test between the AUA materials and the top 50 websites demonstrated a statistically significant difference in readability between the two sets of resources (P = 0.0001), with the top search engine results being several grade levels higher than the recommended 6th-7th grade reading level.
    CONCLUSIONS: Most health information provided by the AUA on bladder cancer is written at a reading ability that aligns with that of most US adults, whereas the top websites from search engine results exceed the average reading level by several grade levels. By focusing on health literacy, urologists may contribute to lowering health literacy barriers, potentially reducing health care expenditure and perioperative complications.
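
    For the significance test described above, a two-tailed independent-samples t test of per-document readability scores would look roughly like the following sketch; scipy is an assumption, as the abstract does not state the software used.

```python
# Illustrative two-tailed t test between the two sets of materials (assumed tooling: scipy).
from scipy.stats import ttest_ind

def compare_readability(aua_scores, top50_scores):
    """aua_scores / top50_scores: mean readability grade level per document."""
    result = ttest_ind(aua_scores, top50_scores)  # two-tailed by default
    return result.statistic, result.pvalue
```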

  • Article type: Journal Article
    BACKGROUND: Health literacy is a critical determinant of a patient's overall health status, and studies have demonstrated a consistent link between poor health literacy and negative health outcomes. The Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH) advise that patient educational materials (PEMs) should be written at an eighth-grade reading level or lower, matching the average reading level of adult Americans. The purpose of this study was to investigate the ability of generative artificial intelligence (AI) to edit PEMs from orthopaedic institutions to meet the CDC and NIH guidelines.
    METHODS: PEMs about lateral epicondylitis (LE) from the top 25 ranked orthopaedic institutions from the 2022 U.S. News & World Report Best Hospitals Specialty Ranking were gathered. ChatGPT Plus (version 4.0) was then instructed to rewrite PEMs on LE from these institutions to comply with CDC and NIH-recommended guidelines. Readability scores were calculated for the original and rewritten PEMs, and paired t-tests were used to determine statistical significance.
    RESULTS: Analysis of the original and edited PEMs about LE revealed significant reductions in reading grade level and word count of 3.70 ± 1.84 (p<0.001) and 346.72 ± 364.63 (p<0.001), respectively.
    CONCLUSIONS: Our study demonstrated generative AI's ability to rewrite PEMs about LE at a reading comprehension level that conforms to the CDC and NIH guidelines. Hospital administrators and orthopaedic surgeons should consider the findings of this study and the potential utility of artificial intelligence when crafting PEMs of their own.
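
    A hedged sketch of the rewrite-and-retest workflow: a GPT-4-class model is asked to simplify a PEM, and the change in reading grade level is tested with a paired t test. The openai (v1 SDK), textstat, and scipy packages, the model name, and the prompt wording are assumptions for illustration, not the study's exact protocol.

```python
# Illustrative rewrite-and-retest loop (assumed tooling: openai v1 SDK, textstat, scipy).
import textstat
from openai import OpenAI
from scipy.stats import ttest_rel

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rewrite_pem(pem_text: str) -> str:
    """Ask a GPT-4-class model to simplify one PEM (prompt wording is illustrative)."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumption: any GPT-4-class chat model
        messages=[{
            "role": "user",
            "content": "Rewrite the following patient education material at an "
                       "eighth-grade reading level or below:\n\n" + pem_text,
        }],
    )
    return response.choices[0].message.content

def paired_grade_level_test(original_pems, rewritten_pems):
    """Paired t test on Flesch-Kincaid grade level before vs. after rewriting."""
    before = [textstat.flesch_kincaid_grade(t) for t in original_pems]
    after = [textstat.flesch_kincaid_grade(t) for t in rewritten_pems]
    return ttest_rel(before, after)
```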

  • Article type: Journal Article
    Introduction The National Institutes of Health and the American Medical Association recommend patient education materials (EMs) be at or below the sixth-grade reading level. The American Cancer Society, Leukemia & Lymphoma Society, and National Comprehensive Cancer Network have accurate blood cancer EMs. Methods One hundred one (101) blood cancer EMs from the above organizations were assessed using the following: Flesch Reading Ease Formula (FREF), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Simple Measure of Gobbledygook Index (SMOG), and the Coleman-Liau Index (CLI). Results Only 3.96% of patient EMs scored at or below the seventh-grade reading level in all modalities. Healthcare professional education materials (HPEMs) averaged around the college to graduate level. For leukemia and lymphoma patient EMs, there were significant differences for FKGL vs. SMOG, FKGL vs. GFI, FKGL vs. CLI, SMOG vs. CLI, and GFI vs. CLI. For HPEMs, there were significant differences for FKGL vs. GFI and GFI vs. CLI. Conclusion The majority of patient EMs were above the seventh-grade reading level. A lack of easily readable patient EMs could lead to a poor understanding of disease and, thus, adverse health outcomes. Overall, patient EMs should not replace physician counseling. Physicians must close the gaps in patients' understanding throughout their cancer treatment.
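
    The headline figure (3.96% of patient EMs at or below the seventh-grade level in all modalities) reduces to a simple check across the grade-level indices. A minimal sketch, assuming per-material grade levels (e.g., FKGL, GFI, SMOG, CLI) have already been computed with a tool such as textstat:

```python
# Illustrative "at or below seventh grade on every index" calculation.
def percent_within_grade(materials, threshold=7.0):
    """materials: list of dicts mapping index name (FKGL, GFI, SMOG, CLI) to grade level."""
    passing = [m for m in materials if all(grade <= threshold for grade in m.values())]
    return 100.0 * len(passing) / len(materials)
```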

  • Article type: Journal Article
    BACKGROUND: Dermatologic patient education materials (PEMs) are often written above the national average seventh- to eighth-grade reading level. ChatGPT-3.5, GPT-4, DermGPT, and DocsGPT are large language models (LLMs) that are responsive to user prompts. Our project assesses their use in generating dermatologic PEMs at specified reading levels.
    OBJECTIVE: This study aims to assess the ability of select LLMs to generate PEMs for common and rare dermatologic conditions at unspecified and specified reading levels. Further, the study aims to assess the preservation of meaning across such LLM-generated PEMs, as assessed by dermatology resident trainees.
    METHODS: The Flesch-Kincaid reading level (FKRL) of current American Academy of Dermatology PEMs was evaluated for 4 common (atopic dermatitis, acne vulgaris, psoriasis, and herpes zoster) and 4 rare (epidermolysis bullosa, bullous pemphigoid, lamellar ichthyosis, and lichen planus) dermatologic conditions. We prompted ChatGPT-3.5, GPT-4, DermGPT, and DocsGPT to "Create a patient education handout about [condition] at a [FKRL]" to iteratively generate 10 PEMs per condition at unspecified, fifth-, and seventh-grade FKRLs, evaluated with Microsoft Word readability statistics. The preservation of meaning across LLMs was assessed by 2 dermatology resident trainees.
    RESULTS: The current American Academy of Dermatology PEMs had an average (SD) FKRL of 9.35 (1.26) and 9.50 (2.3) for common and rare diseases, respectively. For common diseases, the FKRLs of LLM-produced PEMs ranged between 9.8 and 11.21 (unspecified prompt), between 4.22 and 7.43 (fifth-grade prompt), and between 5.98 and 7.28 (seventh-grade prompt). For rare diseases, the FKRLs of LLM-produced PEMs ranged between 9.85 and 11.45 (unspecified prompt), between 4.22 and 7.43 (fifth-grade prompt), and between 5.98 and 7.28 (seventh-grade prompt). At the fifth-grade reading level, GPT-4 was better at producing PEMs for both common and rare conditions than ChatGPT-3.5 (P=.001 and P=.01, respectively), DermGPT (P<.001 and P=.03, respectively), and DocsGPT (P<.001 and P=.02, respectively). At the seventh-grade reading level, no significant difference was found between ChatGPT-3.5, GPT-4, DocsGPT, or DermGPT in producing PEMs for common conditions (all P>.05); however, for rare conditions, ChatGPT-3.5 and DocsGPT outperformed GPT-4 (P=.003 and P<.001, respectively). The preservation of meaning analysis revealed that for common conditions, DermGPT ranked the highest for overall ease of reading, patient understandability, and accuracy (14.75/15, 98%); for rare conditions, handouts generated by GPT-4 ranked the highest (14.5/15, 97%).
    CONCLUSIONS: GPT-4 appeared to outperform ChatGPT-3.5, DocsGPT, and DermGPT at the fifth-grade FKRL for both common and rare conditions, although both ChatGPT-3.5 and DocsGPT performed better than GPT-4 at the seventh-grade FKRL for rare conditions. LLM-produced PEMs may reliably meet seventh-grade FKRLs for select common and rare dermatologic conditions and are easy to read, understandable for patients, and mostly accurate. LLMs may play a role in enhancing health literacy and disseminating accessible, understandable PEMs in dermatology.
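
    The prompting scheme is straightforward to template. The sketch below builds the study's prompt for each condition and target reading level and uses textstat's Flesch-Kincaid grade only as a stand-in check (the study itself used Microsoft Word's readability statistics); model access would follow the same pattern as the earlier OpenAI sketch.

```python
# Illustrative prompt construction and FKRL check (assumed check: textstat, not MS Word).
from typing import Optional

import textstat

COMMON = ["atopic dermatitis", "acne vulgaris", "psoriasis", "herpes zoster"]
RARE = ["epidermolysis bullosa", "bullous pemphigoid", "lamellar ichthyosis", "lichen planus"]

def build_prompt(condition: str, reading_level: Optional[str]) -> str:
    """Passing None for reading_level reproduces the 'unspecified' prompt."""
    if reading_level is None:
        return f"Create a patient education handout about {condition}."
    return (f"Create a patient education handout about {condition} "
            f"at a {reading_level} reading level.")

def check_fkrl(handout_text: str) -> float:
    """Flesch-Kincaid grade level of a generated handout."""
    return textstat.flesch_kincaid_grade(handout_text)
```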

  • Article type: Journal Article
    BACKGROUND: While the internet provides accessible medical information, it is often not written at the 6th- to 8th-grade reading level that the average patient can understand, per American Medical Association (AMA)/National Institutes of Health (NIH) recommendations. This study analyzes current online materials relating to posterior cruciate ligament (PCL) surgery for readability, understandability, and actionability.
    METHODS: The top 100 Google search results for "PCL surgery" were compiled. Research papers, procedural protocols, advertisements, and videos were excluded from the data collection. Readability was examined using 7 algorithms: the Flesch Reading Ease Score, Gunning Fog, Flesch-Kincaid Grade Level, Coleman-Liau Index, SMOG Index, Automated Readability Index, and the Linsear Write Formula. Two evaluators assessed the understandability and actionability of the results with the Patient Education Materials Assessment Tool (PEMAT). Outcome measures included reading grade level, readers' minimum and maximum age, understandability, and actionability.
    RESULTS: Of the 100 results, 16 were excluded based on the exclusion criteria. There was a statistically significant difference between the readability of the results from all algorithms and the current recommendation by the AMA and NIH. Subgroup analysis demonstrated no difference in readability according to the Google search results page on which a site appeared. There was also no difference in readability between individual websites and organizational websites (hospital and non-hospital educational websites). Three articles were at the recommended 8th-grade reading level, and all three were from healthcare institutes.
    CONCLUSIONS: There is a discrepancy in readability between the AMA/NIH recommendations and online educational materials regarding PCL surgery, regardless of where they appear on Google and across different forums. Understandability and actionability were equally poor. Future research can focus on the readability and validity of video and social media, as they are becoming increasingly popular sources of medical information.
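
    The abstract does not name the statistical test used to compare readability with the AMA/NIH recommendation; one plausible approach is a one-sample t test of each algorithm's grade levels against the recommended maximum, sketched below with scipy as an assumption.

```python
# Illustrative one-sample test against the recommended reading level (assumed tooling: scipy).
from scipy.stats import ttest_1samp

def test_against_recommendation(grade_levels, recommended_max=8.0):
    """grade_levels: reading grade of each website under one readability algorithm."""
    return ttest_1samp(grade_levels, popmean=recommended_max)
```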

  • Article type: Journal Article
    Introduction Influenza is a major global health concern, with its rapid spread and mutation rate posing significant challenges in public health education and communication. Effective patient education materials (PEMs) are crucial for informed decision-making and improved health outcomes. This study evaluates the efficacy of online influenza PEMs using traditional readability tools and introduces the Contextual Health Education Readability Score (CHERS) to address the limitations of existing methods that do not capture the diverse array of visual and thematic means displayed. Materials and methods A comprehensive search was conducted to select relevant online influenza PEMs. This involved looking through Google's first two pages of results sorted by relevance, for a total of 20 results. These materials were evaluated using established readability tools (e.g., Flesch Reading Ease, Flesch-Kincaid Grade Level) and the Patient Education Materials Assessment Tool (PEMAT) for understandability and actionability. The study also involved the creation of CHERS, integrating factors such as semantic complexity, cultural relevance, and visual aid effectiveness. The development of CHERS included weighting each component based on its impact on readability and comprehension. Results The traditional readability tools demonstrated significant variability in the readability of the selected materials. The PEMAT analysis revealed general trends toward clarity in purpose and use of everyday language but indicated a need for improvement in summaries and visual aids. The CHERS formula was calculated as follows: CHERS = (0.4 × Average Sentence Length) + (0.3 × Average Syllables per Word) + (0.15 × Semantic Complexity Score) + (0.1 × Cultural Relevance Score) + (0.05 × Visual Aid Effectiveness Score), integrating multiple dimensions beyond traditional readability metrics. Discussion The study highlighted the limitations of traditional readability tools in assessing the complexity and cultural relevance of health information. The introduction of CHERS addressed these gaps by incorporating additional dimensions crucial for understanding in a healthcare context. The recommendations provided for creating effective influenza PEMs focused on language simplicity, cultural sensitivity, and actionability. This may enable further research into evaluating current PEMs and clarifying means of creating more effective content in the future. Conclusions The study underscores the need for comprehensive readability assessments in PEMs. The creation of CHERS marks a significant advancement in this field, providing a more holistic approach to evaluating health literacy materials. Its application could lead to the development of more inclusive and effective educational content, thereby improving public health outcomes and reducing the global burden of influenza. Future research should focus on further validating CHERS and exploring its applicability to other health conditions.
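
    For reference, the CHERS formula quoted above transcribes directly into a small helper. The abstract does not define the scales of the semantic complexity, cultural relevance, and visual aid effectiveness components, so the inputs are simply whatever the authors' scoring rubric produces.

```python
# Direct transcription of the CHERS weighting given in the abstract.
def chers(avg_sentence_length, avg_syllables_per_word,
          semantic_complexity, cultural_relevance, visual_aid_effectiveness):
    return (0.4 * avg_sentence_length
            + 0.3 * avg_syllables_per_word
            + 0.15 * semantic_complexity
            + 0.1 * cultural_relevance
            + 0.05 * visual_aid_effectiveness)
```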

  • Article type: Journal Article
    Immobilisation masks (IMs) are used for people with head and neck cancer (HNC) undergoing radiation therapy (RT) treatment to ensure accuracy and reproducibility between treatments. Claustrophobia-related mask anxiety in HNC patients is common and can compromise treatment due to patient distress. This scoping review aimed to describe the content of publicly available Patient Education Materials (PEMs) for people with HNC undergoing RT. Three search engines (Bing, Yahoo, and Google) were systematically searched using standard terms. PEMs in audio-visual or written formats were eligible for inclusion if the target readership was adults with HNC and the content covered IMs for RT. Content was appraised using the Patient Education Materials Assessment Tool for Printable and Audio-Visual Materials to assess understandability and actionability. In total, 304 PEMs were identified, of which 20 met the inclusion criteria. Sixteen PEMs were webpages, three were in PDF format, and one was a standalone video. The understandability and actionability of PEMs ranged from 47% to 100% and from 0% to 80%, respectively. PEMs authored by Foundations/Organisations scored higher in understandability (80-100%) and were more likely to discuss mask anxiety coping strategies. In comparison, News sites and IM manufacturers published PEMs with the lowest understandability scores (20-80%). The significant variations in the quality of IM PEMs identified suggest that some sources may be more effective at informing patients about IMs. Although multiple aspects of the PEMs were consistent across the reviewed materials, many PEMs lacked information, and a stronger focus on understandability and actionability is required.
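
    The PEMAT percentages reported here follow the usual PEMAT scoring rule as described in the AHRQ user's guide (stated here as an assumption about this review's scoring, which the abstract does not detail): each item is rated agree (1) or disagree (0), not-applicable items are dropped, and the score is the percentage of applicable items rated agree. A minimal sketch:

```python
# Illustrative PEMAT percentage score (agree = 1, disagree = 0, None = not applicable).
def pemat_score(item_ratings):
    applicable = [rating for rating in item_ratings if rating is not None]
    return 100.0 * sum(applicable) / len(applicable) if applicable else float("nan")
```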

  • Article type: Journal Article
    BACKGROUND: Online patient education materials (OPEMs) exist to inform patient medical decisions, yet the average adult in the United States reads at an eighth-grade level and 50% of Medicaid patients read at or below a fifth-grade level. To appropriately meet US health literacy needs, the American Medical Association and National Institutes of Health recommend that patient education materials not exceed a sixth-grade level. The purpose of this study was to assess and compare the readability of English and Spanish online patient education materials pertaining to shoulder instability surgery.
    METHODS: Google searches of the terms "shoulder instability surgery" and "cirugía de inestabilidad de hombro" were conducted to include 25 eligible online patient education materials (OPEMs) per language. English OPEM readability was calculated using the Flesch-Kincaid Grade Level, Flesch Reading Ease, Flesch Reading Ease Grade Level, Gunning-Fog Index, Coleman-Liau Index, and Simple Measure of Gobbledygook. Spanish OPEM readability was assessed using the Fernandez-Huerta Index (FHI) (the Spanish equivalent of Flesch Reading Ease), FHI Grade Level, Gutiérrez de Polini's Fórmula de comprensibilidad, and INFLESZ.
    RESULTS: Readability index analysis revealed that the mean Flesch Reading Ease of English online patient education materials was significantly lower than the mean FHI of Spanish online patient education materials. English materials were also found to be written at a significantly higher grade level than Spanish materials.
    CONCLUSIONS: Shoulder instability surgery online patient education materials in both English and Spanish are written at higher reading levels than recommended by the American Medical Association and National Institutes of Health, though Spanish online patient education materials were more readable on average.
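
    Scoring the two language groups could be sketched as below. This assumes the textstat package, which in recent versions provides Fernandez-Huerta, Gutiérrez de Polini, and Szigriszt-Pazos formulas for Spanish after switching its language setting; the INFLESZ scale is not built in and would have to be derived from the Szigriszt-Pazos score. Treat the Spanish function names as assumptions to verify against the installed version; the OPEM texts are placeholders.

```python
# Illustrative English vs. Spanish scoring (assumed tooling: textstat with Spanish support).
import textstat

def english_scores(text):
    return {
        "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade(text),
        "Flesch Reading Ease": textstat.flesch_reading_ease(text),
        "Gunning-Fog Index": textstat.gunning_fog(text),
        "Coleman-Liau Index": textstat.coleman_liau_index(text),
        "SMOG": textstat.smog_index(text),
    }

def spanish_scores(text):
    textstat.set_lang("es")  # switch to Spanish syllable counting
    scores = {
        "Fernandez-Huerta Index": textstat.fernandez_huerta(text),
        "Gutierrez de Polini": textstat.gutierrez_polini(text),
        "Szigriszt-Pazos (basis of INFLESZ)": textstat.szigriszt_pazos(text),
    }
    textstat.set_lang("en")  # restore default before scoring English OPEMs
    return scores
```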
