Google
  • Article type: Journal Article
    BACKGROUND: We queried ChatGPT (OpenAI) and Google Assistant about amblyopia and compared their answers with the keywords found on the American Association for Pediatric Ophthalmology and Strabismus (AAPOS) website, specifically the section on amblyopia. Out of the 26 keywords chosen from the website, ChatGPT included 11 (42%) in its responses, while Google included 8 (31%).
    OBJECTIVE: Our study investigated the adherence of ChatGPT-3.5 and Google Assistant to the guidelines of the AAPOS for patient education on amblyopia.
    METHODS: ChatGPT-3.5 was used. The four questions taken from the AAPOS website, specifically its glossary section for amblyopia, are as follows: (1) What is amblyopia? (2) What causes amblyopia? (3) How is amblyopia treated? (4) What happens if amblyopia is untreated? Approved and selected by ophthalmologists (GW and DL), the keywords from AAPOS were words or phrases deemed significant for the education of patients with amblyopia. The "Flesch-Kincaid Grade Level" formula, approved by the US Department of Education, was used to evaluate the reading comprehension level of the responses from ChatGPT, Google Assistant, and AAPOS.
    RESULTS: In their responses, ChatGPT did not mention the term "ophthalmologist," whereas Google Assistant and AAPOS mentioned the term once and twice, respectively. ChatGPT did, however, use the term "eye doctors" once. According to the Flesch-Kincaid test, the average reading level of AAPOS was 11.4 (SD 2.1; the lowest level), while that of Google was 13.1 (SD 4.8; the highest required reading level), which also showed the greatest variation in grade level across its responses. ChatGPT's answers scored, on average, at a 12.4 (SD 1.1) grade level. All three sources were similar in reading difficulty. For the keywords, across the 4 responses, ChatGPT used 42% (11/26) of the keywords, whereas Google Assistant used 31% (8/26).
    CONCLUSIONS: ChatGPT trains on texts and phrases and generates new sentences, while Google Assistant automatically copies website links. As ophthalmologists, we should consider including "see an ophthalmologist" on our websites and in our journals. While ChatGPT is here to stay, we, as physicians, need to monitor its answers.
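The Flesch-Kincaid Grade Level scoring used in this study can be sketched in a few lines. The formula itself (0.39 × words/sentence + 11.8 × syllables/word − 15.59) is the standard one; the vowel-group syllable counter below is a rough heuristic of my own, not the calibrated counter that readability tools use, so scores will differ slightly from published values.

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels,
    # dropping one for a common silent final 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def fk_grade_level(text):
    # FKGL = 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Short, monosyllabic sentences can score below grade 0, which is expected behavior for this formula.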

  • Article type: Journal Article
    Background Millions of individuals every day turn to the internet for assistance in understanding their hand conditions and potential treatments. While online educational resources appear abundant, there are concerns about whether resources meet the readability recommendations agreed upon by the American Medical Association (AMA) and the National Institutes of Health (NIH). Identifying educational resources that are readable for the majority of patients could improve a patient's understanding of their medical condition, subsequently improving their health outcomes. Methods The readability of the top five websites for the 10 most common hand conditions was examined using the Flesch-Kincaid (FK) analysis, comprising the FK reading ease and FK grade level. The FK reading ease score is an indicator of how difficult a text is to comprehend, while the FK grade level score is the grade level an individual reading a particular text would need to fully understand the text. Results The average FK reading ease was 56.00, which correlates with "fairly difficult (high school)". The average FK corresponded to an eighth-grade reading level, far above the sixth-grade reading level recommendation set by the AMA and NIH. Conclusion Patient education, satisfaction, and the patient-physician relationship can all be improved by providing patients with more readable educational materials. Our study shows there is an opportunity for drastic improvement in the readability of online educational materials. Guiding patients with effective search techniques, advocating for the creation of more readable materials, and having a better understanding of the health literacy barriers patients face will allow hand surgeons to provide more comprehensive care to patients.
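The FK reading-ease score reported above (56.00, "fairly difficult") comes from the standard Flesch formula, 206.835 − 1.015 × words/sentence − 84.6 × syllables/word. A minimal sketch follows; the syllable heuristic is approximate, so results are indicative rather than an exact match for commercial readability checkers, and the band labels follow the conventional Flesch score ranges.

```python
import re

def _syllables(word):
    # Approximate: count vowel groups, dropping a common silent final 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def fk_reading_ease(text):
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(_syllables(w) for w in words)
    return 206.835 - 1.015 * len(words) / len(sentences) - 84.6 * syl / len(words)

def ease_band(score):
    # Conventional Flesch bands; 50-60 reads as "fairly difficult" (high school).
    for cutoff, label in [(90, "very easy"), (80, "easy"), (70, "fairly easy"),
                          (60, "standard"), (50, "fairly difficult"), (30, "difficult")]:
        if score >= cutoff:
            return label
    return "very confusing"
```

Applied to the study's mean score, `ease_band(56.0)` lands in the "fairly difficult" band the authors cite.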

  • Article type: Journal Article
    No abstract available.

  • Article type: Journal Article
    Rampant social media use allows individuals and organizations to broadcast their views to broad audiences with minimal requirements for vetting or validating shared information. We discuss the impact of disinformation transmitted via social media, using the recent example of false information about sunscreens reported in the Wall Street Journal. We also highlight the ethical consequences of social media influencers who disseminate unchecked information and the need for healthcare professionals to be involved to enhance accountability, goodwill, and truthfulness.

  • Article type: Journal Article
    BACKGROUND: Hashimoto thyroiditis (HT) is an autoimmune thyroid disease and the leading cause of hypothyroidism in areas with sufficient iodine intake. The quality-of-life impact and financial burden of hypothyroidism and HT highlight the need for additional research investigating the disease etiology with the aim of revealing potential modifiable risk factors.
    OBJECTIVE: Implementation of measures against such risk factors, once identified, has the potential to lessen the financial burden while also improving the quality of life of many individuals. Therefore, we aimed to examine the potential seasonality of HT in Europe using the Google Trends data to explore whether there is a seasonal characteristic of Google searches regarding HT, examine the potential impact of the countries' geographic location on the potential seasonality, and identify potential modifiable risk factors for HT, thereby inspiring future research on the topic.
    METHODS: Monthly Google Trends data on the search topic "Hashimoto thyroiditis" were retrieved in a 17-year time frame from January 2004 to December 2020 for 36 European countries. A cosinor model analysis was conducted to evaluate potential seasonality. Simple linear regression was used to estimate the potential effect of latitude and longitude on the seasonal amplitude and phase of the model outputs.
    RESULTS: Of 36 included European countries, significant seasonality was observed in 30 (83%) countries. Most phase peaks occurred in spring (14/30, 46.7%) and winter (8/30, 26.7%). A statistically significant effect was observed regarding the effect of geographical latitude on cosinor model amplitude (y = -3.23 + 0.13x; R²=0.29; P=.002). Seasonal increases in HT search volume may therefore be a consequence of an increased incidence or higher disease activity. It is particularly interesting that in most countries, a seasonal peak occurred in spring and winter months; when viewed in the context of the statistically significant impact of geographical latitude on seasonality amplitude, this may indicate the potential role of vitamin D levels in the seasonality of HT.
    CONCLUSIONS: Significant seasonality of HT Google Trends search volume was observed in our study, with seasonal peaks in most countries occurring in spring and winter and with a significant impact of latitude on seasonality amplitude. Further studies on the topic of seasonality in HT and factors impacting it are required.
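The cosinor analysis described in the methods can be sketched as an ordinary least-squares fit, since y = M + A·cos(2πt/P − φ) linearizes into an intercept (MESOR) plus cosine and sine regressors. The function name and synthetic check below are illustrative, not the authors' code.

```python
import numpy as np

def fit_cosinor(t, y, period=12.0):
    """Fit y ≈ MESOR + amplitude * cos(2*pi*t/period - acrophase).

    Rewriting A*cos(wt - phi) = (A cos phi)*cos(wt) + (A sin phi)*sin(wt)
    turns the model into a linear regression on cos and sin columns.
    """
    w = 2 * np.pi * np.asarray(t, float) / period
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    mesor, b_cos, b_sin = beta
    amplitude = np.hypot(b_cos, b_sin)
    acrophase = np.arctan2(b_sin, b_cos)  # peak timing, in radians
    return mesor, amplitude, acrophase
```

On monthly data (period = 12), the acrophase converts to a peak month via `period * acrophase / (2 * np.pi)`, which is how a spring or winter peak would be read off.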

  • Article type: Journal Article
    BACKGROUND: Chat Generative Pre-trained Transformer (ChatGPT) is a new machine learning tool that lets patients access health information online; this study compared it with Google, the most commonly used search engine in the United States. Patients can use ChatGPT to better understand medical issues. The two search engines were compared on: (i) frequently asked questions (FAQs) about Femoroacetabular Impingement Syndrome (FAI), (ii) the corresponding answers to these FAQs, and (iii) the most common questions yielding a numerical response.
    OBJECTIVE: To assess the suitability of ChatGPT as an online health information resource for patients by replicating their internet searches.
    DESIGN: Cross-sectional study.
    METHODS: The same keywords were used to search the 10 most common questions about FAI on both Google and ChatGPT. The responses from both search engines were recorded and analyzed.
    RESULTS: Of the 20 questions, 8 (40%) were similar. Among the 10 questions searched on Google, 7 were provided by a medical practice. For numerical questions, there was a notable difference in answers between Google and ChatGPT for 3 of the top 5 most common questions (60%). Expert evaluation indicated that 67.5% of experts were satisfied or highly satisfied with the accuracy of ChatGPT's descriptions of both conservative and surgical treatment options for FAI. Additionally, 62.5% of experts were satisfied or highly satisfied with the safety of the information provided. Regarding the etiology of FAI, including cam and pincer impingements, 52.5% of experts expressed satisfaction or high satisfaction with ChatGPT's explanations. Overall, 62.5% of experts affirmed that ChatGPT could serve effectively as a reliable medical resource for initial information retrieval.
    CONCLUSIONS: This study confirms that ChatGPT, despite being a new tool, shows significant potential as a supplementary resource for health information on FAI. Expert evaluations commend its capacity to provide accurate and comprehensive responses, valued by medical professionals for relevance and safety. Nonetheless, continuous improvement in the depth and precision of its medical content is recommended for ongoing reliability. While ChatGPT offers a promising alternative to traditional search engines, meticulous validation is imperative before it can be fully embraced as a trusted medical resource.

  • Article type: Journal Article
    BACKGROUND: Telemedicine offers a multitude of potential advantages, such as enhanced health care accessibility, cost reduction, and improved patient outcomes. The significance of telemedicine has been underscored by the COVID-19 pandemic, as it plays a crucial role in maintaining uninterrupted care while minimizing the risk of viral exposure. However, the adoption and implementation of telemedicine have been relatively sluggish in certain areas. Assessing the level of interest in telemedicine can provide valuable insights into areas that require enhancement.
    OBJECTIVE: The aim of this study is to provide a comprehensive analysis of the level of public and research interest in telemedicine from 2017 to 2022 and also consider any potential impact of the COVID-19 pandemic.
    METHODS: Google Trends data were retrieved using the search topics "telemedicine" or "e-health" to assess public interest, geographic distribution, and trends through a joinpoint regression analysis. Bibliographic data from Scopus were used to chart publications referencing the terms "telemedicine" or "eHealth" (in the title, abstract, and keywords) in terms of scientific production, key countries, and prominent keywords, as well as collaboration and co-occurrence networks.
    RESULTS: Worldwide, telemedicine generated higher mean public interest (relative search volume=26.3%) compared to eHealth (relative search volume=17.6%). Interest in telemedicine remained stable until January 2020, experienced a sudden surge (monthly percent change=95.7%) peaking in April 2020, followed by a decline (monthly percent change=-22.7%) until August 2020, and then returned to stability. A similar trend was noted in the public interest regarding eHealth. Chile, Australia, Canada, and the United States had the greatest public interest in telemedicine. In these countries, moderate to strong correlations were evident between Google Trends and COVID-19 data (ie, new cases, new deaths, and hospitalized patients). Examining 19,539 original medical articles in the Scopus database unveiled a substantial rise in telemedicine-related publications, showing a total increase of 201.5% from 2017 to 2022 and an average annual growth rate of 24.7%. The most significant surge occurred between 2019 and 2020. Notably, the majority of the publications originated from a single country, with 20.8% involving international coauthorships. As the most productive country, the United States led a cluster that included Canada and Australia as well. European, Asian, and Latin American countries made up the remaining 3 clusters. The co-occurrence network categorized prevalent keywords into 2 clusters, the first cluster primarily focused on applying eHealth, mobile health (mHealth), or digital health to noncommunicable or chronic diseases; the second cluster was centered around the application of telemedicine and telehealth within the context of the COVID-19 pandemic.
    CONCLUSIONS: Our analysis of search and bibliographic data over time and across regions allows us to gauge the interest in this topic, offer evidence regarding potential applications, and pinpoint areas for additional research and awareness-raising initiatives.
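The monthly-percent-change estimates above come from joinpoint regression, which in practice is run in dedicated software (such as the NCI Joinpoint program, with permutation tests and multiple breakpoints on log-linear models). As a rough illustration of the core idea only, a single continuous breakpoint in a trend can be grid-searched with a hinge term; the function and synthetic data below are hypothetical, not the authors' pipeline.

```python
import numpy as np

def fit_one_joinpoint(t, y):
    """Grid-search one breakpoint for a continuous two-segment linear trend.

    Model: y ~ b0 + b1*t + b2*max(t - bp, 0); the hinge term lets the
    slope change at bp while keeping the fit continuous there.
    Returns (best_bp, sse, beta).
    """
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    best_bp, best_sse, best_beta = None, np.inf, None
    for bp in t[1:-1]:  # candidate breakpoints at interior time points
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - bp, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if sse < best_sse:
            best_bp, best_sse, best_beta = bp, sse, beta
    return best_bp, best_sse, best_beta
```

A real joinpoint analysis also tests whether each added breakpoint is statistically justified; this sketch only locates the best-fitting single one.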

  • Article type: Journal Article
    BACKGROUND: With the rise of machine learning applications in health care, shifts in medical fields that rely on precise prognostic models and pattern detection tools are anticipated in the near future. Chat Generative Pretrained Transformer (ChatGPT) is a recent machine learning innovation known for producing text that mimics human conversation. To gauge ChatGPT's capability in addressing patient inquiries, the authors set out to juxtapose it with Google Search, America's predominant search engine. Their comparison focused on: 1) the top questions related to clinical practice guidelines from the American Academy of Family Physicians by category and subject; 2) responses to these prevalent questions; and 3) the top questions that elicited a numerical reply.
    METHODS: Utilizing a freshly installed Google Chrome browser (version 109.0.5414.119), the authors conducted a Google web search (www.google.com) on March 4, 2023, ensuring minimal influence from personalized search algorithms. Search phrases were derived from the clinical guidelines of the American Academy of Family Physicians. The authors prompted ChatGPT with: "Search Google using the term '(refer to search terms)' and document the top four questions linked to the term." The same 25 search terms were employed. The authors cataloged the primary 4 questions and their answers for each term, resulting in 100 questions and answers.
    RESULTS: Of the 100 questions, 42% (42 questions) were consistent across all search terms. ChatGPT predominantly sourced from academic (38% vs 15%, p = 0.0002) and government (50% vs 39%, p = 0.12) domains, whereas Google web searches leaned toward commercial sources (32% vs 11%, p = 0.0002). Thirty-nine percent (39 questions) of the questions yielded divergent answers between the 2 platforms. Notably, 16 of the 39 distinct answers from ChatGPT lacked a numerical reply, instead advising a consultation with a medical professional for health guidance.
    CONCLUSIONS: Google Search and ChatGPT present varied questions and answers for both broad and specific queries. Both patients and doctors should exercise prudence when considering ChatGPT as a digital health adviser. It's essential for medical professionals to assist patients in accurately communicating their online discoveries and ensuing inquiries for a comprehensive discussion.

  • Article type: Journal Article
    OBJECTIVE: To evaluate the quality, readability, and accuracy of large language model (LLM)-generated patient education materials (PEMs) on childhood glaucoma, and their ability to improve the readability of existing online information.
    DESIGN: Cross-sectional comparative study.
    METHODS: We evaluated responses of ChatGPT-3.5, ChatGPT-4, and Bard to 3 separate prompts requesting that they write PEMs on "childhood glaucoma." Prompt A required that PEMs be "easily understandable by the average American." Prompt B required that PEMs be written "at a 6th-grade level using the Simple Measure of Gobbledygook (SMOG) readability formula." We then compared the responses' quality (DISCERN questionnaire, Patient Education Materials Assessment Tool [PEMAT]), readability (SMOG, Flesch-Kincaid Grade Level [FKGL]), and accuracy (Likert misinformation scale). To assess the improvement of readability for existing online information, Prompt C requested that the LLMs rewrite 20 resources from a Google search of the keyword "childhood glaucoma" to the American Medical Association-recommended "6th-grade level." Rewrites were compared on key metrics such as readability, complex words (≥3 syllables), and sentence count.
    RESULTS: All 3 LLMs generated PEMs of high quality, understandability, and accuracy (DISCERN ≥4, ≥70% PEMAT understandability, misinformation score = 1). Prompt B responses were more readable than Prompt A responses for all 3 LLMs (P ≤ .001). ChatGPT-4 generated the most readable PEMs compared to ChatGPT-3.5 and Bard (P ≤ .001). Although Prompt C responses showed a consistent reduction of mean SMOG and FKGL scores, only ChatGPT-4 achieved the specified 6th-grade reading level (4.8 ± 0.8 and 3.7 ± 1.9, respectively).
    CONCLUSIONS: LLMs can serve as strong supplemental tools in generating high-quality, accurate, and novel PEMs, and improving the readability of existing PEMs on childhood glaucoma.

  • Article type: Journal Article
    Computer vision-based analysis of street view imagery has transformative impacts on environmental assessments. Interactive web services, particularly Google Street View, play an ever-important role in making imagery data ubiquitous. Despite the technical ease of harnessing millions of Google Street View images, this article questions the current practices in using this proprietary data source from a European viewpoint. Our concern lies with Google's terms of service, which restrict bulk image downloads and the generation of street view image-based indices. To reconcile the challenge of advancing society through groundbreaking research while maintaining data license agreements and legal integrity, we believe it is crucial to 1) include an author's statement on using proprietary street view data and the directives it entails, 2) negotiate an academic-specific license to democratize Google Street View data access, and 3) adhere to open data principles and utilize open image sources for future research.
