Keywords: American Association for Pediatric Ophthalmology and Strabismus; ChatGPT; Google; Google Assistant; amblyopia; education; health literacy; monitoring; ophthalmologist; ophthalmology; patient education; pediatric

MeSH: Amblyopia / therapy; Humans; Patient Education as Topic / methods; Internet; Ophthalmology / education

Source: DOI:10.2196/52401   PDF (PubMed)

Abstract:
BACKGROUND: We queried ChatGPT (OpenAI) and Google Assistant about amblyopia and compared their answers with the keywords found on the American Association for Pediatric Ophthalmology and Strabismus (AAPOS) website, specifically the section on amblyopia. Of the 26 keywords chosen from the website, ChatGPT included 11 (42%) in its responses, while Google Assistant included 8 (31%).
OBJECTIVE: Our study investigated the adherence of ChatGPT-3.5 and Google Assistant to the guidelines of the AAPOS for patient education on amblyopia.
METHODS: ChatGPT-3.5 was used. The four questions, taken from the AAPOS website, specifically its glossary section on amblyopia, are as follows: (1) What is amblyopia? (2) What causes amblyopia? (3) How is amblyopia treated? (4) What happens if amblyopia is untreated? The keywords, selected and approved by ophthalmologists (GW and DL), were words or phrases from AAPOS that were deemed significant for the education of patients with amblyopia. The "Flesch-Kincaid Grade Level" formula, approved by the US Department of Education, was used to evaluate the reading comprehension level required for the responses from ChatGPT, Google Assistant, and AAPOS.
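For reference, the standard Flesch-Kincaid Grade Level formula (well established, though not reproduced in the original abstract) estimates the US school grade needed to understand a text:

Grade Level = 0.39 × (total words / total sentences) + 11.8 × (total syllables / total words) − 15.59

Longer sentences and more syllables per word both raise the estimated grade level, which is how the responses below are compared.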
RESULTS: In its responses, ChatGPT did not mention the term "ophthalmologist," whereas Google Assistant and AAPOS mentioned the term once and twice, respectively. ChatGPT did, however, use the term "eye doctors" once. According to the Flesch-Kincaid test, the average reading level of the AAPOS responses was 11.4 (SD 2.1; the lowest level), while that of Google Assistant was 13.1 (SD 4.8; the highest required reading level), which also showed the greatest variation in grade level across its responses. ChatGPT's answers scored, on average, a grade level of 12.4 (SD 1.1). All three sources were similar in reading difficulty. For the keywords, across the 4 responses, ChatGPT used 42% (11/26) of the keywords, whereas Google Assistant used 31% (8/26).
CONCLUSIONS: ChatGPT trains on texts and phrases and generates new sentences, while Google Assistant automatically copies website links. As ophthalmologists, we should consider including "see an ophthalmologist" on our websites and in our journals. While ChatGPT is here to stay, we, as physicians, need to monitor its answers.