Keywords: ChatGPT, Google, patients, search terms, web

MeSH: Humans; Search Engine; Internet; Machine Learning; Practice Guidelines as Topic

Source: DOI:10.7812/TPP/23.126   PDF (PubMed)

Abstract:
BACKGROUND: With the rise of machine learning applications in health care, shifts in medical fields that rely on precise prognostic models and pattern detection tools are anticipated in the near future. Chat Generative Pretrained Transformer (ChatGPT) is a recent machine learning innovation known for producing text that mimics human conversation. To gauge ChatGPT's capability in addressing patient inquiries, the authors set out to compare it with Google Search, America's predominant search engine. Their comparison focused on: 1) the top questions related to clinical practice guidelines from the American Academy of Family Physicians, by category and subject; 2) responses to these prevalent questions; and 3) the top questions that elicited a numerical reply.
METHODS: Using a freshly installed Google Chrome browser (version 109.0.5414.119), the authors conducted a Google web search (www.google.com) on March 4, 2023, ensuring minimal influence from personalized search algorithms. Search phrases were derived from the clinical guidelines of the American Academy of Family Physicians. The authors prompted ChatGPT with: "Search Google using the term '(refer to search terms)' and document the top four questions linked to the term." The same 25 search terms were employed. The authors cataloged the top 4 questions and their answers for each term, resulting in 100 questions and answers.
RESULTS: Of the 100 questions, 42% (42 questions) were consistent across all search terms. ChatGPT predominantly sourced from academic (38% vs 15%, p = 0.0002) and government (50% vs 39%, p = 0.12) domains, whereas Google web searches leaned toward commercial sources (32% vs 11%, p = 0.0002). Thirty-nine percent (39 questions) of the questions yielded divergent answers between the 2 platforms. Notably, 16 of the 39 distinct answers from ChatGPT lacked a numerical reply, instead advising a consultation with a medical professional for health guidance.
CONCLUSIONS: Google Search and ChatGPT present varied questions and answers for both broad and specific queries. Both patients and doctors should exercise prudence when considering ChatGPT as a digital health adviser. It's essential for medical professionals to assist patients in accurately communicating their online discoveries and ensuing inquiries for a comprehensive discussion.
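The reported p-values can be sanity-checked with a standard two-sided, pooled two-proportion z-test; the minimal sketch below assumes 100 questions per platform, as stated in the methods. The function name and the choice of the pooled z-test are illustrative assumptions, not details taken from the paper (the authors' exact test may differ slightly for some comparisons):

```python
import math

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided tail probability of the standard normal via erfc
    return math.erfc(abs(z) / math.sqrt(2))

# Academic sources: 38% (ChatGPT) vs 15% (Google), assuming n = 100 each
print(round(two_prop_p(38, 100, 15, 100), 4))  # → 0.0002, as reported
# Government sources: 50% vs 39%
print(round(two_prop_p(50, 100, 39, 100), 2))  # → 0.12, as reported
```

Both values reproduce the abstract's figures, which supports reading the comparisons as simple proportion tests over the 100 cataloged questions.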