Keywords: Artificial intelligence; ChatGPT; Guideline; Infection; Urinary tract infection

MeSH: Humans; Urinary Tract Infections / drug therapy, diagnosis; Reproducibility of Results; Surveys and Questionnaires; Cystitis / drug therapy, diagnosis; Male; Practice Guidelines as Topic; Urethritis / diagnosis; Epididymitis / diagnosis, drug therapy; Orchitis / drug therapy, diagnosis; Female

Source: DOI:10.1016/j.idnow.2024.104884

Abstract:
BACKGROUND: For the first time, the accuracy and proficiency of ChatGPT answers on urogenital tract infections (UTIs) were evaluated.
METHODS: Two lists of questions were created: frequently asked questions (FAQs, public-based inquiries) on relevant topics, and questions based on guideline information (guideline-based inquiries). ChatGPT responses to FAQs and scientific questions were scored by two urologists and an infectious disease specialist. The quality and reliability of all ChatGPT answers were assessed using the Global Quality Score (GQS). The reproducibility of ChatGPT answers was analyzed by asking each question twice.
RESULTS: In all, 96.2% of FAQs (75/78 inquiries) related to UTIs were correctly and adequately answered by ChatGPT and scored GQS 5. None of the ChatGPT answers were classified as GQS 2 or GQS 1. Moreover, FAQs about cystitis, urethritis, and epididymo-orchitis were answered by ChatGPT with 100% accuracy (GQS 5). For EAU urological infections guideline questions, 61 (89.7%), 5 (7.4%), and 2 (2.9%) ChatGPT responses were scored GQS 5, GQS 4, and GQS 3, respectively; none were categorized as GQS 2 or GQS 1. Comparison of the mean GQS values of ChatGPT answers to FAQs and EAU urological guideline questions showed that ChatGPT responded similarly well to both question groups (p = 0.168). The ChatGPT response reproducibility rate was highest for the FAQ subgroups of cystitis, urethritis, and epididymo-orchitis (100% for each subgroup).
CONCLUSIONS: The present study showed that ChatGPT gave accurate and satisfactory answers to both public-based inquiries and EAU urological infection guideline-based questions. The reproducibility of ChatGPT answers exceeded 90% for both FAQs and scientific questions.