Keywords: artificial intelligence, carpal tunnel release, ChatGPT, consent, large language models

Source: DOI: 10.7759/cureus.63041   PDF (PubMed)

Abstract:
Background: Following the Montgomery ruling, hand surgeons have been charged with using diverse modalities to enhance the consenting process. Artificial intelligence language models have been suggested as patient education tools that may aid consent.
Methods: We compared the quality and readability of the Every Informed Decision Online (EIDO) patient information leaflet for carpal tunnel release with output from the artificial intelligence language model Chat Generative Pretrained Transformer (ChatGPT).
Results: The quality of the ChatGPT information was significantly higher on the DISCERN score: 71/80 for ChatGPT versus 62/80 for EIDO (p = 0.014). Inter-rater reliability for the DISCERN score was high (kappa = 0.65). The Flesch-Kincaid readability score was 12.3 for ChatGPT and 7.5 for EIDO, indicating that the ChatGPT information requires a higher reading age.
Conclusion: Compared with the EIDO information leaflet for carpal tunnel release consent, the artificial intelligence language model ChatGPT produces higher-quality information at the expense of readability.
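For reference, the two statistics cited in the Results are standard measures. Assuming the grade-level variant of the Flesch-Kincaid score and Cohen's (unweighted) kappa were used, which the abstract does not state explicitly, they are computed as:

FKGL = 0.39 × (total words / total sentences) + 11.8 × (total syllables / total words) − 15.59
κ = (p_o − p_e) / (1 − p_e)

where p_o is the observed agreement between the two raters and p_e is the agreement expected by chance. If these are grade-level scores, 12.3 corresponds roughly to a final-year secondary-school reading level and 7.5 to an early-secondary level, consistent with the conclusion that the ChatGPT text is less readable.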