Background: Artificial intelligence (AI) chatbots, such as ChatGPT-4, have shown immense potential for application across various aspects of medicine, including medical education, clinical practice, and research.
Objective: This study aimed to evaluate the performance of ChatGPT-4 on the 2023 Taiwan Audiologist Qualification Examination, thereby preliminarily exploring the potential utility of AI chatbots in the fields of audiology and hearing care services.
Methods: ChatGPT-4 was tasked with providing answers and reasoning for the 2023 Taiwan Audiologist Qualification Examination. The examination encompassed six subjects: (1) basic auditory science, (2) behavioral audiology, (3) electrophysiological audiology, (4) principles and practice of hearing devices, (5) health and rehabilitation of the auditory and balance systems, and (6) auditory and speech communication disorders (including professional ethics). Each subject included 50 multiple-choice questions, with the exception of behavioral audiology, which had 49 questions, amounting to a total of 299 questions.
Results: The correct answer rates across the six subjects were as follows: 88% for basic auditory science, 63% for behavioral audiology, 58% for electrophysiological audiology, 72% for principles and practice of hearing devices, 80% for health and rehabilitation of the auditory and balance systems, and 86% for auditory and speech communication disorders (including professional ethics). The overall accuracy rate for the 299 questions was 75%, surpassing the examination's passing criterion of an average 60% accuracy rate across all subjects. A comprehensive review of ChatGPT-4's responses indicated that incorrect answers were predominantly due to information errors.
Conclusions: ChatGPT-4 demonstrated a robust performance on the Taiwan Audiologist Qualification Examination, showcasing effective logical reasoning skills. Our results suggest that with enhanced information accuracy, ChatGPT-4's performance could be further improved. This study indicates significant potential for the application of AI chatbots in audiology and hearing care services.