{Reference Type}: Journal Article {Title}: Evaluating the application of ChatGPT in China's residency training education: An exploratory study. {Author}: Shang L; Li R; Xue M; Guo Q; Hou Y {Journal}: Med Teach {Volume}: 0 {Issue}: 0 {Year}: 2024 Jul 12 {Factor}: 4.277 {DOI}: 10.1080/0142159X.2024.2377808 {Abstract}: Objective: The purpose of this study was to assess the utility of information generated by ChatGPT for residency education in China.
Methods: We designed a three-step survey to evaluate the performance of ChatGPT in China's residency training education, covering residency final examination questions, patient cases, and resident satisfaction scores. First, 204 questions from the residency final examination were entered into ChatGPT's interface to obtain the percentage of correct answers. Next, ChatGPT was asked to generate 20 clinical cases, which three instructors then rated on a pre-designed 5-point Likert scale; case quality was assessed on five criteria: clarity, relevance, logicality, credibility, and comprehensiveness. Finally, interaction sessions were conducted between 31 third-year residents and ChatGPT, and residents' perceptions of ChatGPT's feedback were assessed on a Likert scale covering ease of use, accuracy and completeness of responses, and effectiveness in enhancing understanding of medical knowledge.
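As a point of illustration, the exam-scoring step could be scripted rather than performed by hand in ChatGPT's web interface (as the study did). The sketch below, in Python, uses OpenAI's official SDK; the question format, the helper names ask_chatgpt and score_exam, and the answer-parsing rule are assumptions for illustration, not the authors' actual pipeline.

    # Minimal sketch of automating the multiple-choice scoring step.
    # Assumes OPENAI_API_KEY is set in the environment, and that each
    # exam item is a dict with "stem", "options", and keyed letter "key"
    # (hypothetical format, not the study's data layout).
    from openai import OpenAI

    client = OpenAI()

    def ask_chatgpt(stem: str, options: list[str]) -> str:
        """Ask one multiple-choice question; return the raw model reply."""
        prompt = (stem + "\n" + "\n".join(options) +
                  "\nAnswer with the single letter of the best option.")
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # the paper evaluated ChatGPT-3.5
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip()

    def score_exam(items: list[dict]) -> float:
        """Percentage of items whose reply starts with the keyed letter."""
        correct = sum(
            ask_chatgpt(it["stem"], it["options"]).upper().startswith(it["key"])
            for it in items
        )
        return 100.0 * correct / len(items)

    # The reported 45.1% on 204 items corresponds to 92 correct answers
    # (92 / 204 = 45.1%).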
Results: Our results showed that ChatGPT-3.5 correctly answered 45.1% of the exam questions. For the virtual patient cases, clinical instructors gave ChatGPT mean ratings of 4.57 ± 0.50 for clarity, 4.68 ± 0.47 for relevance, 4.77 ± 0.46 for logicality, 4.60 ± 0.53 for credibility, and 3.95 ± 0.59 for comprehensiveness. Among the training residents, ChatGPT scored 4.48 ± 0.70 for ease of use, 4.00 ± 0.82 for accuracy and completeness, and 4.61 ± 0.50 for usefulness.
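For readers reproducing this kind of summary, each "mean ± SD" figure is an ordinary sample mean and sample standard deviation over the pooled ratings (here, three instructors each rating 20 cases gives 60 ratings per criterion). The snippet below shows the calculation; the rating vector is a hypothetical stand-in, not the study's raw data.

    # Illustrative "mean ± SD" summary of pooled Likert ratings.
    from statistics import mean, stdev

    def summarize(ratings: list[int]) -> str:
        """Format a rating vector as 'mean ± sample SD', 2 decimals."""
        return f"{mean(ratings):.2f} ± {stdev(ratings):.2f}"

    # Hypothetical 60 ratings (three instructors x 20 cases).
    clarity = [5, 4, 5, 5, 4, 5] * 10
    print("clarity:", summarize(clarity))  # prints: clarity: 4.67 ± 0.48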
Conclusions: Our findings demonstrate ChatGPT's substantial potential for personalized medical education in China.