{Reference Type}: Journal Article
{Title}: Assessing the Competence of Artificial Intelligence Programs in Pediatric Ophthalmology and Strabismus and Comparing their Relative Advantages.
{Author}: Sensoy E; Citirik M
{Journal}: Rom J Ophthalmol
{Volume}: 67
{Issue}: 4
{Year}: 2023 Oct-Dec
{DOI}: 10.22336/rjo.2023.61
{Abstract}: Objective: The aim of the study was to determine the knowledge levels of the ChatGPT, Bing, and Bard artificial intelligence programs, produced by three different manufacturers, regarding pediatric ophthalmology and strabismus, and to compare their strengths and weaknesses. Methods: Forty-four questions testing knowledge of pediatric ophthalmology and strabismus were posed to the ChatGPT, Bing, and Bard artificial intelligence programs. The answers were classified as correct or incorrect, and the accuracy rates were statistically compared. Results: The ChatGPT chatbot answered 59.1% of the questions correctly, the Bing chatbot 70.5%, and the Bard chatbot 72.7%. No significant difference was observed among the three artificial intelligence programs' rates of correct answers (p=0.343, Pearson's chi-square test). Conclusion: Although information about pediatric ophthalmology and strabismus can be accessed through current artificial intelligence programs, the answers given may not always be accurate. Care should always be taken when evaluating this information.
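The reported statistics can be reproduced from the abstract alone. A minimal sketch follows, assuming the raw counts implied by the percentages (26/44 = 59.1%, 31/44 = 70.5%, 32/44 = 72.7%); the study itself does not list the counts, so these are inferred, not quoted.

```python
# Reconstruct the Pearson chi-square comparison reported in the abstract.
# ASSUMPTION: correct-answer counts are back-calculated from the percentages
# over 44 questions: ChatGPT 26, Bing 31, Bard 32.
from scipy.stats import chi2_contingency

correct = [26, 31, 32]                  # correct answers per chatbot
incorrect = [44 - c for c in correct]   # remaining questions of the 44

# 2x3 contingency table: rows = correct/incorrect, columns = chatbots
chi2, p, dof, expected = chi2_contingency([correct, incorrect])
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# -> p = 0.343, matching the value reported in the abstract
```

With these inferred counts the test yields p = 0.343 on 2 degrees of freedom, consistent with the abstract's conclusion that the three programs' accuracy rates do not differ significantly.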