%0 Journal Article
%T The Development of an Artificial Intelligence Video Analysis-Based Web Application to Diagnose Oropharyngeal Dysphagia: A Pilot Study
%A Jeong CW
%A Lee CS
%A Lim DW
%A Noh SH
%A Moon HK
%A Park C
%A Kim MS
%J Brain Sci
%V 14
%N 6
%D 2024 May 27
%M 38928546
%F 3.333
%R 10.3390/brainsci14060546
%X The gold standard test for diagnosing dysphagia is the videofluoroscopic swallowing study (VFSS). However, the accuracy of this test varies depending on the specialist's skill level. We propose a VFSS-based artificial intelligence (AI) web application to diagnose dysphagia. A VFSS video consists of multiframe data containing approximately 300 images. During upload, the server separated the video into individual frames for labeling and also stored it as a video for analysis. The separated frames were then loaded into a labeling tool for annotation. The labeled file was downloaded, and an AI model was developed by training with You Only Look Once (YOLOv7). Using a utility called SplitFolders, the entire dataset was divided into training (70%), validation (20%), and test (10%) sets. When a VFSS video file was uploaded to an application equipped with the developed AI model, each frame was automatically classified and labeled as oral, pharyngeal, or esophageal. Dysphagia was categorized as either penetration or aspiration, and the final analyzed result was displayed to the viewer. The following labeled datasets were created for AI training: oral (n = 2355), pharyngeal (n = 2338), esophageal (n = 1480), penetration (n = 1856), and aspiration (n = 1320); the YOLO model trained on this dataset predicted these classes with accuracies of 0.90, 0.82, 0.79, 0.92, and 0.96, respectively. This application is expected to help clinicians more efficiently recommend appropriate dietary options for patients with oropharyngeal dysphagia.
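The 70/20/10 dataset split described in the abstract can be sketched in plain Python. This is a minimal stand-in for the SplitFolders utility, not the authors' code; the frame file names and the pooled frame count (the sum of the five label counts) are illustrative assumptions:

```python
import random

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle items and split them into train/validation/test subsets
    according to the given (train, val, test) ratios."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return {
        "train": shuffled[:n_train],
        "val": shuffled[n_train:n_train + n_val],
        "test": shuffled[n_train + n_val:],  # remainder
    }

# Hypothetical frame list sized to the pooled label counts in the abstract
# (2355 + 2338 + 1480 + 1856 + 1320 = 9349 labeled frames)
frames = [f"frame_{i:05d}.png" for i in range(9349)]
splits = split_dataset(frames)
print(len(splits["train"]), len(splits["val"]), len(splits["test"]))
```

In practice, the split-folders package performs the equivalent split at the directory level, e.g. `splitfolders.ratio(input_dir, output=output_dir, seed=42, ratio=(.7, .2, .1))`, which writes `train/`, `val/`, and `test/` subfolders.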