Keywords: 3-dimensional; artificial intelligence; dentofacial deformities; facial and skeletal prediction; orthognathic surgery; virtual surgical planning

MeSH: Humans; Deep Learning; Face / anatomy & histology, diagnostic imaging; Imaging, Three-Dimensional / methods; Orthognathic Surgical Procedures / methods; Patient Care Planning; Anatomic Landmarks; Facial Bones / diagnostic imaging, anatomy & histology, surgery; Male; Female; Adult; Dentofacial Deformities / surgery, diagnostic imaging

Source: DOI: 10.1177/00220345241253186

Abstract:
The increasing application of virtual surgical planning (VSP) in orthognathic surgery implies a critical need for accurate prediction of facial and skeletal shapes. The craniofacial relationship in patients with dentofacial deformities is still not fully understood, and transformations between facial and skeletal shapes remain challenging due to intricate anatomical structures and the nonlinear relationship between the facial soft tissue and bones. In this study, a novel bidirectional 3-dimensional (3D) deep learning framework, named P2P-ConvGC, was developed and validated on a large-scale data set for accurate subject-specific transformations between facial and skeletal shapes. Specifically, a 2-stage point-sampling strategy was used to generate multiple nonoverlapping point subsets to represent high-resolution facial and skeletal shapes. Facial and skeletal point subsets were separately input into the prediction system to predict the corresponding skeletal and facial point subsets via the skeletal prediction subnetwork and the facial prediction subnetwork, respectively. For quantitative evaluation, accuracy was measured by shape errors and landmark errors between the predicted skeleton or face and the corresponding ground truth. The shape error was calculated by comparing the predicted point sets with the ground truths, with P2P-ConvGC outperforming existing state-of-the-art algorithms including P2P-Net, P2P-ASNL, and P2P-Conv. The total landmark errors (Euclidean distances of craniomaxillofacial landmarks) of P2P-ConvGC in the upper skull, mandible, and facial soft tissues were 1.964 ± 0.904 mm, 2.398 ± 1.174 mm, and 2.226 ± 0.774 mm, respectively. Furthermore, the clinical feasibility of the bidirectional model was validated using a clinical cohort. The results demonstrated its prediction ability, with average surface deviation errors of 0.895 ± 0.175 mm for facial prediction and 0.906 ± 0.082 mm for skeletal prediction. To conclude, our proposed model achieved good performance on the subject-specific prediction of facial and skeletal shapes and showed clinical application potential in postoperative facial prediction and VSP for orthognathic surgery.
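The abstract reports two kinds of error: a landmark error defined as the Euclidean distance between predicted and ground-truth craniomaxillofacial landmarks, and a shape error obtained by comparing predicted point sets with their ground truths. The sketch below (not from the paper) illustrates how such metrics could be computed; the landmark metric follows the stated Euclidean definition, while the shape error is assumed here to be a symmetric Chamfer-style nearest-neighbour distance, since the abstract does not give the exact formula. All variable names and the sample data are hypothetical.

```python
# Minimal sketch of the two evaluation metrics described in the abstract.
# Landmark error: per-landmark Euclidean distance (as stated in the abstract).
# Shape error: assumed here to be a symmetric Chamfer-style point-set distance.
# Distances are in millimetres if the inputs are in millimetres.
import numpy as np


def landmark_error(pred_landmarks: np.ndarray, gt_landmarks: np.ndarray) -> np.ndarray:
    """Euclidean distance per landmark; both arrays have shape (L, 3)."""
    return np.linalg.norm(pred_landmarks - gt_landmarks, axis=1)


def chamfer_shape_error(pred_points: np.ndarray, gt_points: np.ndarray) -> float:
    """Symmetric nearest-neighbour distance between (N, 3) and (M, 3) point sets."""
    # Pairwise distances between every predicted point and every ground-truth point.
    d = np.linalg.norm(pred_points[:, None, :] - gt_points[None, :, :], axis=-1)
    # Average the nearest-neighbour distance in both directions.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.normal(size=(1024, 3))                     # stand-in ground-truth point set
    pred = gt + rng.normal(scale=0.5, size=gt.shape)    # stand-in predicted point set
    lm = landmark_error(pred[:10], gt[:10])             # hypothetical 10 landmarks
    print(f"landmark error: {lm.mean():.3f} ± {lm.std():.3f} mm")
    print(f"shape error (Chamfer): {chamfer_shape_error(pred, gt):.3f} mm")
```

In practice, reporting the mean ± standard deviation of the per-landmark distances over a test cohort yields figures of the form quoted in the abstract (e.g., 1.964 ± 0.904 mm for the upper skull).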