Background: Existing literature has highlighted structural, physiological, and pathological disparities among abdominal adipose tissue (AAT) sub-depots. Accurate separation and quantification of these sub-depots are crucial for advancing our understanding of obesity and its comorbidities. However, the absence of clear boundaries between the sub-depots in medical imaging data has challenged their separation, particularly for internal adipose tissue (IAT) sub-depots. To date, the quantification of AAT sub-depots remains challenging, marked by time-consuming, costly, and complex processes.
Purpose: To implement and evaluate a convolutional neural network that enables granular assessment of AAT by compartmentalizing subcutaneous adipose tissue (SAT) into superficial subcutaneous (SSAT) and deep subcutaneous (DSAT) adipose tissue, and IAT into intraperitoneal (IPAT), retroperitoneal (RPAT), and paraspinal (PSAT) adipose tissue.
Methods: MRI datasets were retrospectively collected from the Singapore Preconception Study for Long-Term Maternal and Child Outcomes (S-PRESTO; 389 women, aged 31.4 ± 3.9 years) and the Singapore Adult Metabolism Study (SAMS; 50 men, aged 28.7 ± 5.7 years). For all datasets, ground-truth segmentation masks were created through manual segmentation. A ResNet-based 3D U-Net was trained and evaluated via 5-fold cross-validation on S-PRESTO data (N = 300). The model's final performance was assessed on a hold-out set (N = 89) and an external test set (N = 50, SAMS).
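The 5-fold cross-validation described above can be sketched as follows. This is a minimal illustration, not the study's pipeline: the subject indices, fold-assignment scheme, and function name `kfold_splits` are assumptions for demonstration only.

```python
def kfold_splits(ids, k=5):
    """Partition subject IDs into k disjoint folds and yield (train, val) pairs.

    Illustrative sketch of 5-fold cross-validation: in each round one fold
    serves as the validation set and the remaining k-1 folds form the
    training set. The round-robin assignment (ids[i::k]) is an assumption;
    the actual study split is not specified at this granularity.
    """
    folds = [ids[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, val

# e.g., the N = 300 S-PRESTO training subjects, indexed 0..299
subjects = list(range(300))
for train_ids, val_ids in kfold_splits(subjects, k=5):
    # each round: 240 training subjects, 60 validation subjects
    assert len(train_ids) == 240 and len(val_ids) == 60
```

Each subject appears in exactly one validation fold across the five rounds, so every training case contributes to both model fitting and evaluation.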
Results: The proposed method enabled reliable segmentation of individual AAT sub-depots in 3D MRI volumes, with high mean Dice similarity scores of 98.3%, 97.2%, 96.5%, 96.3%, and 95.9% for SSAT, DSAT, IPAT, RPAT, and PSAT, respectively.
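The Dice similarity score reported above quantifies voxel overlap between a predicted mask and its ground truth. A minimal sketch on toy data follows; representing binary masks as sets of voxel indices is an illustrative assumption, chosen only to keep the example self-contained.

```python
def dice_score(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).

    pred and truth are binary segmentation masks represented here as
    sets of voxel indices (an illustrative simplification; in practice
    masks are 3D arrays).
    """
    intersection = len(pred & truth)
    return 2 * intersection / (len(pred) + len(truth))

# Toy masks: 3 of 4 voxels overlap
predicted = {1, 2, 3, 4}
ground_truth = {2, 3, 4, 5}
print(dice_score(predicted, ground_truth))  # 0.75
```

A Dice score of 1.0 indicates perfect overlap, so the 95.9-98.3% values in the results correspond to near-complete agreement with the manual ground truth.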
Conclusion: Convolutional neural networks can subdivide abdominal SAT into SSAT and DSAT, and abdominal IAT into IPAT, RPAT, and PSAT, with high accuracy. The presented method has the potential to significantly advance the fields of obesity imaging and precision medicine.