{Reference Type}: Journal Article {Title}: Progressive auto-segmentation for cone-beam computed tomography-based online adaptive radiotherapy. {Author}: Zhao H;Liang X;Meng B;Dohopolski M;Choi B;Cai B;Lin MH;Bai T;Nguyen D;Jiang S; {Journal}: Phys Imaging Radiat Oncol {Volume}: 31 {Issue}: 0 {Year}: 2024 Jul {DOI}: 10.1016/j.phro.2024.100610 {Abstract}: Background and purpose: Accurate and automated segmentation of targets and organs-at-risk (OARs) is crucial for the successful clinical application of online adaptive radiotherapy (ART). Current methods for cone-beam computed tomography (CBCT) auto-segmentation face challenges and often fail to reach clinical acceptability; in particular, they overlook the wealth of information available from initial planning and prior adaptive fractions that could enhance segmentation precision.
Materials and methods: We introduce a novel framework that incorporates data from a patient's initial plan and previous adaptive fractions, harnessing this additional temporal context to refine the segmentation accuracy for the current fraction's CBCT images. We present LSTM-UNet, an architecture that integrates Long Short-Term Memory (LSTM) units into the skip connections of the traditional U-Net framework to retain information from previous fractions. The models were first pre-trained on simulated data and then fine-tuned on a clinical dataset.
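As an illustration, the sketch below shows the stated idea: LSTM-style memory cells placed in the U-Net skip connections, with hidden state carried across fractions so that features from the planning image and earlier adaptive fractions inform the current fraction's CBCT segmentation. The abstract gives no implementation details, so the 2D layout, convolutional LSTM cells, channel widths, and class count are assumptions for illustration only, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell carrying skip-connection features across fractions (assumed variant)."""
    def __init__(self, channels):
        super().__init__()
        # One convolution jointly produces the input, forget, output, and candidate gates.
        self.gates = nn.Conv2d(2 * channels, 4 * channels, kernel_size=3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class LSTMUNet2D(nn.Module):
    """Two-level U-Net with memory cells in the skip connections (hypothetical sizes)."""
    def __init__(self, in_ch=1, n_classes=9, base=16):  # 8 structures + background assumed
        super().__init__()
        self.base = base
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, 2 * base)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(2 * base, 4 * base)
        self.mem1 = ConvLSTMCell(base)        # memory on the shallow skip
        self.mem2 = ConvLSTMCell(2 * base)    # memory on the deep skip
        self.up2 = nn.ConvTranspose2d(4 * base, 2 * base, 2, stride=2)
        self.dec2 = conv_block(4 * base, 2 * base)
        self.up1 = nn.ConvTranspose2d(2 * base, base, 2, stride=2)
        self.dec1 = conv_block(2 * base, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, fractions):
        # fractions: list of (B, C, H, W) tensors ordered from the initial plan /
        # earliest adaptive fraction to the current fraction; H, W divisible by 4.
        B, _, H, W = fractions[0].shape
        dev = fractions[0].device
        s1 = (torch.zeros(B, self.base, H, W, device=dev),) * 2             # (h, c), shallow skip
        s2 = (torch.zeros(B, 2 * self.base, H // 2, W // 2, device=dev),) * 2
        for x in fractions:
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            b = self.bottleneck(self.pool(e2))
            # Skip connections pass through the LSTM cells, so decoder features
            # for the current fraction also reflect the previous fractions.
            m1, s1 = self.mem1(e1, s1)
            m2, s2 = self.mem2(e2, s2)
            d2 = self.dec2(torch.cat([self.up2(b), m2], dim=1))
            d1 = self.dec1(torch.cat([self.up1(d2), m1], dim=1))
            logits = self.head(d1)
        return logits  # logits for the last (current) fraction

# Usage sketch: three prior-fraction images plus the current CBCT, 96x96 crops.
# model = LSTMUNet2D()
# out = model([torch.randn(1, 1, 96, 96) for _ in range(4)])  # out shape: (1, 9, 96, 96)
```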
Results: Our proposed model's segmentation predictions yield an average Dice similarity coefficient of 79% across 8 head-and-neck organs and targets, compared with 52% for a baseline model without prior knowledge and 78% for a baseline model with prior knowledge but no memory.
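For reference, the Dice similarity coefficient quoted above is the standard overlap measure between a predicted segmentation $A$ and the reference contour $B$ (the abstract does not indicate any non-standard variant):

$$\mathrm{DSC}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|}$$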
Conclusions: Our proposed model outperforms baseline segmentation frameworks by effectively using information from prior fractions, reducing the effort clinicians must spend revising auto-segmentation results. Moreover, it complements registration-based methods, which can supply better prior knowledge. Our model holds promise for integration into the online ART workflow, offering precise segmentation on synthetic CT images.