medical image

  • Article Type: Journal Article
    Colorectal cancer (CRC) is a common malignant tumor that seriously threatens human health. Because CRC lesions have indistinct boundaries, accurate identification remains a formidable challenge. With the widespread adoption of convolutional neural networks (CNNs) in image processing, using CNNs for automatic classification and segmentation holds great potential for improving the efficiency of CRC recognition and reducing treatment costs. This paper discusses the need for applying CNNs in the clinical diagnosis of CRC, provides a detailed overview of research advances in CNNs and their improved variants for CRC classification and segmentation, summarizes common ideas and methods for optimizing network performance, and discusses the challenges CNNs face and future development trends in their application to CRC classification and segmentation, with the aim of promoting their use in clinical diagnosis.

  • Article Type: Journal Article
    This paper introduces the efficient medical-images-aimed segment anything model (EMedSAM), addressing the high computational demands and limited adaptability of using SAM for medical image segmentation tasks. We present a novel, compact image encoder, DD-TinyViT, designed to enhance segmentation efficiency through an innovative parameter tuning method called med-adapter. The lightweight DD-TinyViT encoder is derived from the well-known ViT-H using a decoupled distillation approach. The segmentation and recognition capabilities of EMedSAM for specific structures are improved by med-adapter, which dynamically adjusts the model parameters specifically for medical imaging. We conducted extensive testing on EMedSAM using the public FLARE 2022 dataset and datasets from the First Hospital of Zhejiang University School of Medicine. The results demonstrate that our model outperforms existing state-of-the-art models in both multi-organ and lung segmentation tasks.
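The abstract does not spell out med-adapter's internal structure. A common adapter design in the parameter-efficient tuning literature is a residual bottleneck added to a frozen encoder; a minimal numpy sketch of that general pattern follows (names and dimensions are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class BottleneckAdapter:
    """Illustrative adapter: down-project, nonlinearity, up-project,
    added residually to a frozen feature map. This is a generic sketch,
    not EMedSAM's actual med-adapter."""
    def __init__(self, dim, bottleneck, rng=None):
        rng = rng or np.random.default_rng(0)
        self.down = rng.normal(0.0, 0.02, (dim, bottleneck))
        self.up = np.zeros((bottleneck, dim))  # zero-init: adapter starts as identity

    def __call__(self, features):
        # Residual connection keeps the frozen encoder's output intact at init.
        return features + relu(features @ self.down) @ self.up

feats = np.random.default_rng(1).normal(size=(4, 64))
adapter = BottleneckAdapter(dim=64, bottleneck=8)
out = adapter(feats)
```

With the up-projection initialized to zero, the adapter is exactly the identity at the start of tuning, so only the small `down`/`up` matrices need training while the encoder stays frozen.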

  • Article Type: Journal Article
    BACKGROUND: Multi-organ segmentation is a critical task in medical imaging, with wide-ranging applications in both clinical practice and research. Accurate delineation of organs from high-resolution 3D medical images, such as CT scans, is essential for radiation therapy planning, enhancing treatment outcomes, and minimizing radiation toxicity risks. Additionally, it plays a pivotal role in quantitative image analysis, supporting various medical research studies. Despite its significance, manual segmentation of multiple organs from 3D images is labor-intensive and prone to low reproducibility due to high interoperator variability. Recent advancements in deep learning have led to several automated segmentation methods, yet many rely heavily on labeled data and human anatomy expertise.
    OBJECTIVE: In this study, our primary objective is to address the limitations of existing semi-supervised learning (SSL) methods for abdominal multi-organ segmentation. We aim to introduce a novel SSL approach that leverages unlabeled data to enhance the performance of deep neural networks in segmenting abdominal organs. Specifically, we propose a method that incorporates a redrawing network into the segmentation process to correct errors and improve accuracy.
    METHODS: Our proposed method comprises three interconnected neural networks: a segmentation network for image segmentation, a teacher network for consistency regularization, and a redrawing network for object redrawing. During training, the segmentation network undergoes two rounds of optimization: basic training and readjustment. We adopt the Mean-Teacher model as our baseline SSL approach, utilizing labeled and unlabeled data. However, recognizing significant errors in abdominal multi-organ segmentation using this method alone, we introduce the redrawing network to generate redrawn images based on CT scans, preserving original anatomical information. Our approach is grounded in the generative process hypothesis, encompassing segmentation, drawing, and assembling stages. Correct segmentation is crucial for generating accurate images. In the basic training phase, the segmentation network is trained using both labeled and unlabeled data, incorporating consistency learning to ensure consistent predictions before and after perturbations. The readjustment phase focuses on reducing segmentation errors by optimizing the segmentation network parameters based on the differences between redrawn and original CT images.
    RESULTS: We evaluated our method using two publicly available datasets: the beyond the cranial vault (BTCV) segmentation dataset (training: 44, validation: 6) and the abdominal multi-organ segmentation (AMOS) challenge 2022 dataset (training: 138, validation: 16). Our results were compared with state-of-the-art SSL methods, including MT and dual-task consistency (DTC), using the Dice similarity coefficient (DSC) as an accuracy metric. On both datasets, our proposed SSL method consistently outperformed other methods, including supervised learning, achieving superior segmentation performance for various abdominal organs. These findings demonstrate the effectiveness of our approach, even with a limited amount of labeled data.
    CONCLUSIONS: Our novel semi-supervised learning approach for abdominal multi-organ segmentation addresses the challenges associated with this task. By integrating a redrawing network and leveraging unlabeled data, we achieve remarkable improvements in accuracy. Our method demonstrates superior performance compared to existing SSL and supervised learning methods. This approach holds great promise in enhancing the precision and efficiency of multi-organ segmentation in medical imaging applications.
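The Mean-Teacher baseline described in METHODS can be sketched numerically: the teacher's weights track an exponential moving average (EMA) of the student's, and consistency learning penalizes disagreement between predictions before and after perturbation. A minimal numpy illustration (the MSE consistency form and the shapes are common choices, not details confirmed by the paper):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(student_logits, teacher_logits):
    # MSE between softened predictions: a standard Mean-Teacher consistency term.
    return np.mean((softmax(student_logits) - softmax(teacher_logits)) ** 2)

def ema_update(teacher_w, student_w, alpha=0.99):
    # Teacher weights are an exponential moving average of the student's.
    return alpha * teacher_w + (1 - alpha) * student_w

rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 5, 4))                 # (batch, voxels, classes)
noisy = logits + rng.normal(0, 0.1, logits.shape)   # perturbed prediction
loss = consistency_loss(noisy, logits)

tw, sw = np.ones(3), np.full(3, 2.0)
tw = ema_update(tw, sw, alpha=0.9)  # -> [1.1, 1.1, 1.1]
```

The loss vanishes when student and teacher agree, which is what pushes the segmentation network toward perturbation-invariant predictions on unlabeled scans.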

  • Article Type: Journal Article
    In clinical practice, the anatomical classification of pulmonary veins plays a crucial role in the preoperative assessment of atrial fibrillation radiofrequency ablation surgery. Accurate classification of pulmonary vein anatomy assists physicians in selecting appropriate mapping electrodes and avoids causing pulmonary arterial hypertension. Due to the diverse and subtly different anatomical classifications of pulmonary veins, as well as the imbalance in data distribution, deep learning models often exhibit poor expression capability in extracting deep features, leading to misjudgments and affecting classification accuracy. Therefore, in order to solve the problem of unbalanced classification of left atrial pulmonary veins, this paper proposes a network integrating multi-scale feature-enhanced attention and dual-feature extraction classifiers, called DECNet. The multi-scale feature-enhanced attention utilizes multi-scale information to guide the reinforcement of deep features, generating channel weights and spatial weights to enhance the expression capability of deep features. The dual-feature extraction classifier assigns a fixed number of channels to each category, equally evaluating all categories, thus alleviating the learning bias and overfitting caused by data imbalance. By combining the two, the expression capability of deep features is strengthened, achieving accurate classification of left atrial pulmonary vein morphology and providing support for subsequent clinical treatment. The proposed method is evaluated on datasets provided by the People's Hospital of Liaoning Province and the publicly available DermaMNIST dataset, achieving average accuracies of 78.81% and 83.44%, respectively, demonstrating the effectiveness of the proposed approach.
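The dual-feature extraction classifier is described as dedicating a fixed number of channels to each category so that all classes are evaluated equally. One hypothetical way to realize that idea, sketched in numpy (the grouping and mean-pooling choices here are assumptions for illustration, not DECNet's actual layers):

```python
import numpy as np

def grouped_logits(features, n_classes, per_class):
    """Hypothetical sketch of a per-class channel-group classifier:
    each class's logit is pooled only from its own dedicated channels,
    so majority classes cannot crowd out minority ones.
    features: (batch, n_classes * per_class)."""
    b = features.shape[0]
    groups = features.reshape(b, n_classes, per_class)
    return groups.mean(axis=2)  # one logit per class

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4 * 16))  # 4 classes, 16 channels each
logits = grouped_logits(feats, n_classes=4, per_class=16)
```

Because every class owns the same number of channels, the classifier's capacity is split evenly across categories regardless of how imbalanced the training data is.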

  • Article Type: Journal Article
    The application of magnetic resonance imaging (MRI) in the classification of brain tumors is constrained by the complex and time-consuming nature of traditional diagnostic procedures, mainly because of the need for a thorough assessment across several regions. Nevertheless, advancements in deep learning (DL) have facilitated the development of automated systems that improve the identification and assessment of medical images, effectively addressing these difficulties. Convolutional neural networks (CNNs) have emerged as steadfast tools for image classification and visual perception. This study introduces an innovative approach that combines CNNs with a hybrid attention mechanism to classify primary brain tumors, including glioma, meningioma, pituitary, and no-tumor cases. The proposed algorithm was rigorously tested with benchmark data from well-documented sources in the literature and evaluated alongside established pre-trained models such as Xception, ResNet50V2, DenseNet201, ResNet101V2, and DenseNet169. The performance metrics of the proposed method were remarkable, demonstrating classification accuracy of 98.33%, precision and recall of 98.30%, and an F1-score of 98.20%. The experimental findings highlight the superior performance of the new approach in identifying the most frequent types of brain tumors. Furthermore, the method shows excellent generalization capabilities, making it an invaluable tool for healthcare in diagnosing brain conditions accurately and efficiently.

  • Article Type: Journal Article
    In the field of dentistry, the presence of dental calculus is a commonly encountered issue. If not addressed promptly, it has the potential to lead to gum inflammation and eventual tooth loss. Bitewing (BW) images play a crucial role by providing a comprehensive visual representation of the tooth structure, allowing dentists to examine hard-to-reach areas with precision during clinical assessments. This visual aid significantly supports the early detection of calculus, facilitating timely interventions and improving overall outcomes for patients. This study introduces a system designed for the detection of dental calculus in BW images, leveraging the power of YOLOv8 to identify individual teeth accurately. This system boasts an impressive precision rate of 97.48%, a recall (sensitivity) of 96.81%, and a specificity rate of 98.25%. Furthermore, this study introduces a novel approach to enhancing interdental edges through an advanced image-enhancement algorithm. This algorithm combines the use of a median filter and a bilateral filter to refine the accuracy of convolutional neural networks in classifying dental calculus. Before image enhancement, the accuracy achieved using GoogLeNet stands at 75.00%, which significantly improves to 96.11% post-enhancement. These results hold the potential for streamlining dental consultations, enhancing the overall efficiency of dental services.
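The enhancement stage combines a median filter (impulse-noise removal) with a bilateral filter (edge-preserving smoothing). A naive numpy sketch of both operations follows; kernel sizes and sigmas are illustrative, and production code would use an optimized library implementation:

```python
import numpy as np

def median_filter(img, k=3):
    # Simple k x k median filter with edge padding (removes impulse noise).
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def bilateral_filter(img, k=3, sigma_s=1.0, sigma_r=0.1):
    # Naive bilateral filter: weights combine spatial and intensity distance,
    # smoothing flat regions while keeping edges (e.g. interdental edges) sharp.
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img, dtype=float)
    ys, xs = np.mgrid[-p:p + 1, -p:p + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k]
            intensity = np.exp(-((win - img[i, j]) ** 2) / (2 * sigma_r**2))
            wgt = spatial * intensity
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out

noisy = np.zeros((8, 8)); noisy[4, 4] = 1.0   # a single impulse
clean = bilateral_filter(median_filter(noisy))
```

Running the median filter first wipes out the isolated impulse (its 3x3 neighborhood is mostly zeros), and the bilateral pass then smooths without blurring genuine intensity edges.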

  • Article Type: Journal Article
    BACKGROUND: Soft tissue sarcomas, similar in incidence to cervical and esophageal cancers, arise from various soft tissues such as smooth muscle, fat, and fibrous tissue. Effective segmentation of sarcomas in imaging is crucial for accurate diagnosis.
    METHODS: This study collected multi-modal MRI images from 45 patients with thigh soft tissue sarcoma, totaling 8,640 images. These images were annotated by clinicians to delineate the sarcoma regions, creating a comprehensive dataset. We developed a novel segmentation model based on the UNet framework, enhanced with residual networks and attention mechanisms for improved modality-specific information extraction. Additionally, self-supervised learning strategies were employed to optimize the feature extraction capabilities of the encoders.
    RESULTS: The new model demonstrated superior segmentation performance when using multi-modal MRI images compared to single-modal inputs. The effectiveness of the model in utilizing the created dataset was validated through various experimental setups, confirming the enhanced ability to characterize tumor regions across different modalities.
    CONCLUSIONS: The integration of multi-modal MRI images and advanced machine learning techniques in our model significantly improves the segmentation of soft tissue sarcomas in thigh imaging. This advancement aids clinicians in better diagnosing and understanding the patient's condition, leveraging the strengths of different imaging modalities. Further studies could explore the application of these techniques to other types of soft tissue sarcomas and additional anatomical sites.

  • Article Type: Journal Article
    BACKGROUND: Oral tumors necessitate a dependable computer-assisted pathological diagnosis system, considering their rarity and diversity. A content-based image retrieval (CBIR) system using deep neural networks has been successfully devised for digital pathology, but no CBIR system for oral pathology has been investigated because of the lack of an extensive image database and of feature extractors tailored to the field.
    METHODS: This study uses a large CBIR database constructed from 30 categories of oral tumors to compare deep learning methods as feature extractors.
    RESULTS: The highest average area under the receiver operating characteristic curve (AUC) was achieved by models trained on database images using self-supervised learning (SSL) methods (0.900 with SimCLR and 0.897 with TiCo). The generalizability of the models was validated using query images from the same cases taken with smartphones. When smartphone images were tested as queries, both models yielded the highest mean AUC (0.871 with SimCLR and 0.857 with TiCo). We ensured the retrieved image result would be easily observed by evaluating the top 10 mean accuracies and checking for an exact diagnostic category and its differential diagnostic categories.
    CONCLUSIONS: Training deep learning models with SSL methods using image data specific to the target site is beneficial for CBIR tasks in oral tumor histology to obtain histologically meaningful results and high performance. This result provides insight into the effective development of a CBIR system to help improve the accuracy and speed of histopathology diagnosis and advance oral tumor research in the future.
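Downstream of whichever feature extractor is chosen (SimCLR, TiCo, or otherwise), the retrieval step of a CBIR system reduces to ranking database embeddings by similarity to the query embedding. A minimal numpy sketch using cosine similarity (the similarity measure and embedding size are common choices assumed here, not confirmed by the paper):

```python
import numpy as np

def retrieve(query, database, k=10):
    """Rank database embeddings by cosine similarity to the query and
    return the indices and scores of the top-k matches."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q
    order = np.argsort(sims)[::-1][:k]
    return order, sims[order]

rng = np.random.default_rng(0)
db = rng.normal(size=(100, 32))           # 100 stored case embeddings
query = db[7] + rng.normal(0, 0.01, 32)   # near-duplicate of stored case 7
idx, sims = retrieve(query, db, k=10)
```

Evaluating whether the correct diagnostic category appears among the top-10 returned cases, as the study does, is then a matter of checking the labels of `idx` against the query's label.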

  • Article Type: Journal Article
    BACKGROUND: Digital devices can easily forge medical images, and copy-move forgery in medical images has led to abuses in areas where access to advanced medical devices is unavailable. Because a forged copy-move image directly affects the doctor's decision, reliable copy-move forgery detection (CMFD) is needed; the method discussed here is an effective approach for detecting medical image forgery.
    METHODS: The proposed method is based on an evolutionary algorithm that can detect fake blocks well. In the first stage, the image is transformed to the signal level with a discrete cosine transform (DCT). It is then prepared for segmentation by applying a discrete wavelet transform (DWT). The low-low band of the DWT, which carries most of the image properties, is divided into blocks. Each block is searched using the equilibrium optimization algorithm; the blocks most likely to be forged are selected, and the final detection image is generated.
    RESULTS: The proposed method was evaluated on three criteria, precision, recall, and F1, obtaining 90.07%, 92.34%, and 91.56%, respectively. It outperforms the methods previously studied on medical images.
    CONCLUSIONS: We conclude that our method for CMFD in medical images is more accurate.
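The core intuition of block-based copy-move detection can be illustrated without the paper's DWT stage or equilibrium-optimizer search: a duplicated region yields identical block DCT signatures, so copied blocks can be found by matching quantized low-frequency coefficients. A simplified sketch (the block size and signature choices are illustrative, and this direct matching stands in for the paper's optimizer-driven search):

```python
import numpy as np
from scipy.fft import dctn

def find_duplicate_blocks(img, bs=8):
    """Match non-overlapping blocks by a quantized low-frequency DCT
    signature; identical (copied) blocks share the same signature."""
    h, w = img.shape
    seen, matches = {}, []
    for i in range(0, h - bs + 1, bs):
        for j in range(0, w - bs + 1, bs):
            feat = dctn(img[i:i + bs, j:j + bs], norm="ortho")
            key = tuple(np.round(feat[:4, :4].ravel(), 3))  # low-freq signature
            if key in seen:
                matches.append((seen[key], (i, j)))
            else:
                seen[key] = (i, j)
    return matches

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
img[16:24, 8:16] = img[0:8, 0:8]  # plant a copy-move forgery
pairs = find_duplicate_blocks(img)  # -> [((0, 0), (16, 8))]
```

Real forgeries are rarely block-aligned, which is why practical systems search overlapping blocks and use robust matching; that search space is what the equilibrium optimization algorithm is reducing in the paper.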

  • Article Type: Journal Article
    Traditionally, constructing training datasets for automatic muscle segmentation from medical images involved skilled operators, leading to high labor costs and limited scalability. To address this issue, we developed a tool that enables efficient annotation by non-experts and assessed its effectiveness for training an automatic segmentation network. Our system allows users to deform a template three-dimensional (3D) anatomical model to fit a target magnetic-resonance image using free-form deformation with independent control points for axial, sagittal, and coronal directions. This method simplifies the annotation process by allowing non-experts to intuitively adjust the model, enabling simultaneous annotation of all muscles in the template. We evaluated the quality of the tool-assisted segmentation performed by non-experts, which achieved a Dice coefficient greater than 0.75 compared to expert segmentation, without significant errors such as mislabeling adjacent muscles or omitting musculature. An automatic segmentation network trained with datasets created using this tool demonstrated performance comparable to or superior to that of networks trained with expert-generated datasets. This innovative tool significantly reduces the time and labor costs associated with dataset creation for automatic muscle segmentation, potentially revolutionizing medical image annotation and accelerating the development of deep learning-based segmentation networks in various clinical applications.
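The Dice coefficient used above to compare non-expert and expert segmentations is straightforward to compute from binary masks:

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks:
    # 2 * |A intersect B| / (|A| + |B|); defined as 1.0 for two empty masks.
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

expert = np.zeros((10, 10), dtype=bool); expert[2:8, 2:8] = True   # 36 px
novice = np.zeros((10, 10), dtype=bool); novice[3:8, 2:8] = True   # 30 px
score = dice(expert, novice)  # 2*30 / (36+30) = 60/66, about 0.909
```

A score above the study's 0.75 threshold, as here, indicates substantial overlap between the two delineations.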