3D segmentation

  • Article type: Journal Article
    3D visualization and segmentation are increasingly widely used in the physical, biological and medical sciences, facilitating advanced investigative methodologies. However, the integration and reproduction of segmented volumes or results across the spectrum of mainstream 3D visualization platforms remain hindered by compatibility constraints. These barriers not only challenge the replication of findings but also obstruct the process of cross-validating the accuracy of 3D visualization outputs. To address this gap, we developed an innovative revisualization method implemented within the open-source framework of Drishti, a 3D visualization program. Leveraging four animal samples alongside three mainstream 3D visualization platforms as case studies, our method demonstrates the seamless transferability of segmented results into Drishti. This capability effectively fosters a new avenue for authentication and enhanced scrutiny of segmented data. By facilitating this interoperability, our approach underscores the potential for significant advancements in accuracy validation and collaborative research efforts across diverse scientific domains.
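    As a rough illustration of the export step such a revisualization workflow relies on, the sketch below (not the authors' code) converts a labeled segmentation volume into a headerless 8-bit raw file, a generic format that volume renderers such as Drishti can typically import; the file names, array layout, and label-to-grey mapping are assumptions.

```python
import numpy as np

# Hypothetical labeled segmentation volume (z, y, x) exported from another
# platform, stored here as a NumPy array of integer material labels.
labels = np.load("segmented_volume.npy")          # assumed file name

# Map label IDs to 8-bit grey values so each segmented structure becomes a
# distinct intensity band in the revisualized volume.
ids = np.unique(labels)
lut = {v: int(i * 255 / max(len(ids) - 1, 1)) for i, v in enumerate(ids)}
volume8 = np.vectorize(lut.get)(labels).astype(np.uint8)

# Write as a headerless raw volume; the grid size (x, y, z) must be supplied
# to the importing software when loading.
volume8.tofile("segmented_volume.raw")
print("raw volume written, dimensions (x, y, z):", volume8.shape[::-1])
```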

  • Article type: Journal Article
    In the context of LiDAR sensor-based autonomous vehicles, segmentation networks play a crucial role in accurately identifying and classifying objects. However, discrepancies between the types of LiDAR sensors used for training the network and those deployed in real-world driving environments can lead to performance degradation due to differences in the input tensor attributes, such as x, y, and z coordinates, and intensity. To address this issue, we propose novel intensity rendering and data interpolation techniques. Our study evaluates the effectiveness of these methods by applying them to object tracking in real-world scenarios. The proposed solutions aim to harmonize the differences between sensor data, thereby enhancing the performance and reliability of deep learning networks for autonomous vehicle perception systems. Additionally, our algorithms prevent performance degradation, even when different types of sensors are used for the training data and real-world applications. This approach allows for the use of publicly available open datasets without the need to spend extensive time on dataset construction and annotation using the actual sensors deployed, thus significantly saving time and resources. When applying the proposed methods, we observed an approximate 20% improvement in mIoU performance compared to scenarios without these enhancements.
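    The following minimal sketch is not the authors' implementation; it only illustrates the two ideas named in the abstract with simple stand-ins: intensity rendering approximated by histogram matching against a reference sensor, and data interpolation approximated by inserting midpoint rings between neighbouring beams. The (x, y, z, intensity) layout and ring indices are assumptions.

```python
import numpy as np

def match_intensity(src_int, ref_int, bins=256):
    """Map source intensities onto the reference sensor's distribution
    (simple histogram matching, a crude stand-in for intensity rendering)."""
    q = np.linspace(0, 1, bins)
    return np.interp(src_int, np.quantile(src_int, q), np.quantile(ref_int, q))

def densify_rings(points, ring_ids):
    """Insert midpoints between consecutive rings to approximate a sensor
    with more beams (crude linear interpolation along the vertical axis)."""
    new_pts = []
    for r in np.unique(ring_ids)[:-1]:
        a, b = points[ring_ids == r], points[ring_ids == r + 1]
        n = min(len(a), len(b))
        if n:
            new_pts.append(0.5 * (a[:n] + b[:n]))   # midpoint ring
    return np.vstack([points] + new_pts) if new_pts else points

# Example with synthetic (x, y, z, intensity) points and ring indices.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(1000, 4))
rings = rng.integers(0, 32, size=1000)
cloud[:, 3] = match_intensity(cloud[:, 3], rng.normal(2.0, 0.5, 5000))
dense = densify_rings(cloud, rings)
print(cloud.shape, "->", dense.shape)
```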

  • Article type: Journal Article
    OBJECTIVE: Our study investigates the potential benefits of incorporating prior anatomical knowledge into a deep learning (DL) method designed for the automated segmentation of lung lobes in chest CT scans.
    METHODS: We introduce an automated DL-based approach that leverages anatomical information from the lung's vascular system to guide and enhance the segmentation process. This involves utilizing a lung vessel connectivity (LVC) map, which encodes relevant lung vessel anatomical data. Our study explores the performance of three different neural network architectures within the nnU-Net framework: a standalone U-Net, a multitasking U-Net, and a cascade U-Net.
    RESULTS: Experimental findings suggest that the inclusion of LVC information in the DL model can lead to improved segmentation accuracy, particularly in the challenging boundary regions of expiration chest CT volumes. Furthermore, our study demonstrates the potential for LVC to enhance the model's generalization capabilities. Finally, the method's robustness is evaluated through the segmentation of lung lobes in 10 cases of COVID-19, demonstrating its applicability in the presence of pulmonary diseases.
    CONCLUSIONS: Incorporating prior anatomical information, such as LVC, into the DL model shows promise for enhancing segmentation performance, particularly in the boundary regions. However, the extent of this improvement has limitations, prompting further exploration of its practical applicability.
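    A minimal sketch of the general idea, assuming the LVC map is supplied as an extra input channel: the study itself evaluates nnU-Net-based architectures, whereas the toy PyTorch block below is purely illustrative (channel counts, class count, and layer choices are assumptions).

```python
import torch
import torch.nn as nn

class TinySeg3D(nn.Module):
    """Toy 3D segmentation head taking CT plus an LVC map as a 2-channel input.
    Purely illustrative; the study uses nnU-Net-based architectures."""
    def __init__(self, n_classes=6):            # 5 lobes + background (assumed)
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, n_classes, 1),
        )

    def forward(self, ct, lvc):
        x = torch.cat([ct, lvc], dim=1)          # stack the anatomy prior as a channel
        return self.body(x)

ct = torch.randn(1, 1, 32, 64, 64)               # (batch, channel, D, H, W)
lvc = torch.rand(1, 1, 32, 64, 64)               # vessel-connectivity prior in [0, 1]
logits = TinySeg3D()(ct, lvc)
print(logits.shape)                              # torch.Size([1, 6, 32, 64, 64])
```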

  • Article type: Journal Article
    The paranasal sinuses, a bilaterally symmetrical system of eight air-filled cavities, represent one of the most complex parts of the equine body. This study aimed to extract morphometric measures from computed tomography (CT) images of the equine head and to implement a clustering analysis for the computer-aided identification of age-related variations. Heads of 18 cadaver horses, aged 2-25 years, were CT-imaged and segmented to extract their volume, surface area, and relative density from the frontal sinus (FS), dorsal conchal sinus (DCS), ventral conchal sinus (VCS), rostral maxillary sinus (RMS), caudal maxillary sinus (CMS), sphenoid sinus (SS), palatine sinus (PS), and middle conchal sinus (MCS). Data were grouped into young, middle-aged, and old horse groups and clustered using the K-means clustering algorithm. Morphometric measurements varied according to the sinus position and age of the horses but not the body side. The volume and surface area of the VCS, RMS, and CMS increased with the age of the horses. With accuracy values of 0.72 for RMS, 0.67 for CMS, and 0.31 for VCS, the possibility of the age-related clustering of CT-based 3D images of equine paranasal sinuses was confirmed for RMS and CMS but disproved for VCS.
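    As a sketch of the clustering step described above (not the study's code), the snippet below standardizes hypothetical morphometric features and applies K-means with three clusters corresponding to the three age groups; the synthetic data, feature columns, and the naive agreement measure are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical morphometric table: one row per horse for a single sinus, with
# volume, surface area and relative density as columns (assumed layout).
rng = np.random.default_rng(1)
features = rng.normal(size=(18, 3))              # 18 horses x 3 measures
age_group = rng.integers(0, 3, size=18)          # 0 = young, 1 = middle-aged, 2 = old

X = StandardScaler().fit_transform(features)     # scale before K-means
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Crude agreement with the age grouping (the study computes an accuracy per
# sinus; the label-permutation matching is omitted here for brevity).
print("cluster labels:", labels)
print("agreement with age groups:", np.mean(labels == age_group))
```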

  • Article type: Journal Article
    OBJECTIVE: Many types of congenital heart disease are amenable to surgical repair or palliation. The procedures are often challenging and require specific surgical training, with limited real-life exposure and often costly simulation options. Our objective was to create realistic and affordable 3D simulation models of the heart and vessels to improve training.
    METHODS: We created moulded vessel models using several materials, to identify the material that best replicated human vascular tissue. This material was then used to make more vessels to train residents in cannulation procedures. Magnetic resonance imaging views of a 23-month-old patient with double-outlet right ventricle were segmented using free open-source software. Re-usable moulds produced by 3D printing served to create a silicone model of the heart, with the same material as the vessels, which was used by a heart surgeon to simulate a Rastelli procedure.
    RESULTS: The best material was a soft elastic silicone (Shore A hardness 8). Training on the vessel models decreased the residents' procedural time and improved their grades on a performance rating scale. The surgeon evaluated the moulded heart model as realistic and was able to perform the Rastelli procedure on it. Even though the valves were poorly represented, it was found to be useful for preintervention training.
    CONCLUSIONS: By using free segmentation software, a relatively low-cost silicone and a technique based on re-usable moulds, the cost of obtaining heart models suitable for training in congenital heart defect surgery can be substantially decreased.

  • Article type: Journal Article
    The rapid evolution of 3D technology in recent years has brought about significant change in the field of agriculture, including precision livestock management. From 3D geometry information, the weight and characteristics of body parts of Korean cattle can be analyzed to improve cow growth. In this paper, a system of cameras is built to synchronously capture 3D data and then reconstruct a 3D mesh representation. In general, to reconstruct non-rigid objects, a system of cameras is synchronized and calibrated, and then the data of each camera are transformed to global coordinates. However, when reconstructing cattle in a real environment, difficulties including fences and the vibration of cameras can lead to the failure of the reconstruction process. A new scheme is proposed that automatically removes environmental fences and noise. An optimization method is proposed that interweaves camera pose updates, with the distance between each camera pose and its initial position added as part of the objective function. The difference between the cameras' point clouds and the mesh output is reduced from 7.5 mm to 5.5 mm. The experimental results showed that our scheme can automatically generate a high-quality mesh in a real environment. This scheme provides data that can be used for other research on Korean cattle.
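    The regularized objective described above might look roughly like the toy example below, which is not the authors' implementation: only a translation is optimized, the data term is a nearest-neighbour point distance, and the weight of the pose-deviation term is an arbitrary assumption.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
reference = rng.normal(size=(500, 3))                  # points from the fused mesh
true_t = np.array([0.05, -0.02, 0.01])                 # simulated camera drift
observed = reference[:300] + true_t + rng.normal(scale=0.002, size=(300, 3))

t0 = np.zeros(3)                                       # initial (calibrated) translation
tree = cKDTree(reference)
lam = 10.0                                             # weight of the pose-deviation term (assumed)

def objective(t):
    # Data term: mean nearest-neighbour distance after applying the translation.
    d, _ = tree.query(observed - t)
    # Regularizer: keep the pose close to its initial calibrated position.
    return d.mean() + lam * np.sum((t - t0) ** 2)

res = minimize(objective, t0, method="Nelder-Mead")
print("estimated translation:", np.round(res.x, 3))
```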

  • Article type: Journal Article
    BACKGROUND: Spatial mapping of transcriptional states provides valuable biological insights into cellular functions and interactions in the context of the tissue. Accurate 3D cell segmentation is a critical step in the analysis of this data towards understanding diseases and normal development in situ. Current approaches designed to automate 3D segmentation include stitching masks along one dimension, training a 3D neural network architecture from scratch, and reconstructing a 3D volume from 2D segmentations on all dimensions. However, the applicability of existing methods is hampered by inaccurate segmentations along the non-stitching dimensions, the lack of high-quality diverse 3D training data, and inhomogeneity of image resolution along orthogonal directions due to acquisition constraints; as a result, they have not been widely used in practice.
    METHODS: To address these challenges, we formulate the problem of finding cell correspondence across layers with a novel optimal transport (OT) approach. We propose CellStitch, a flexible pipeline that segments cells from 3D images without requiring large amounts of 3D training data. We further extend our method to interpolate internal slices from highly anisotropic cell images to recover isotropic cell morphology.
    RESULTS: We evaluated the performance of CellStitch on eight 3D plant microscopy datasets with diverse anisotropy levels and cell shapes. CellStitch substantially outperforms state-of-the-art methods on anisotropic images and achieves segmentation quality comparable to competing methods in the isotropic setting. We benchmarked and reported the 3D segmentation results of all methods using instance-level precision, recall, and average precision (AP) metrics.
    CONCLUSIONS: The proposed OT-based 3D segmentation pipeline outperformed existing state-of-the-art methods on different datasets with nonzero anisotropy, providing high-fidelity recovery of 3D cell morphology from microscopic images.
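    A minimal sketch of cross-slice cell matching in the spirit of CellStitch: the paper formulates the correspondence as optimal transport, whereas the stand-in below uses a 1 - IoU cost matrix with a Hungarian assignment, which is a simplification; the labels and masks are synthetic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_labels(slice_a, slice_b):
    """Match cell labels between two adjacent 2D label masks using pixel
    overlap. CellStitch formulates this as optimal transport; a Hungarian
    assignment on a 1 - IoU cost is used here as a simpler stand-in."""
    ids_a = [i for i in np.unique(slice_a) if i != 0]
    ids_b = [j for j in np.unique(slice_b) if j != 0]
    cost = np.zeros((len(ids_a), len(ids_b)))
    for ai, a in enumerate(ids_a):
        for bi, b in enumerate(ids_b):
            inter = np.logical_and(slice_a == a, slice_b == b).sum()
            union = np.logical_or(slice_a == a, slice_b == b).sum()
            cost[ai, bi] = 1.0 - inter / union          # 1 - IoU
    rows, cols = linear_sum_assignment(cost)
    return {ids_a[r]: ids_b[c] for r, c in zip(rows, cols) if cost[r, c] < 1.0}

# Tiny example: a cell labelled 1 in one slice overlaps label 7 in the next.
a = np.zeros((8, 8), int); a[2:5, 2:5] = 1
b = np.zeros((8, 8), int); b[3:6, 2:5] = 7
print(match_labels(a, b))                               # {1: 7}
```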

  • Article type: Observational Study
    BACKGROUND: An accurate identification of mandibular asymmetries is required by modern orthodontics and orthognathic surgery to improve the diagnosis and treatment planning of such deformities. Although craniofacial deformities are very frequent pathologies, some types of asymmetries can be very difficult to assess without the proper diagnostic tools. The purpose of this study was to use three-dimensional (3D) segmentation procedures to identify asymmetries at the mandibular level in adult patients with different vertical and sagittal patterns, in whom the asymmetries could go unnoticed at the observational level.
    METHODS: The study sample comprised 60 adult patients (33 women and 27 men, aged between 18 and 60 years). Subjects were divided into 3 sagittal and vertical skeletal groups. CBCT images were segmented, mirrored, and voxel-based registered with reference landmarks using ITK-SNAP® and 3DSlicer® software. 3D surface models were constructed to evaluate the degree of asymmetry at different anatomical levels.
    RESULTS: There was a degree of asymmetry, with the left hemimandible tending to contain the right one (0.123 ± 0.270 mm; CI95% 0.036-0.222; p < 0.001). Although the subjects under study did not present significant differences between mandibular asymmetries and their sagittal or vertical skeletal pattern (p = 0.809 and p = 0.453, respectively), a statistically significant difference was found depending on the anatomical region (p < 0.001; CI95% = 1.020-1.021), with the asymmetry being highest in the condyle, followed by the ramus and the corpus.
    CONCLUSIONS: Although mandibular asymmetries cannot be correlated with vertical and sagittal skeletal patterns in symmetric patients, knowledge about 3D segmentation procedures and color maps can provide valuable information to identify mandibular asymmetries.
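    The mirror-and-register comparison described in the methods could be prototyped roughly as below; this is a hedged sketch using SimpleITK rather than the ITK-SNAP®/3DSlicer® workflow used in the study, and the file names, flip axis, and registration settings are assumptions.

```python
import SimpleITK as sitk

# Placeholder path; the study segments CBCT volumes before this kind of
# mirror-and-register comparison.
mandible = sitk.ReadImage("mandible_segmentation.nii.gz", sitk.sitkFloat32)

# Mirror across the sagittal axis (axis order x, y, z assumed).
mirrored = sitk.Flip(mandible, [True, False, False])
mirrored.SetDirection(mandible.GetDirection())          # keep both halves in a comparable frame (simplification)

# Rigid, voxel-based registration of the mirrored half onto the original.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        mandible, mirrored, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY,
    )
)
reg.SetInterpolator(sitk.sitkLinear)
transform = reg.Execute(mandible, mirrored)

# Resample and subtract to obtain a simple left-right difference map.
aligned = sitk.Resample(mirrored, mandible, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(sitk.Abs(mandible - aligned), "asymmetry_map.nii.gz")
```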

  • Article type: Journal Article
    We present a very rare case of a child with nine supernumerary teeth to analyze the potential, benefits, and limitations of artificial intelligence, as well as two commercial tools for tooth segmentation. Artificial intelligence (AI) is increasingly finding applications in dentistry today, particularly in radiography. Special attention is given to models based on convolutional neural networks (CNN) and their application in automatic segmentation of the oral cavity and tooth structures. The integration of AI is gaining increasing attention, and the automation of the detection and localization of supernumerary teeth can accelerate the treatment planning process. Despite advancements in 3D segmentation techniques, relying on trained professionals remains crucial. Therefore, human expertise should remain key, and AI should be seen as a support rather than a replacement. Generally, a comprehensive tool that can satisfy all clinical needs in terms of supernumerary teeth and their segmentation is not yet available, so it is necessary to incorporate multiple tools into practice.

  • Article type: Journal Article
    BACKGROUND: Intervertebral disc herniation, degenerative lumbar spinal stenosis, and other lumbar spine diseases can occur across most age groups. MRI is the most commonly used method for detecting lumbar spine lesions owing to its good soft-tissue image resolution. However, diagnostic accuracy is highly dependent on the experience of the diagnostician, leading to subjective errors, discrepancies in diagnostic criteria across multi-center studies at different hospitals, and inefficient diagnosis. These factors necessitate the standardized interpretation and automated classification of lumbar spine MRI to achieve objective consistency. In this research, a deep learning network based on SAFNet is proposed to address these challenges.
    METHODS: In this research, low-level, mid-level, and high-level features of spine MRI are extracted. ASPP is used to process the high-level features. A multi-scale feature fusion method is used to increase the scene perception ability of the low-level and mid-level features. The high-level features are further processed using global adaptive pooling and a Sigmoid function to obtain new high-level features. The processed high-level features are then point-multiplied with the mid-level and low-level features to obtain new high-level features. The new high-level, low-level, and mid-level features are all sampled to the same size and concatenated in the channel dimension to output the final result.
    RESULTS: The DSCs of SAFNet for segmenting 17 vertebral structures across 5 folds are 79.46 ± 4.63%, 78.82 ± 7.97%, 81.32 ± 3.45%, 80.56 ± 5.47%, and 80.83 ± 3.48%, with an average DSC of 80.32 ± 5.00%. Compared to existing methods, our SAFNet provides better segmentation results and has important implications for the diagnosis of spinal and lumbar diseases.
    CONCLUSIONS: This research proposes SAFNet, a highly accurate and robust spine segmentation deep learning network capable of providing effective anatomical segmentation for diagnostic purposes. The results demonstrate the effectiveness of the proposed method and its potential for improving radiological diagnosis accuracy.
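    A rough PyTorch sketch of the feature-fusion scheme described above (not the authors' SAFNet code): ASPP on the high-level features, a global-pooling + Sigmoid gate re-weighting the lower levels, then upsampling and channel concatenation; the channel sizes and class count (17 structures + background) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Minimal atrous spatial pyramid pooling block (illustrative rates)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(c_in, c_out, 3, padding=d, dilation=d) for d in (1, 6, 12)]
        )
        self.project = nn.Conv2d(3 * c_out, c_out, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class FusionHead(nn.Module):
    """Sketch of the fusion described in the abstract: ASPP on high-level
    features, a global-pooling + Sigmoid gate re-weighting lower levels,
    then upsampling and channel concatenation. Channel sizes are assumptions."""
    def __init__(self, c_low=64, c_mid=128, c_high=256, n_classes=18):
        super().__init__()
        self.aspp = ASPP(c_high, c_mid)
        self.low_proj = nn.Conv2d(c_low, c_mid, 1)
        self.head = nn.Conv2d(3 * c_mid, n_classes, 1)

    def forward(self, low, mid, high):
        high = self.aspp(high)
        gate = torch.sigmoid(F.adaptive_avg_pool2d(high, 1))    # channel gate
        low = self.low_proj(low)
        mid, low = mid * gate, low * gate                       # re-weight lower levels
        size = low.shape[-2:]
        feats = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
                 for f in (low, mid, high)]
        return self.head(torch.cat(feats, dim=1))

low = torch.randn(1, 64, 64, 64)
mid = torch.randn(1, 128, 32, 32)
high = torch.randn(1, 256, 16, 16)
print(FusionHead()(low, mid, high).shape)    # torch.Size([1, 18, 64, 64])
```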
