super-resolution

  • Article type: Journal Article
    OBJECTIVE: Low-field (LF) MRI scanners are common in many low- and middle-income countries, but they provide images with worse spatial resolution and contrast than high-field (HF) scanners. Image Quality Transfer (IQT) is a machine learning framework that enhances images based on high-quality references and has recently been adapted to LF MRI. In this study we aim to assess whether it can improve lesion visualisation compared to LF MRI scans in children with epilepsy.
    METHODS: T1-weighted, T2-weighted and FLAIR images were acquired from 12 patients (5 to 18 years old, 7 males) with a clinical diagnosis of intractable epilepsy on a 0.36 T (LF) and a 1.5 T (HF) scanner. LF images were enhanced with IQT. Seven radiologists blindly evaluated the differentiation between normal grey matter (GM) and white matter (WM) and the extension and definition of epileptogenic lesions in LF, HF and IQT-enhanced images.
    RESULTS: When images were evaluated independently, GM-WM differentiation scores of IQT outputs were 26% higher, 17% higher and 12% lower than LF for T1, T2 and FLAIR, respectively. Lesion definition scores were 8-34% lower than LF, but became 3% higher than LF for FLAIR and T1 when images were viewed side by side. Radiologists with expertise at HF scored IQT images higher than those with expertise at LF.
    CONCLUSIONS: IQT generally improved the image quality assessments. Evaluation of pathology on IQT-enhanced images was affected by familiarity with HF/IQT image appearance. These preliminary results suggest that IQT could have an important impact on neuroradiology practice where HF MRI is not available.

  • Article type: Journal Article
    Craniomaxillofacial (CMF) and nasal landmark detection are fundamental components of computer-assisted surgery. Medical landmark detection methods include regression-based and heatmap-based approaches, and heatmap-based methods form one of the main methodological branches. These methods rely on high-resolution (HR) features, which carry more location information, to reduce the network error caused by sub-pixel localization. Previous studies extracted HR patches around each landmark from downsampled images via object detection and then fed them into the network to obtain HR features; such complex multistage pipelines affect accuracy. The network error introduced by the downsampling and upsampling operations used during training, which interpolate low-resolution features to generate HR features or predicted heatmaps, also remains significant. We propose a standard super-resolution landmark detection network (SRLD-Net) and a super-resolution UNet (SR-UNet) to reduce network error effectively. SRLD-Net uses a pyramid pooling block, a pyramid fusion block and a super-resolution fusion block to combine global prior knowledge with multi-scale local features; similarly, SR-UNet adopts a pyramid pooling block and a super-resolution block. These components clearly improve the representation learning ability of the proposed methods. A super-resolution upsampling layer is then used to generate detailed predicted heatmaps. Our proposed networks were compared with state-of-the-art methods on craniomaxillofacial, nasal and mandibular molar datasets, demonstrating better performance. The mean errors of the 18 CMF, 6 nasal and 14 mandibular landmarks are 1.39 ± 1.04, 1.31 ± 1.09 and 2.01 ± 4.33 mm, respectively. These results indicate that super-resolution methods have great potential in medical landmark detection tasks. This paper provides two effective heatmap-based landmark detection networks; the code is released at https://github.com/Runshi-Zhang/SRLD-Net.
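    To make the heatmap-plus-super-resolution idea concrete, here is a minimal PyTorch sketch of a generic sub-pixel (PixelShuffle) heatmap head with argmax decoding. The layer sizes, the 4x upscale factor and the helper names are illustrative assumptions; this is not the published SRLD-Net or SR-UNet design.

```python
# Generic super-resolution heatmap head: low-resolution backbone features
# are expanded in channels, then PixelShuffle trades channels for spatial
# resolution, yielding one full-resolution heatmap per landmark.
import torch
import torch.nn as nn

class SRHeatmapHead(nn.Module):
    def __init__(self, in_ch: int, n_landmarks: int, upscale: int = 4):
        super().__init__()
        # Expand channels so PixelShuffle can trade them for resolution.
        self.expand = nn.Conv2d(in_ch, n_landmarks * upscale**2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(upscale)  # (C*r^2, H, W) -> (C, H*r, W*r)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.expand(feats))  # (B, n_landmarks, H*r, W*r)

def heatmaps_to_coords(heatmaps: torch.Tensor) -> torch.Tensor:
    """Decode (B, L, H, W) heatmaps into (B, L, 2) integer (x, y) peaks."""
    b, l, h, w = heatmaps.shape
    idx = heatmaps.flatten(2).argmax(dim=2)         # (B, L)
    return torch.stack((idx % w, idx // w), dim=2)  # x = column, y = row

feats = torch.randn(1, 64, 32, 32)        # low-resolution backbone features
head = SRHeatmapHead(in_ch=64, n_landmarks=18)
coords = heatmaps_to_coords(head(feats))  # (1, 18, 2) at 128x128 resolution
```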

  • Article type: Journal Article
    OBJECTIVE: Technological advances in surgical robots compatible with magnetic resonance imaging (MRI) have created an indispensable demand for real-time deformable image registration (DIR) of pre- and intraoperative MRI, but relevant methods are lacking. Challenges arise from dimensionality mismatch, resolution discrepancy, non-rigid deformation and the requirement for real-time registration.
    METHODS: In this paper, we propose a real-time DIR framework called MatchMorph, specifically designed for registering low-resolution local intraoperative MRI to high-resolution global preoperative MRI. First, a super-resolution network based on global inference is developed to enhance the resolution of the intraoperative MRI to that of the preoperative MRI, resolving the resolution discrepancy. Second, a fast matching algorithm is designed to identify the optimal position of the intraoperative MRI within the corresponding preoperative MRI, addressing the dimensionality mismatch. Further, a cross-attention-based dual-stream DIR network is constructed to estimate the deformation between pre- and intraoperative MRI in real time.
    RESULTS: We conducted comprehensive experiments on the publicly available IXI and OASIS datasets to evaluate the performance of the proposed MatchMorph framework. Compared with the state-of-the-art (SOTA) network TransMorph, the dual-stream DIR network of MatchMorph achieved superior performance, with a 1.306 mm smaller HD and a 0.07 mm smaller ASD score on the IXI dataset. Furthermore, the MatchMorph framework demonstrated an inference time of approximately 280 ms.
    CONCLUSIONS: The qualitative and quantitative registration results obtained from high-resolution global preoperative MRI and simulated low-resolution local intraoperative MRI validated the effectiveness and efficiency of the proposed MatchMorph framework.
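    The localization step that resolves the dimensionality mismatch can be pictured as template matching: find where the small intraoperative volume best fits inside the global preoperative volume. The NumPy sketch below uses brute-force normalized cross-correlation under that assumption; MatchMorph's actual fast-matching algorithm is not reproduced here.

```python
# Brute-force NCC search for the corner offset at which a local volume
# best matches a window of the global volume. Purely illustrative.
import numpy as np

def locate_subvolume(global_vol: np.ndarray, local_vol: np.ndarray):
    """Return the corner offset (z, y, x) maximizing NCC with local_vol."""
    gz, gy, gx = global_vol.shape
    lz, ly, lx = local_vol.shape
    t = (local_vol - local_vol.mean()) / (local_vol.std() + 1e-8)
    best_score, best_pos = -np.inf, (0, 0, 0)
    for z in range(gz - lz + 1):
        for y in range(gy - ly + 1):
            for x in range(gx - lx + 1):
                w = global_vol[z:z+lz, y:y+ly, x:x+lx]
                w = (w - w.mean()) / (w.std() + 1e-8)
                score = float((w * t).mean())  # correlation coefficient
                if score > best_score:
                    best_score, best_pos = score, (z, y, x)
    return best_pos, best_score

rng = np.random.default_rng(0)
pre = rng.standard_normal((32, 32, 32))  # "preoperative" volume
intra = pre[8:24, 4:20, 10:26].copy()    # simulated local "intraoperative" view
print(locate_subvolume(pre, intra))      # -> ((8, 4, 10), ~1.0)
```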

  • Article type: Journal Article
    Feature extraction plays a pivotal role in single image super-resolution. Nonetheless, relying on a single feature extraction method often undermines the full potential of feature representation, hampering the model's overall performance. To tackle this issue, this study introduces the wide-activation feature distillation network (WFDN), which realizes single image super-resolution through dual-path learning. First, a dual-path parallel network structure is employed, using a residual network as the backbone and incorporating global residual connections to enhance feature exploitation and expedite network convergence. Second, a feature distillation block is adopted, characterized by fast training and a low parameter count. Simultaneously, a wide-activation mechanism is integrated to further enhance the representational capacity of high-frequency features. Lastly, a gated fusion mechanism is introduced to weight the fusion of the feature information extracted by the two branches; this mechanism enhances reconstruction performance while mitigating information redundancy. Extensive experiments demonstrate that the proposed algorithm achieves stable and superior results compared with state-of-the-art methods, as evidenced by quantitative evaluation metrics on four benchmark datasets. Furthermore, our WFDN excels at reconstructing images with richer detailed textures, more realistic lines and clearer structures, affirming its superiority and robustness.
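    Two of the named ingredients lend themselves to short sketches: a wide-activation residual block (expanding channels before the activation, in the spirit of WDSR-style SR networks) and a gated fusion of two branches. The PyTorch code below is a minimal illustration with assumed channel sizes and a 4x expansion ratio, not the published WFDN blocks.

```python
# Wide-activation residual block plus a gate that weights the fusion of
# two feature branches per pixel and per channel.
import torch
import torch.nn as nn

class WideActivationBlock(nn.Module):
    def __init__(self, ch: int, expansion: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch * expansion, 3, padding=1),  # widen before activation
            nn.ReLU(inplace=True),
            nn.Conv2d(ch * expansion, ch, 3, padding=1),  # project back
        )

    def forward(self, x):
        return x + self.body(x)  # local residual connection

class GatedFusion(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())

    def forward(self, a, b):
        g = self.gate(torch.cat([a, b], dim=1))  # learned fusion weight in (0, 1)
        return g * a + (1 - g) * b

a, b = torch.randn(1, 64, 48, 48), torch.randn(1, 64, 48, 48)
fused = GatedFusion(64)(WideActivationBlock(64)(a), b)  # (1, 64, 48, 48)
```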

  • Article type: Journal Article
    OBJECTIVE: To assess the feasibility and efficacy of a deep learning-based three-dimensional (3D) super-resolution diffusion-weighted imaging (DWI) radiomics model in predicting the prognosis of high-intensity focused ultrasound (HIFU) ablation of uterine fibroids.
    METHODS: This retrospective study included 360 patients with uterine fibroids who received HIFU treatment at Center A (training set: N = 240; internal testing set: N = 60) and Center B (external testing set: N = 60), classified as having a favorable or unfavorable prognosis based on the postoperative non-perfusion volume ratio. A deep transfer learning approach was used to construct super-resolution DWI (SR-DWI) from conventional high-resolution DWI (HR-DWI), and 1198 radiomics features were extracted from manually segmented regions of interest in both image types. Following data preprocessing and feature selection, radiomics models were constructed for HR-DWI and SR-DWI using Support Vector Machine (SVM), Random Forest (RF) and Light Gradient Boosting Machine (LightGBM) algorithms, with performance evaluated using the area under the curve (AUC) and decision curves.
    RESULTS: All DWI radiomics models demonstrated a higher AUC in predicting the prognosis of HIFU-ablated uterine fibroids than expert radiologists (AUC: 0.706, 95% CI: 0.647-0.748). Across the machine learning algorithms, the HR-DWI model achieved AUC values of 0.805 (95% CI: 0.679-0.931) with SVM, 0.797 (95% CI: 0.672-0.921) with RF, and 0.770 (95% CI: 0.631-0.908) with LightGBM. Meanwhile, the SR-DWI model outperformed the HR-DWI model (P < 0.05) across all algorithms, with AUC values of 0.868 (95% CI: 0.775-0.960) with SVM, 0.824 (95% CI: 0.715-0.934) with RF, and 0.821 (95% CI: 0.709-0.933) with LightGBM. Decision curve analysis further confirmed the good clinical value of the models.
    CONCLUSIONS: The deep learning-based 3D SR-DWI radiomics model demonstrated favorable feasibility and effectiveness in predicting the prognosis of HIFU-ablated uterine fibroids, outperforming both the HR-DWI model and assessment by expert radiologists.
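    The model-comparison step reduces to a few lines of scikit-learn and LightGBM: fit the three classifiers on a radiomics feature matrix and compare test-set AUCs. The sketch below uses synthetic features in place of the study's data; only the evaluation pattern is illustrated.

```python
# Fit SVM, random forest and LightGBM on a feature matrix and report AUCs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from lightgbm import LGBMClassifier

# Synthetic stand-in for a radiomics feature matrix (patients x features).
X, y = make_classification(n_samples=300, n_features=100, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "LightGBM": LGBMClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```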

  • Article type: Journal Article
    For convenient transmission, omnidirectional images (ODIs) usually follow the equirectangular projection (ERP) format and have low resolution. To provide a better immersive experience, omnidirectional image super-resolution (ODISR) is essential. However, ERP ODIs suffer from serious geometric distortion and pixel stretching across latitudes, generating massive redundant information at high latitudes. This characteristic poses a huge challenge for traditional SR methods, which can obtain only suboptimal ODISR performance. To address this issue, we propose a novel position attention network (PAN) for ODISR. Specifically, a two-branch structure is introduced, in which the basic enhancement branch (BE) performs coarse deep feature enhancement on the extracted shallow features. Meanwhile, the position attention enhancement branch (PAE) builds a positional attention mechanism that dynamically adjusts the contribution of features at different latitudes of the ERP representation according to their positions and stretching degrees; this enhances the differentiated information, suppresses redundant information, and modulates the deep features according to the spatial distortion. Subsequently, the features of the two branches are fused to achieve further refinement and to adapt to the distortion characteristics of ODIs. After that, we exploit a long-term memory module (LM) that promotes information interaction and fusion between the branches to enhance the perception of distortion, aggregating prior hierarchical features to keep a long-term memory and boost ODISR performance. Extensive results demonstrate the state-of-the-art performance and high efficiency of our PAN for ODISR.
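    The stretching that motivates the positional attention has a simple closed form: an ERP row at latitude phi is oversampled by roughly 1/cos(phi). The sketch below applies the corresponding cos(phi) prior as a fixed per-row weight on feature maps; PAN's learned position attention is more elaborate, so this only illustrates the underlying prior.

```python
# Fixed latitude-dependent weighting for ERP feature maps: rows near the
# poles (heavily stretched, redundant) are down-weighted, the equator is
# kept at full weight.
import torch

def erp_latitude_weights(height: int) -> torch.Tensor:
    """cos(latitude) per image row, for rows spanning +90 to -90 degrees."""
    phi = (torch.arange(height) + 0.5) / height * torch.pi - torch.pi / 2
    return torch.cos(phi)  # ~0 at the poles, 1 at the equator

feats = torch.randn(1, 64, 128, 256)             # (B, C, H, W) ERP features
w = erp_latitude_weights(128).view(1, 1, -1, 1)  # broadcast over B, C, W
modulated = feats * w                            # de-emphasize polar rows
```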

  • Article type: Journal Article
    Recently, multi-resolution pyramid-based techniques have emerged as a prevailing research approach for image super-resolution. However, these methods typically rely on a single mode of information transmission between levels. In our approach, a wavelet pyramid recursive neural network (WPRNN) constrained by wavelet energy entropy (WEE) is proposed. This network transmits the previous level's wavelet coefficients together with additional shallow coefficient features to capture local details. Besides, the parameters for processing the low- and high-frequency wavelet coefficients are shared within each pyramid level and across pyramid levels. A multi-resolution wavelet pyramid fusion (WPF) module is devised to facilitate information transfer across network pyramid levels. Additionally, a wavelet energy entropy loss is proposed to constrain the reconstruction of wavelet coefficients from the perspective of signal energy distribution. Finally, our method achieves competitive reconstruction performance with minimal parameters, as shown by an extensive series of experiments on publicly available datasets, demonstrating its practical utility.
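    Assuming the wavelet energy entropy is the Shannon entropy of the energy distribution over wavelet subbands (the paper's exact WEE formulation may differ), it can be computed with PyWavelets as sketched below; a loss could then penalize the WEE gap between reconstructed and ground-truth images.

```python
# Shannon entropy of the per-subband energy distribution of a 2D image.
import numpy as np
import pywt

def wavelet_energy_entropy(img: np.ndarray, wavelet="haar", level=2) -> float:
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Energy of the approximation band plus each (H, V, D) detail band.
    energies = [np.sum(coeffs[0] ** 2)]
    for details in coeffs[1:]:
        energies.extend(np.sum(d ** 2) for d in details)
    p = np.asarray(energies) / (np.sum(energies) + 1e-12)
    return float(-np.sum(p * np.log(p + 1e-12)))

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))
print(wavelet_energy_entropy(x))  # scalar; a loss could use |WEE(sr) - WEE(hr)|
```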

  • Article type: Journal Article
    Label-free super-resolution (LFSR) imaging relies on light-scattering processes in nanoscale objects, without the fluorescent (FL) staining required in super-resolved FL microscopy. The objectives of this Roadmap are to present a comprehensive vision of the developments and the state of the art in this field, and to discuss the resolution boundaries and the hurdles that need to be overcome to break the classical diffraction limit of LFSR imaging. The scope of this Roadmap spans from advanced interference detection techniques, in which diffraction-limited lateral resolution is combined with unsurpassed axial and temporal resolution, to techniques with true lateral super-resolution capability, which are based on understanding resolution as an information science problem, on using novel structured illumination, near-field scanning, and nonlinear optics approaches, and on designing superlenses based on nanoplasmonics, metamaterials, transformation optics, and microsphere-assisted approaches. To this end, this Roadmap brings under the same umbrella researchers from the physics and biomedical optics communities, in which such studies have often developed separately. The ultimate intent of this paper is to create a vision for the current and future development of LFSR imaging based on its physical mechanisms, and to serve as an opening for the series of articles in this field.

  • Article type: Journal Article
    BACKGROUND: In dental clinical practice, cone-beam computed tomography (CBCT) is commonly used to assist practitioners in recognizing the complex morphology of root canal systems; however, because of its resolution limitations, certain small anatomical structures still cannot be accurately recognized on CBCT. The purpose of this study was to perform image super-resolution (SR) processing on CBCT images of extracted human teeth with the help of a deep learning model, and to compare the differences among CBCT, super-resolution computed tomography (SRCT), and micro-computed tomography (Micro-CT) images through three-dimensional reconstruction.
    METHODS: A deep learning model (BasicVSR++) was selected and modified. The dataset consisted of 171 extracted teeth that met the inclusion criteria, with 40 maxillary first molars as the training set, and 40 maxillary first molars plus 91 teeth from other tooth positions as the external test set. The corresponding CBCT, SRCT, and Micro-CT images of each tooth in the test sets were reconstructed using Mimics Research 17.0, and the root canal recognition rates in the 3 groups were recorded. The following parameters were measured: volume of hard tissue (V1), volume of the pulp chamber and root canal system (V2), length of visible root canals below the orifice (VL-X, where X represents the specific root canal), and intersection angle between the coronal axis of the canal and the long axis of the tooth (∠X, where X represents the specific root canal). Data from CBCT and SRCT images were statistically compared using paired-sample t-tests and Wilcoxon tests, with measurements from Micro-CT images as the gold standard.
    RESULTS: Images from all tested teeth were successfully processed with the SR program. In 4-canal maxillary first molars, the MB2 identification rate was 72% (18/25) in the CBCT group, 92% (23/25) in the SRCT group, and 100% (25/25) in the Micro-CT group. The difference in hard tissue volume between SRCT and Micro-CT was significantly smaller than that between CBCT and Micro-CT in all tested teeth except the 4-canal mandibular first molar (P < .05). Similar results were obtained for the volume of the pulp chamber and root canal system in all tested teeth (P < .05). As for the length of visible root canals below the orifice, the difference between SRCT and Micro-CT was significantly smaller than that between CBCT and Micro-CT (P < .05) in most root canals.
    CONCLUSIONS: The deep learning model developed in this study helps to optimize the depiction of root canal morphology of extracted teeth in CBCT, and it may aid the identification of MB2 in the maxillary first molar.
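    The paired comparisons in METHODS follow a standard SciPy pattern: compute per-tooth absolute errors of CBCT and SRCT measurements against the Micro-CT gold standard, then run a paired-sample t-test and a Wilcoxon signed-rank test. The sketch below uses synthetic numbers purely to show the pattern.

```python
# Paired statistical comparison of CBCT vs. SRCT measurement errors
# relative to a Micro-CT gold standard. All numbers are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
micro = rng.uniform(400, 600, size=40)      # gold-standard volumes (mm^3)
cbct = micro + rng.normal(25, 10, size=40)  # larger systematic error
srct = micro + rng.normal(8, 10, size=40)   # smaller systematic error

err_cbct = np.abs(cbct - micro)             # |CBCT - Micro-CT| per tooth
err_srct = np.abs(srct - micro)             # |SRCT - Micro-CT| per tooth

t_stat, p_t = stats.ttest_rel(err_cbct, err_srct)  # paired-sample t-test
w_stat, p_w = stats.wilcoxon(err_cbct, err_srct)   # Wilcoxon signed-rank test
print(f"paired t: p = {p_t:.4f}; Wilcoxon: p = {p_w:.4f}")
```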

  • Article type: Journal Article
    The emergence of convolutional neural networks (CNNs) and transformers has recently facilitated significant advances in image super-resolution (SR). However, these networks commonly adopt complex structures, with huge parameter counts and high computational costs, to boost reconstruction performance. In addition, they do not consider the structural prior well, which is not conducive to high-quality image reconstruction. In this work, we devise a lightweight interactive feature inference network (IFIN) that complements the strengths of CNNs and transformers for effective image SR reconstruction. Specifically, the interactive feature aggregation module (IFAM), implemented with a structure-aware attention block (SAAB), a Swin Transformer block (SWTB), and an enhanced spatial adaptive block (ESAB), serves as the network backbone and progressively extracts more dedicated features to facilitate the reconstruction of high-frequency details in the image. SAAB adaptively recalibrates local salient structural information, and SWTB effectively captures rich global information. Further, ESAB synergetically complements local and global priors to ensure the consistent fusion of diverse features, achieving high-quality image reconstruction. Comprehensive experiments reveal that our proposed networks attain state-of-the-art reconstruction accuracy on benchmark datasets while maintaining low computational demands. Our code and results are available at https://github.com/wwaannggllii/IFIN.
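    The local/global pairing described here can be sketched as a hybrid block: a convolutional branch handles local structure while a self-attention branch captures global context, with residual fusion. The PyTorch code below is a generic illustration with assumed sizes, not the published SAAB/SWTB/ESAB design (which uses windowed Swin attention).

```python
# Generic hybrid CNN + self-attention block: local conv branch plus global
# multi-head attention over flattened spatial tokens, fused residually.
import torch
import torch.nn as nn

class HybridLocalGlobalBlock(nn.Module):
    def __init__(self, ch: int, heads: int = 4):
        super().__init__()
        self.local = nn.Sequential(             # local structural branch
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.norm = nn.LayerNorm(ch)
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))  # (B, H*W, C)
        glob, _ = self.attn(tokens, tokens, tokens)       # global branch
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return x + self.local(x) + glob                   # residual fusion

x = torch.randn(1, 32, 24, 24)
y = HybridLocalGlobalBlock(32)(x)  # (1, 32, 24, 24)
```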