medical image

  • Article type: Journal Article
    BACKGROUND: Chronic graft-versus-host disease (cGVHD) is a significant cause of long-term morbidity and mortality in patients after allogeneic hematopoietic cell transplantation. Skin is the most commonly affected organ, and visual assessment of cGVHD can have low reliability. Crowdsourcing data from nonexpert participants has been used for numerous medical applications, including image labeling and segmentation tasks.
    OBJECTIVE: This study aimed to assess the ability of crowds of nonexpert raters (individuals without any prior training in identifying or marking cGVHD) to demarcate photos of cGVHD-affected skin. We also studied the effect of training and feedback on crowd performance.
    METHODS: Using a Canfield Vectra H1 3D camera, 360 photographs of the skin of 36 patients with cGVHD were taken. Ground truth demarcations were provided in 3D by a trained expert and reviewed by a board-certified dermatologist. In total, 3000 2D images (projections from various angles) were created for crowd demarcation through the DiagnosUs mobile app. Raters were split into high and low feedback groups. The performances of 4 different crowds of nonexperts were analyzed, including 17 raters per image for the low and high feedback groups, 32-35 raters per image for the low feedback group, and the top 5 performers for each image from the low feedback group.
    RESULTS: Across 8 demarcation competitions, 130 raters were recruited to the high feedback group and 161 to the low feedback group. This resulted in a total of 54,887 individual demarcations from the high feedback group and 78,967 from the low feedback group. The nonexpert crowds achieved good overall performance for segmenting cGVHD-affected skin with minimal training, achieving a median surface area error of less than 12% of skin pixels for all crowds in both the high and low feedback groups. The low feedback crowds performed slightly poorer than the high feedback crowd, even when a larger crowd was used. Tracking the 5 most reliable raters from the low feedback group for each image recovered a performance similar to that of the high feedback crowd. Higher variability between raters for a given image was not found to correlate with lower performance of the crowd consensus demarcation and cannot therefore be used as a measure of reliability. No significant learning was observed during the task as more photos and feedback were seen.
    CONCLUSIONS: Crowds of nonexpert raters can demarcate cGVHD images with good overall performance. Tracking the top 5 most reliable raters provided optimal results, obtaining the best performance with the lowest number of expert demarcations required for adequate training. However, the agreement amongst individual nonexperts does not help predict whether the crowd has provided an accurate result. Future work should explore the performance of crowdsourcing in standard clinical photos and further methods to estimate the reliability of consensus demarcations.
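    The crowd consensus demarcation and surface-area error metric described above can be sketched as follows. This is a minimal illustration: pixel-wise majority vote is an assumed fusion rule (the paper's exact consensus method may differ), and the masks are toy data.

    ```python
    import numpy as np

    def consensus_demarcation(masks, threshold=0.5):
        """Fuse binary demarcations from many raters into one crowd mask
        by pixel-wise majority vote (an assumed fusion rule)."""
        masks = np.asarray(masks, dtype=float)   # shape: (raters, H, W)
        return masks.mean(axis=0) >= threshold   # pixel kept if >=50% agree

    def surface_area_error(pred, truth):
        """Absolute difference in marked area, as a fraction of all pixels."""
        return abs(pred.sum() - truth.sum()) / truth.size

    # toy example: 3 raters demarcating a 4x4 image
    r1 = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]])
    r2 = np.array([[1,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]])
    r3 = np.array([[1,1,1,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]])
    crowd = consensus_demarcation([r1, r2, r3])
    truth = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]])
    print(surface_area_error(crowd, truth))  # 0.0: consensus matches truth here
    ```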
  • Article type: Journal Article
    BACKGROUND: Total hip replacement (THR) is considered the gold standard of treatment for refractory degenerative hip disorders. Identifying patients who should receive THR in the short term is important. Some conservative treatments, such as intra-articular injection administered a few months before THR, may result in higher odds of arthroplasty infection. Delayed THR after functional deterioration may result in poorer outcomes and longer waiting times for those who have been flagged as needing THR. Deep learning (DL) in medical imaging applications has recently obtained significant breakthroughs. However, the use of DL in practical wayfinding, such as short-term THR prediction, is still lacking.
    OBJECTIVE: In this study, we propose a DL-based assistant system for patients with pelvic radiographs to identify the need for THR within 3 months.
    METHODS: We developed a convolutional neural network-based DL algorithm to analyze pelvic radiographs, predict the hip region of interest (ROI), and determine whether or not THR is required. The data set was collected from August 2008 to December 2017. The images included 3013 surgical hip ROIs that had undergone THR and 1630 nonsurgical hip ROIs. The images were split, using split-sample validation, into training (n=3903, 80%), validation (n=476, 10%), and testing (n=475, 10%) sets to evaluate the algorithm performance.
    RESULTS: The algorithm, called SurgHipNet, yielded an area under the receiver operating characteristic curve of 0.994 (95% CI 0.990-0.998). The accuracy, sensitivity, specificity, and F1-score of the model were 0.977, 0.920, 0.932, and 0.944, respectively.
    CONCLUSIONS: The proposed approach has demonstrated that SurgHipNet shows the ability and potential to provide efficient support in clinical decision-making; it can assist physicians in promptly determining the optimal timing for THR.
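    The accuracy, sensitivity, specificity, and F1-score reported for SurgHipNet follow from standard confusion-matrix formulas; the sketch below shows how they are computed. The counts used are hypothetical, for illustration only — they are not the study's data.

    ```python
    def classification_metrics(tp, fp, tn, fn):
        """Standard metrics from confusion-matrix counts."""
        accuracy    = (tp + tn) / (tp + fp + tn + fn)
        sensitivity = tp / (tp + fn)      # recall on surgical (THR) hips
        specificity = tn / (tn + fp)      # recall on nonsurgical hips
        precision   = tp / (tp + fp)
        f1 = 2 * precision * sensitivity / (precision + sensitivity)
        return accuracy, sensitivity, specificity, f1

    # hypothetical counts, chosen only to exercise the formulas
    acc, sens, spec, f1 = classification_metrics(tp=92, fp=6, tn=88, fn=8)
    print(sens)  # 0.92
    ```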
  • Article type: Journal Article
    BACKGROUND: Dermoscopy is commonly used for the evaluation of pigmented lesions, but agreement between experts for identification of dermoscopic structures is known to be relatively poor. Expert labeling of medical data is a bottleneck in the development of machine learning (ML) tools, and crowdsourcing has been demonstrated as a cost- and time-efficient method for the annotation of medical images.
    OBJECTIVE: The aim of this study is to demonstrate that crowdsourcing can be used to label basic dermoscopic structures from images of pigmented lesions with similar reliability to a group of experts.
    METHODS: First, we obtained labels of 248 images of melanocytic lesions with 31 dermoscopic "subfeatures" labeled by 20 dermoscopy experts. These were then collapsed into 6 dermoscopic "superfeatures" based on structural similarity, due to low interrater reliability (IRR): dots, globules, lines, network structures, regression structures, and vessels. These images were then used as the gold standard for the crowd study. The commercial platform DiagnosUs was used to obtain annotations from a nonexpert crowd for the presence or absence of the 6 superfeatures in each of the 248 images. We replicated this methodology with a group of 7 dermatologists to allow direct comparison with the nonexpert crowd. The Cohen κ value was used to measure agreement across raters.
    RESULTS: In total, we obtained 139,731 ratings of the 6 dermoscopic superfeatures from the crowd. There was relatively lower agreement for the identification of dots and globules (the median κ values were 0.526 and 0.395, respectively), whereas network structures and vessels showed the highest agreement (the median κ values were 0.581 and 0.798, respectively). This pattern was also seen among the expert raters, who had median κ values of 0.483 and 0.517 for dots and globules, respectively, and 0.758 and 0.790 for network structures and vessels. The median κ values between nonexperts and thresholded average-expert readers were 0.709 for dots, 0.719 for globules, 0.714 for lines, 0.838 for network structures, 0.818 for regression structures, and 0.728 for vessels.
    CONCLUSIONS: This study confirmed that IRR for different dermoscopic features varied among a group of experts; a similar pattern was observed in a nonexpert crowd. There was good or excellent agreement for each of the 6 superfeatures between the crowd and the experts, highlighting the similar reliability of the crowd for labeling dermoscopic images. This confirms the feasibility and dependability of using crowdsourcing as a scalable solution to annotate large sets of dermoscopic images, with several potential clinical and educational applications, including the development of novel, explainable ML tools.
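    The Cohen κ statistic used throughout the RESULTS can be computed for binary present/absent labels as below. The ratings are toy data for illustration; the study's κ values were computed over its own annotations.

    ```python
    import numpy as np

    def cohen_kappa(a, b):
        """Cohen's kappa for two raters giving binary present/absent labels."""
        a, b = np.asarray(a), np.asarray(b)
        po = np.mean(a == b)                      # observed agreement
        # chance agreement from each rater's marginal label frequencies
        pe = (np.mean(a == 1) * np.mean(b == 1) +
              np.mean(a == 0) * np.mean(b == 0))
        return (po - pe) / (1 - pe)

    # two raters labelling a feature on 10 images (toy data)
    rater1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
    rater2 = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
    print(round(cohen_kappa(rater1, rater2), 3))  # 0.6
    ```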
  • Article type: Journal Article
    Interpretation of medical images with a computer-aided diagnosis (CAD) system is arduous because of the complex structure of cancerous lesions in different imaging modalities, the high degree of resemblance between classes, the presence of dissimilar characteristics within classes, the scarcity of medical data, and the presence of artifacts and noise. In this study, these challenges are addressed by developing a shallow convolutional neural network (CNN) model with an optimal configuration, found by performing an ablation study that alters the layer structure and hyperparameters and by utilizing a suitable augmentation technique. Eight medical datasets with different modalities are investigated, where the proposed model, named MNet-10, yields optimal performance across all datasets at low computational complexity. The impact of photometric and geometric augmentation techniques on the different datasets is also evaluated. We selected the mammogram dataset for the ablation study, as it is one of the most challenging imaging modalities. Before generating the model, the dataset is augmented using the two approaches. A base CNN model is constructed first and applied to both the augmented and non-augmented mammogram datasets, where the highest accuracy is obtained with the photometric dataset. Therefore, the architecture and hyperparameters of the model are determined by performing an ablation study on the base model using the mammogram photometric dataset. Afterward, the robustness of the network and the impact of the different augmentation techniques are assessed by training the model with the remaining seven datasets.
We obtain a test accuracy of 97.34% on the mammogram, 98.43% on the skin cancer, 99.54% on the brain tumor magnetic resonance imaging (MRI), 97.29% on the COVID chest X-ray, 96.31% on the tympanic membrane, 99.82% on the chest computed tomography (CT) scan, and 98.75% on the breast cancer ultrasound datasets by photometric augmentation and 96.76% on the breast cancer microscopic biopsy dataset by geometric augmentation. Moreover, some elastic deformation augmentation methods are explored with the proposed model using all the datasets to evaluate their effectiveness. Finally, VGG16, InceptionV3, and ResNet50 were trained on the best-performing augmented datasets, and their performance consistency was compared with that of the MNet-10 model. The findings may aid future researchers in medical data analysis involving ablation studies and augmentation techniques.
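    The distinction between photometric and geometric augmentation that drives the comparison above can be illustrated with a minimal NumPy sketch. The specific transforms chosen here (gamma/brightness shift versus flip/rotation) are assumptions for illustration, not the paper's exact pipeline.

    ```python
    import numpy as np

    def photometric_augment(img, gamma=0.8, delta=10):
        """Photometric changes alter intensities only (gamma + brightness);
        pixel positions are untouched."""
        out = 255.0 * (img / 255.0) ** gamma + delta
        return np.clip(out, 0, 255)

    def geometric_augment(img):
        """Geometric changes move pixels (horizontal flip + 90-degree turn);
        intensities are untouched."""
        return np.rot90(np.fliplr(img))

    img = np.arange(16, dtype=float).reshape(4, 4)  # stand-in for a mammogram
    photo = photometric_augment(img)
    geo = geometric_augment(img)
    assert photo.shape == img.shape                    # geometry preserved
    assert sorted(geo.ravel()) == sorted(img.ravel())  # intensities preserved
    ```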
  • Article type: Journal Article
    BACKGROUND: Safe and accurate execution of surgeries to date relies mainly on preoperative plans generated from preoperative imaging. Frequent intraoperative interaction with such patient images is needed during the intervention, which is currently a cumbersome process given that such images are generally displayed on peripheral two-dimensional (2D) monitors and controlled through interface devices outside the sterile field. This study proposes a new medical image control concept based on a brain-computer interface (BCI) that allows for hands-free and direct image manipulation without relying on gesture recognition methods or voice commands.
    METHODS: A software environment was designed for displaying three-dimensional (3D) patient images on external monitors, with the functionality of hands-free image manipulation based on the user's brain signals detected by the BCI device (i.e., visually evoked signals). In a user study, ten orthopedic surgeons completed a series of standardized image manipulation tasks to navigate and locate predefined 3D points in a computed tomography (CT) image using the developed interface. Accuracy was assessed as the mean error between the predefined locations (ground truth) and the locations navigated to by the surgeons. All surgeons rated the performance and potential intraoperative usability in a standardized survey using a five-point Likert scale (1 = strongly disagree to 5 = strongly agree).
    RESULTS: When using the developed interface, the mean image control error was 15.51 mm (SD: 9.57). User acceptance was rated with a Likert score of 4.07 (SD: 0.96), while the overall impression of the interface was rated 3.77 (SD: 1.02) by the users. We observed a significant correlation between the users' overall impression and the calibration score they achieved.
    CONCLUSIONS: The use of the developed BCI, that allowed for a purely brain-guided medical image control, yielded promising results, and showed its potential for future intraoperative applications. The major limitation to overcome was noted as the interaction delay.
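    The accuracy measure described in METHODS (mean error between predefined and navigated 3D points) can be sketched as a mean Euclidean distance. The coordinates below are toy values in millimeters, not study data.

    ```python
    import numpy as np

    def mean_control_error(targets, navigated):
        """Mean Euclidean distance between predefined 3D points (ground
        truth) and the points the user navigated to."""
        targets = np.asarray(targets, dtype=float)
        navigated = np.asarray(navigated, dtype=float)
        return np.linalg.norm(targets - navigated, axis=1).mean()

    # toy coordinates in mm; real points would come from the CT image space
    targets   = [[0, 0, 0], [10, 0, 0]]
    navigated = [[3, 4, 0], [10, 0, 12]]
    print(mean_control_error(targets, navigated))  # (5 + 12) / 2 = 8.5
    ```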
  • Article type: Journal Article
    Idiopathic pulmonary fibrosis (IPF) is a fatal interstitial lung disease characterized by an unpredictable decline in lung function. Predicting IPF progression from early changes in lung function tests is known to be challenging due to acute exacerbations. Although progression is unpredictable, the neighboring regions of fibrotic reticulation increase as IPF progresses. Using this clinical information, quantitative characteristics of high-resolution computed tomography (HRCT), and a statistical learning paradigm, we aimed to build a model that predicts IPF progression.
    A paired set of 193 anonymized HRCT images from IPF subjects at 6-12 month intervals was collected retrospectively. The study was conducted in two parts: (1) Part A collects the ground truth in small regions of interest (ROIs), labeled "expected to progress" or "expected to be stable" at baseline HRCT, and develops a statistical learning model to classify voxels in the ROIs. (2) Part B uses the voxel-level classifier from Part A to produce a whole-lung-level score, the baseline single-scan total probability (STP).
    Using annotated ROIs from 71 subjects' HRCT scans in Part A, we applied Quantum Particle Swarm Optimization-Random Forest (QPSO-RF) to build the classifier. Then, 122 subjects' HRCT scans were used to test the prediction. Using Spearman rank correlations and survival analyses, we ascertained STP associations with 6-12 month changes in quantitative lung fibrosis and forced vital capacity.
    This study can serve as a reference for collecting ground truth, and developing statistical learning techniques to predict progression in medical imaging.
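    The Spearman rank correlation used to relate STP to longitudinal changes is the Pearson correlation of the ranks; a simplified sketch (without tie correction) is shown below. The STP scores and fibrosis changes are hypothetical values for illustration.

    ```python
    import numpy as np

    def spearman_rho(x, y):
        """Spearman rank correlation: Pearson correlation of the ranks
        (no tie correction, for illustration)."""
        rx = np.argsort(np.argsort(x)).astype(float)
        ry = np.argsort(np.argsort(y)).astype(float)
        rx -= rx.mean()
        ry -= ry.mean()
        return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

    # hypothetical STP scores vs 6-12 month change in a fibrosis score
    stp    = [0.10, 0.40, 0.25, 0.80, 0.55]
    change = [1.0,  3.0,  2.0,  9.0,  4.0]
    print(spearman_rho(stp, change))  # 1.0: perfectly monotone toy data
    ```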
  • Article type: Journal Article
    Bone metastasis is among the most frequent complications in patients suffering from metastatic cancer, such as breast or prostate cancer. A popular diagnostic method is bone scintigraphy, in which the whole body of the patient is scanned. However, hot spots present in the scanned image can be misleading, making the accurate and reliable diagnosis of bone metastasis a challenge. Artificial intelligence can play a crucial role as a decision support tool to alleviate the burden of generating manual annotations on images and therefore prevent oversights by medical experts. So far, several state-of-the-art convolutional neural networks (CNNs) have been employed to address bone metastasis diagnosis as a binary or multiclass classification problem, achieving adequate accuracy (higher than 90%). However, due to their increased complexity (number of layers and free parameters), these networks are severely dependent on the number of available training images, which are typically limited within the medical domain. Our study was dedicated to the use of a new deep learning architecture that overcomes the computational burden by using a convolutional neural network with a significantly lower number of floating-point operations (FLOPs) and free parameters. The proposed lightweight look-behind fully convolutional neural network was implemented and compared with several well-known powerful CNNs, such as ResNet50, VGG16, InceptionV3, Xception, and MobileNet, on an imaging dataset of moderate size (778 images from male subjects with prostate cancer). The results demonstrate the superiority of the proposed methodology over the current state of the art in identifying bone metastasis. The proposed methodology demonstrates a unique potential to revolutionize image-based diagnostics, enabling new possibilities for enhanced cancer metastasis monitoring and treatment.
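    One common way lightweight CNNs reduce FLOPs and free parameters is the depthwise-separable convolution (used, for example, by MobileNet). The sketch below shows the parameter counting for intuition only; the paper's look-behind architecture is not reproduced here.

    ```python
    def conv_params(cin, cout, k):
        """Weights in a standard k x k convolution (bias ignored)."""
        return cin * cout * k * k

    def separable_conv_params(cin, cout, k):
        """Depthwise k x k conv followed by a 1x1 pointwise conv: the
        factorization that cuts parameters in lightweight networks."""
        return cin * k * k + cin * cout

    standard = conv_params(64, 128, 3)             # 73,728 weights
    separable = separable_conv_params(64, 128, 3)  # 8,768 weights
    print(standard / separable)                    # roughly 8.4x fewer parameters
    ```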