CAD system

  • Article type: Journal Article
    Central Serous Chorioretinopathy (CSCR) is a significant cause of vision impairment worldwide, with Photodynamic Therapy (PDT) emerging as a promising treatment strategy. The capability to precisely segment fluid regions in Optical Coherence Tomography (OCT) scans and predict the response to PDT treatment can substantially improve patient outcomes. This paper introduces a novel deep learning (DL) methodology for automated 3D segmentation of fluid regions in OCT scans, followed by a PDT response analysis for CSCR patients. Our approach utilizes the rich 3D contextual information of OCT scans to train a model that accurately delineates fluid regions. This model not only substantially reduces the time and effort required for segmentation but also offers a standardized technique, fostering further large-scale research studies. Additionally, by incorporating pre- and post-treatment OCT scans, our model is capable of predicting PDT response, hence enabling the formulation of personalized treatment strategies and optimized patient management. To validate our approach, we employed a robust dataset comprising 2,769 OCT scans (124 3D volumes), and the results obtained were highly satisfactory, outperforming the current state-of-the-art methods. This research signifies an important milestone in the integration of DL advancements with practical clinical applications, propelling us a step closer towards improved management of CSCR. Furthermore, the methodologies and systems developed can be adapted and extrapolated to tackle similar challenges in the diagnosis and treatment of other retinal pathologies, favoring more comprehensive and personalized patient care.
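    Segmentation quality in studies of this kind is typically reported volumetrically. As a hedged illustration (the paper's own evaluation code is not given in the abstract), the sketch below computes a 3D Dice similarity coefficient between a predicted and a reference fluid mask with NumPy; the array shapes and the toy random volumes are assumptions.

```python
import numpy as np

def dice_3d(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary 3D volumes.

    pred, truth: boolean or {0,1} arrays of shape (slices, height, width).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: two overlapping synthetic "fluid" masks in a 16-slice volume.
rng = np.random.default_rng(0)
vol_pred = rng.random((16, 64, 64)) > 0.7
vol_true = rng.random((16, 64, 64)) > 0.7
print(f"Dice = {dice_3d(vol_pred, vol_true):.3f}")
```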

  • Article type: Journal Article
    OBJECTIVE: A Computer-Assisted Detection (CAD) system for classifying lung nodules into malignant and benign classes using CT images is proposed.
    METHODS: Two methods that use the fractal dimension (FD) as a measure of lung nodule contour irregularity (box counting and power spectrum) were implemented. The LIDC-IDRI database was used for this study; 100 slices belonging to 100 patients were analyzed with both methods.
    RESULTS: The performance of both methods was similar, with an accuracy higher than 90%. With both methods there was little overlap between the FD ranges of the different malignancy grades, with the power spectrum method performing slightly better. Box counting produced one more false positive than the power spectrum method.
    CONCLUSIONS: Both methods are able to establish a boundary between high and low malignancy degrees. Additional studies will be necessary to further validate these results and enhance the performance of the CAD system.
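    Box counting, one of the two FD estimators named above, can be sketched in a few lines. The following is an illustrative NumPy implementation, not the authors' code: it counts occupied boxes of dyadic sizes over a binary nodule-contour image and fits the log-log slope; the power-of-two padding and the synthetic circle test are assumptions.

```python
import numpy as np

def box_counting_dimension(contour: np.ndarray) -> float:
    """Estimate the fractal dimension of a binary contour image by box counting.

    contour: 2D boolean array, True on the nodule boundary pixels.
    Returns the slope of log(N(s)) vs. log(1/s) over dyadic box sizes s.
    """
    size = max(contour.shape)
    n = 2 ** int(np.ceil(np.log2(size)))          # pad to a power-of-two grid
    padded = np.zeros((n, n), dtype=bool)
    padded[:contour.shape[0], :contour.shape[1]] = contour

    sizes, counts = [], []
    s = n
    while s >= 1:
        # Partition into (s x s) boxes and count boxes containing boundary pixels.
        blocks = padded.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(blocks.sum())
        s //= 2

    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(np.array(counts)), 1)
    return slope

# Toy example: a smooth circular boundary should give a dimension close to 1.
yy, xx = np.mgrid[:256, :256]
disk = (xx - 128) ** 2 + (yy - 128) ** 2 < 80 ** 2
boundary = disk ^ np.roll(disk, 1, axis=0)        # crude one-pixel edge
print(f"Estimated FD: {box_counting_dimension(boundary):.2f}")
```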

  • Article type: Journal Article
    Ultrasonography is widely used to screen thyroid tumors because it is safe, easy to use, and low-cost. However, it is affected by speckle noise and other artifacts, so early detection of thyroid abnormalities is difficult for the radiologist. Various researchers have therefore continuously addressed the limitations of sonography and improved the diagnostic potential of US images of thyroid tissue over the last three decades. Accordingly, the present study extensively reviews the various CAD systems used to classify thyroid tumor US (TTUS) images with respect to datasets, despeckling algorithms, segmentation algorithms, feature extraction and selection, assessment parameters, and classification algorithms. After this exhaustive review, the achievements and challenges are reported and a road map is laid out for new researchers.
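    As a concrete example of the despeckling stage such reviews survey, the sketch below implements the classic Lee filter with NumPy/SciPy. It stands in for the many despeckling algorithms compared in the literature rather than any specific reviewed system; the window size, global noise estimate, and synthetic speckled image are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img: np.ndarray, win: int = 5) -> np.ndarray:
    """Classic Lee speckle filter: adaptive local averaging driven by local variance."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    noise_var = np.mean(var)                      # crude global noise estimate
    weight = var / (var + noise_var + 1e-12)
    return mean + weight * (img - mean)

# Toy example: a synthetic patch with multiplicative speckle-like noise.
rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0.2, 0.8, 128), (128, 1))
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
print("residual std before/after:",
      np.std(speckled - clean).round(3),
      np.std(lee_filter(speckled) - clean).round(3))
```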

  • Article type: Journal Article
    Worldwide, one in eight women will develop breast cancer. Men can also develop it, but less frequently. The condition starts with uncontrolled cell division brought on by a change in the genes that regulate cell division and growth, which leads to the development of a nodule or tumour. These tumours can be either benign, which pose no health risk, or malignant, also known as cancerous, which put patients' lives in jeopardy and have the potential to spread. The most common way to diagnose this problem is via mammograms. This kind of examination enables the detection of abnormalities in breast tissue, such as masses and microcalcifications, which are considered indicators of the presence of disease. This study aims to determine how histogram-based image enhancement methods affect the classification of mammograms into five groups: benign calcifications, benign masses, malignant calcifications, malignant masses, and healthy tissue, as determined by a CAD system for automatic mammography classification using convolutional neural networks. Both Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Histogram Intensity Windowing (HIW) are used. These procedures modify the mammography histogram by improving the contrast between the image's background, fibrous tissue, dense tissue, and diseased tissue, including microcalcifications and masses. To help the neural network learn, the contrast is increased so that the various types of tissue are easier to distinguish, which could raise the proportion of correctly classified images. Using deep convolutional neural networks, a model was developed that classifies the different types of lesions. The model achieved an accuracy of 62% on the mini-MIAS data. The final goal of the project is the creation of an updated algorithm that will be incorporated into the CAD system and will enhance the automatic identification and categorization of microcalcifications and masses. As a result, it would be possible to increase the likelihood of early disease identification, which is important because early detection raises the chance of a cure to almost 100%.
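    A minimal sketch of the two enhancement steps named above (CLAHE and histogram intensity windowing), using OpenCV and NumPy. The clip limit, tile grid, window bounds, and the synthetic patch are assumptions, since the abstract does not state the parameters used.

```python
import numpy as np
import cv2  # opencv-python

def clahe_enhance(img_u8: np.ndarray, clip: float = 2.0, tiles: int = 8) -> np.ndarray:
    """Contrast-Limited Adaptive Histogram Equalization on an 8-bit mammogram."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
    return clahe.apply(img_u8)

def intensity_window(img_u8: np.ndarray, low: int, high: int) -> np.ndarray:
    """Histogram intensity windowing: stretch the [low, high] band to [0, 255]."""
    img = img_u8.astype(np.float32)
    img = np.clip((img - low) / max(high - low, 1) * 255.0, 0, 255)
    return img.astype(np.uint8)

# Toy example with a synthetic 8-bit patch standing in for a mammogram crop.
patch = (np.random.default_rng(0).random((256, 256)) * 120 + 60).astype(np.uint8)
enhanced = clahe_enhance(patch)
windowed = intensity_window(patch, low=60, high=180)
print(enhanced.dtype, enhanced.shape, windowed.min(), windowed.max())
```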

  • Article type: Journal Article
    Brain tumors (BTs) are an uncommon but fatal kind of cancer. Therefore, the development of computer-aided diagnosis (CAD) systems for classifying brain tumors in magnetic resonance imaging (MRI) has been the subject of many research papers so far. However, research in this sector is still at an early stage. The ultimate goal of this research is to develop a lightweight, effective implementation of the U-Net deep network for performing accurate real-time segmentation. Moreover, a simplified deep convolutional neural network (DCNN) architecture for BT classification is presented for automatic feature extraction and classification of the segmented regions of interest (ROIs). Five convolutional layers, rectified linear units, normalization, and max-pooling layers make up the proposed simplified DCNN architecture. The introduced method was verified on the multimodal brain tumor segmentation (BRATS 2015) dataset. Our experiments on BRATS 2015 achieved a Dice similarity coefficient (DSC) score, sensitivity, and classification accuracy of 88.8%, 89.4%, and 88.6%, respectively, for high-grade gliomas. When it comes to segmenting BRATS 2015 BT images, the performance of our proposed CAD framework is on par with existing state-of-the-art methods. However, the accuracy achieved in this study for the classification of BT images improves upon the accuracy reported in prior studies: image classification accuracy for BRATS 2015 BT has been improved from 88% to 88.6%.
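    A hedged PyTorch sketch of a simplified DCNN of the kind described: five convolutional blocks with batch normalization, ReLU, and max pooling, followed by a small classification head. The channel widths, input size, and number of classes are assumptions; the paper's exact architecture is not given in the abstract.

```python
import torch
import torch.nn as nn

class SimplifiedDCNN(nn.Module):
    """Five conv blocks (conv -> batch norm -> ReLU -> max pool) plus a classifier head."""

    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        channels = [in_channels, 16, 32, 64, 128, 256]
        blocks = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            blocks += [
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            ]
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels[-1], num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Toy forward pass on a batch of segmented ROI patches.
model = SimplifiedDCNN(in_channels=1, num_classes=2)
logits = model(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```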

  • Article type: Journal Article
    Recently, convolutional neural networks have greatly outperformed previous systems based on handcrafted features once the size of public databases has increased. However, these algorithms learn feature representations that are difficult to interpret and analyse. On the other hand, experts require automatic systems to explain their decisions according to clinical criteria which, in the field of melanoma diagnosis, are related to the analysis of dermoscopic features found in the lesions. In recent years, the interpretability of deep networks has been explored using methods that obtain the visual features highlighted by neurones or analyse activations to extract more useful information. Following the latter approach, this study proposes a system for melanoma diagnosis that explicitly incorporates dermoscopic feature segmentations into a diagnosis network through a channel modulation scheme. Modulation weights control the influence of the detected visual patterns based on the lesion content. As shown in the experimental section, our design not only improves the system performance on the ISIC 2016 (average AUC of 86.6% vs. 85.8%) and 2017 (average AUC of 94.0% vs. 93.8%) datasets, but also notably enhances the interpretability of the diagnosis, providing useful and intuitive cues to clinicians.
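    A hedged PyTorch sketch of the general channel-modulation idea: per-channel weights derived from dermoscopic-feature segmentation maps scale the diagnosis network's feature maps. The pooling of segmentation evidence, the sigmoid mapping, and the tensor shapes are assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn as nn

class ChannelModulation(nn.Module):
    """Scale diagnosis-network feature channels with weights derived from
    dermoscopic-feature segmentation maps (channel modulation)."""

    def __init__(self, num_seg_maps: int, num_channels: int):
        super().__init__()
        # Map pooled segmentation evidence to one multiplicative weight per channel.
        self.to_weights = nn.Sequential(
            nn.Linear(num_seg_maps, num_channels),
            nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor, seg_maps: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) diagnosis features; seg_maps: (B, S, H', W') segmentations.
        evidence = seg_maps.mean(dim=(2, 3))        # how much of each pattern is present
        weights = self.to_weights(evidence)         # (B, C), values in (0, 1)
        return feats * weights.unsqueeze(-1).unsqueeze(-1)

# Toy example: 4 dermoscopic patterns modulating a 64-channel feature map.
mod = ChannelModulation(num_seg_maps=4, num_channels=64)
out = mod(torch.randn(2, 64, 32, 32), torch.rand(2, 4, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```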

  • Article type: Journal Article
    One of the most promising research areas in the healthcare industry and the scientific community focuses on AI-based applications for real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recently emerging AI-based techniques that allow rapid learning progress and improve medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, certain obstacles remain in investigating the independence among the extracted high-level deep features. This work tackles two challenges that still exist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale input images. To achieve this goal, two different image preprocessing techniques are used in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated by deep learning models. A new hybrid processing technique based on Logistic Regression (LR) and Principal Component Analysis (PCA), called LR-PCA, is presented. This process helps to select the significant principal components (PCs) for further use in classification. The proposed CAD system has been examined using two different public benchmark datasets, INbreast and mini-MIAS. The proposed CAD system achieved the highest performance accuracies of 98.60% and 98.80% on the INbreast and mini-MIAS datasets, respectively. Such a CAD system appears to be useful and reliable for breast cancer diagnosis.
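    A minimal sketch of the three-channel pseudo-coloring step: the original grayscale mammogram in the first channel and two enhanced versions in the others. CLAHE is used for one channel as stated; a gamma correction stands in for the unspecified pixel-wise intensity adjustment, so that choice and the parameters are assumptions.

```python
import numpy as np
import cv2  # opencv-python

def pseudo_color(gray_u8: np.ndarray) -> np.ndarray:
    """Stack the original grayscale mammogram with two enhanced versions
    into a three-channel pseudo-colored image for a CNN backbone."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray_u8)
    # Pixel-wise intensity adjustment: a simple gamma correction (assumed choice).
    gamma = 0.7
    adjusted = (255.0 * (gray_u8 / 255.0) ** gamma).astype(np.uint8)
    return np.dstack([gray_u8, clahe, adjusted])   # (H, W, 3), channel 0 = original

gray = (np.random.default_rng(0).random((224, 224)) * 255).astype(np.uint8)
rgb_like = pseudo_color(gray)
print(rgb_like.shape, rgb_like.dtype)  # (224, 224, 3) uint8
```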

  • Article type: Journal Article
    OBJECTIVE: Many deep learning methods have been developed for pulmonary lesion detection in chest computed tomography (CT) images. However, these methods generally target one particular lesion type, that is, pulmonary nodules. In this work, we intend to develop and evaluate a novel deep learning method for a more challenging task: detecting various benign and malignant mediastinal lesions with wide variations in size, shape, intensity, and location in chest CT images.
    METHODS: Our method for mediastinal lesion detection contains two main stages: (a) size-adaptive lesion candidate detection, followed by (b) false-positive (FP) reduction and benign-malignant classification. For candidate detection, an anchor-free, one-stage detector, namely 3D-CenterNet, is designed to locate suspicious regions (i.e., candidates of various sizes) within the mediastinum. Then, a 3D-SEResNet-based classifier is used to differentiate FPs, benign lesions, and malignant lesions among the candidates.
    RESULTS: We evaluate the proposed method by conducting five-fold cross-validation on a relatively large-scale dataset, which consists of data collected from 1136 patients at a grade A tertiary hospital. The method achieves sensitivities of 84.3% ± 1.9%, 90.2% ± 1.4%, 93.2% ± 0.8%, and 93.9% ± 1.1%, respectively, in finding all benign and malignant lesions at 1/8, 1/4, 1/2, and 1 FPs per scan, and the accuracy of benign-malignant classification reaches up to 78.7% ± 2.5%.
    CONCLUSIONS: The proposed method can effectively detect mediastinal lesions of various sizes, shapes, and locations in chest CT images. It can be integrated into most existing pulmonary lesion detection systems to promote their clinical application. The method can also be readily extended to other similar 3D lesion detection tasks.
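    The sensitivities at 1/8 to 1 FPs per scan quoted in the results are FROC-style operating points. The sketch below shows, under simplifying assumptions (each true-positive candidate counts as one lesion, single threshold sweep), how such numbers can be computed from per-candidate confidences with NumPy; the function name and toy data are illustrative, not the authors' evaluation code.

```python
import numpy as np

def sensitivity_at_fps(scores, is_tp, num_scans, fp_targets=(0.125, 0.25, 0.5, 1.0)):
    """FROC-style operating points: detection sensitivity when the score threshold
    is set so that the average number of false positives per scan hits each target.

    scores: confidence of every candidate; is_tp: True if the candidate hits a lesion;
    total lesions are simplified here to sum(is_tp)."""
    scores, is_tp = np.asarray(scores, float), np.asarray(is_tp, bool)
    order = np.argsort(-scores)
    tp_cum = np.cumsum(is_tp[order])
    fp_cum = np.cumsum(~is_tp[order])
    total_lesions = is_tp.sum()
    out = {}
    for target in fp_targets:
        allowed = target * num_scans
        idx = np.searchsorted(fp_cum, allowed, side="right") - 1  # last index with FPs <= allowed
        out[target] = float(tp_cum[idx]) / total_lesions if idx >= 0 else 0.0
    return out

# Toy example: 200 candidates pooled over 20 scans.
rng = np.random.default_rng(0)
labels = rng.random(200) < 0.2
conf = np.where(labels, rng.normal(0.7, 0.15, 200), rng.normal(0.4, 0.15, 200))
print(sensitivity_at_fps(conf, labels, num_scans=20))
```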

  • Article type: Journal Article
    Breast cancer needs to be detected early to reduce the mortality rate. Ultrasound imaging (US) can significantly enhance the diagnosis of cases with dense breasts. Most existing computer-aided diagnosis (CAD) systems employ a single ultrasound image of the breast tumor to extract features and classify it as benign or malignant. However, the accuracy of such CAD systems is limited by large variations in tumor size and shape, irregular and ambiguous tumor boundaries, the low signal-to-noise ratio of inherently noisy ultrasound images, and the significant similarity between normal and abnormal tissues. To handle these issues, we propose a deep-learning-based radiomics method built on breast US sequences. The proposed approach involves three main components: radiomic feature extraction based on a deep learning network called ConvNeXt, a malignancy score pooling mechanism, and visual interpretation. Specifically, we employ the ConvNeXt network, a deep convolutional neural network (CNN) trained in the vision transformer style. We also propose an efficient pooling mechanism that fuses the malignancy scores of the frames of each breast US sequence based on image-quality statistics. The ablation study and experimental results demonstrate that our method achieves competitive results compared to other CNN-based methods.
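    A hedged sketch of a quality-weighted pooling of per-frame malignancy scores, the role the abstract assigns to its pooling mechanism. The Laplacian-variance sharpness measure and the weighted average are assumed stand-ins for the paper's unspecified image-quality statistics and fusion rule.

```python
import numpy as np

def pool_malignancy_scores(frame_scores: np.ndarray, frame_quality: np.ndarray) -> float:
    """Fuse per-frame malignancy scores of a breast US sequence with a
    quality-weighted average: higher-quality frames count more."""
    frame_scores = np.asarray(frame_scores, float)
    quality = np.asarray(frame_quality, float)
    weights = quality / (quality.sum() + 1e-12)
    return float(np.sum(weights * frame_scores))

def sharpness(frame: np.ndarray) -> float:
    """A simple image-quality statistic: variance of the discrete Laplacian."""
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
           np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
    return float(lap.var())

# Toy example: 10 frames with per-frame scores and synthetic quality values.
rng = np.random.default_rng(0)
frames = rng.random((10, 64, 64))
scores = rng.random(10)                      # e.g. per-frame malignancy probabilities
quality = np.array([sharpness(f) for f in frames])
print(f"sequence-level malignancy score: {pool_malignancy_scores(scores, quality):.3f}")
```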

  • Article type: Journal Article
    Since people all over the world are vulnerable to the COVID-19 virus, automatic detection of the virus is an important concern. This paper aims to detect and classify the coronavirus using machine learning. A computer-aided diagnosis (CAD) system based on CT lung screening is proposed to distinguish and classify COVID-19, utilizing the clinical specimens obtained from corona-infected patients together with machine learning techniques such as decision trees, support vector machines, K-means clustering, and radial basis functions. While some specialists believe that the RT-PCR test is the best option for diagnosing COVID-19 patients, others believe that CT scans of the lungs can be more accurate in diagnosing coronavirus infection, as well as being less expensive than the PCR test. The clinical specimens include serum specimens, respiratory secretions, and whole blood specimens. Overall, 15 factors are measured from these specimens as the result of previous clinical examinations. The proposed CAD system consists of four phases, starting with the collection of CT lung screening scans, followed by a pre-processing stage to enhance the appearance of the ground glass opacity (GGO) nodules, which originally look hazy with faint contrast. A modified K-means algorithm is then used to detect and segment these regions. Finally, the infected areas obtained in the detection phase, at a scale of 50×50, together with the segmented solid false positives that resemble GGOs, are used as inputs and targets for the machine learning classifiers; here, a support vector machine (SVM) and a radial basis function (RBF) classifier have been utilized. Moreover, a GUI application is developed that spares doctors confusion in getting exact results, by taking the 15 input factors obtained from the clinical specimens.
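    A hedged scikit-learn sketch of the two machine-learning stages described: intensity clustering to flag GGO-like regions and an RBF-kernel SVM on 15 clinical factors. Plain K-means stands in for the paper's modified K-means, and the synthetic CT slice, feature matrix, and labels are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Stage 1 (sketch): K-means intensity clustering to flag bright hazy regions (GGO-like).
rng = np.random.default_rng(0)
ct_slice = rng.normal(0.3, 0.05, (128, 128))
ct_slice[40:70, 50:90] += 0.35                       # synthetic ground-glass-like opacity
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ct_slice.reshape(-1, 1))
ggo_cluster = np.argmax(km.cluster_centers_.ravel()) # brightest cluster
ggo_mask = (km.labels_ == ggo_cluster).reshape(ct_slice.shape)

# Stage 2 (sketch): RBF-kernel SVM on the 15 clinical factors per patient.
X = rng.normal(size=(200, 15))                       # stand-in for 15 measured clinical factors
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)
clf = SVC(kernel="rbf", gamma="scale").fit(X[:150], y[:150])
print("GGO-like pixels:", int(ggo_mask.sum()),
      "| held-out accuracy:", round(clf.score(X[150:], y[150:]), 2))
```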
