radiologist

  • Article type: Journal Article
    OBJECTIVE: To noninvasively detect prostate cancer and predict the Gleason grade using single-modality T2-weighted imaging with a deep-learning approach.
    METHODS: Patients with prostate cancer, confirmed by histopathology, who underwent magnetic resonance imaging examinations at our hospital during September 2015-June 2022 were retrospectively included in an internal dataset. An external dataset from another medical center and a public challenge dataset were used for external validation. A deep-learning approach was designed for prostate cancer detection and Gleason grade prediction. The area under the curve (AUC) was calculated to compare the model performance.
    RESULTS: For prostate cancer detection, the internal dataset comprised data from 195 healthy individuals (age: 57.27 ± 14.45 years) and 302 patients (age: 72.20 ± 8.34 years) diagnosed with prostate cancer. The AUC of our model for prostate cancer detection in the validation set (n = 96, 19.7%) was 0.918. For Gleason grade prediction, datasets comprising data from 283 of the 302 patients with prostate cancer were used, with 227 (age: 72.06 ± 7.98 years) and 56 (age: 72.78 ± 9.49 years) patients used for training and testing, respectively. The external and public challenge datasets comprised data from 48 (age: 72.19 ± 7.81 years) and 91 patients (age information unavailable), respectively. The AUC of our model for Gleason grade prediction in the training set (n = 227) was 0.902, whereas those of the validation (n = 56), external validation (n = 48), and public challenge validation sets (n = 91) were 0.854, 0.776, and 0.838, respectively.
    CONCLUSIONS: Through multicenter dataset validation, our proposed deep-learning method could detect prostate cancer and predict the Gleason grade better than human experts.
    UNASSIGNED: Precise prostate cancer detection and Gleason grade prediction have great significance for clinical treatment and decision making.
    CONCLUSIONS: Prostate segmentation is easier to annotate than prostate cancer lesions for radiologists. Our deep-learning method detected prostate cancer and predicted the Gleason grade, outperforming human experts. Non-invasive Gleason grade prediction can reduce the number of unnecessary biopsies.
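    The AUC values reported above (e.g., 0.918 for detection) summarize how well the model's scores rank cancer cases above healthy controls. A minimal pure-Python sketch of the rank-based (Mann-Whitney) AUC, using made-up scores for illustration rather than the study's data:

```python
def auc(labels, scores):
    """Rank-based AUC: the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case
    (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example (illustrative, not study data): 4 cancer cases, 4 healthy controls
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
print(auc(labels, scores))  # → 0.9375
```

    An AUC of 1.0 would mean every cancer case outscores every control; 0.5 is chance level.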

  • Article type: Journal Article
    OBJECTIVE: The objective of this study was to develop and validate an ovarian tumor ultrasonographic diagnostic model based on deep convolutional neural networks (DCNN) and compare its diagnostic performance with that of human experts.
    METHODS: We collected 486 ultrasound images of 192 women with malignant ovarian tumors and 617 ultrasound images of 213 women with benign ovarian tumors, all confirmed by pathological examination. The image dataset was split into a training set and a validation set according to a 7:3 ratio. We selected 5 DCNNs to develop our model: MobileNet, Xception, Inception, ResNet and DenseNet. We compared the performance of the five models through the area under the curve (AUC), sensitivity, specificity, and accuracy. We then randomly selected 200 images from the validation set as the test set. We asked three expert radiologists to diagnose the images to compare the performance of radiologists and the DCNN model.
    RESULTS: In the validation set, the AUC of DenseNet was 0.997, compared with 0.988 for ResNet, 0.987 for Inception, 0.968 for Xception, and 0.836 for MobileNet. In the test set, accuracy was 0.975 with the DenseNet model versus 0.825 with the radiologists (p < 0.0001), sensitivity was 0.975 versus 0.700 (p < 0.0001), and specificity was 0.975 versus 0.908 (p < 0.001).
    CONCLUSIONS: DenseNet performed better than the other DCNNs and expert radiologists in distinguishing malignant from benign ovarian tumors on ultrasound images, a finding that needs to be further explored in clinical trials.
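    The accuracy, sensitivity, and specificity compared above are simple functions of the four confusion-matrix counts. A minimal sketch, with illustrative counts for a 200-image test set (not the study's actual counts):

```python
def metrics(tp, fp, tn, fn):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    accuracy = correct predictions over all cases."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts: 80 malignant and 120 benign images
sens, spec, acc = metrics(tp=78, fp=11, tn=109, fn=2)
print(round(sens, 3), round(spec, 3), round(acc, 3))  # → 0.975 0.908 0.935
```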

  • Article type: Published Erratum
    [This corrects the article DOI: 10.3389/fnins.2023.1152619.].

  • Article type: Journal Article
    OBJECTIVE: We aimed to develop a deep-learning-based classification model using breast ultrasound dynamic videos and to evaluate its diagnostic performance against a classic model based on static ultrasound images and against radiologists of different seniority.
    METHODS: We collected 1000 breast lesions from 888 patients between May 2020 and December 2021. Each lesion contained two static images and two dynamic videos. We randomly divided these lesions into training, validation, and test sets in a 7:2:1 ratio. Two deep learning (DL) models, DL-video and DL-image, were developed based on 3D ResNet-50 and 2D ResNet-50 using 2000 dynamic videos and 2000 static images, respectively. Lesions in the test set were evaluated to compare the diagnostic performance of the two models and of six radiologists of different seniority.
    RESULTS: The area under the curve of the DL-video model was significantly higher than that of the DL-image model (0.969 vs. 0.925, P = 0.0172) and those of the six radiologists (0.969 vs. 0.779-0.912, P < 0.05). All radiologists performed better when evaluating the dynamic videos than the static images. Furthermore, more senior radiologists performed better in reading both images and videos.
    CONCLUSIONS: The DL-video model can discern more detailed spatial and temporal information for accurate classification of breast lesions than the conventional DL-image model and radiologists, and its clinical application can further improve the diagnosis of breast cancer.
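    The 7:2:1 random split described in the methods can be sketched as follows; the seed and lesion identifiers are illustrative, not taken from the study:

```python
import random

def split_721(items, seed=0):
    """Shuffle a copy of `items` and split it into
    training/validation/test subsets in a 7:2:1 ratio."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.7)
    n_val = int(n * 0.2)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# 1000 lesion IDs, matching the dataset size reported above
train, val, test = split_721(list(range(1000)))
print(len(train), len(val), len(test))  # → 700 200 100
```

    Splitting at the lesion level (rather than the image level) keeps both views of a lesion in the same subset, which avoids leakage between training and test sets.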

  • Article type: Journal Article
    Visual expertise reflects accumulated experience in reviewing domain-specific images and has been shown to modulate brain function in task-specific functional magnetic resonance imaging studies. However, little is known about how visual experience modulates resting-state brain network dynamics. To explore this, we recruited 22 radiology interns (RIs) and 22 matched healthy controls and used resting-state functional magnetic resonance imaging (rs-fMRI) and the degree centrality (DC) method to investigate changes in brain network dynamics. Our results revealed significant differences in DC between the RI and control groups in brain regions associated with visual processing, decision making, memory, attention control, and working memory. Using a recursive feature elimination-support vector machine algorithm, we achieved a classification accuracy of 88.64%. Our findings suggest that visual experience modulates resting-state brain network dynamics in radiologists and provide new insights into the neural mechanisms of visual expertise.
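    Degree centrality as used in rs-fMRI analyses counts, for each brain region, how many other regions' time series correlate with it above a threshold. A simplified sketch on a toy 4-region correlation matrix; the threshold and values are illustrative only:

```python
def degree_centrality(corr, threshold=0.25):
    """Binarize a correlation matrix at `threshold` and count
    supra-threshold connections per node (diagonal excluded)."""
    n = len(corr)
    return [
        sum(1 for j in range(n) if j != i and corr[i][j] > threshold)
        for i in range(n)
    ]

# Toy symmetric correlation matrix for 4 regions (illustrative values)
corr = [
    [1.0, 0.6, 0.1, 0.4],
    [0.6, 1.0, 0.3, 0.2],
    [0.1, 0.3, 1.0, 0.0],
    [0.4, 0.2, 0.0, 1.0],
]
print(degree_centrality(corr))  # → [2, 2, 1, 1]
```

    In practice each voxel's or region's DC map would then be compared between groups; this sketch only shows the centrality computation itself.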

  • Article type: Journal Article
    OBJECTIVE: To evaluate the diagnostic performance of a commercial artificial intelligence (AI)-assisted ultrasonography (US) for thyroid nodules and to validate its value in real-world medical practice.
    METHODS: From March 2021 to July 2021, 236 consecutive patients with 312 suspicious thyroid nodules were prospectively enrolled in this study. One experienced radiologist performed US examinations with a real-time AI system (S-Detect). US images and AI reports of the nodules were recorded. Nine residents and three senior radiologists were invited to make a "benign" or "malignant" diagnosis based on the recorded US images without knowing the AI reports. After referring to the AI reports, the diagnosis was made again. The diagnostic performance of the AI system and of the residents and senior radiologists with and without the AI reports was analyzed.
    RESULTS: The sensitivity, accuracy, and AUC of the AI system were 0.95, 0.84, and 0.753, respectively, and were not statistically different from those of the experienced radiologists, but were superior to those of the residents (all p < 0.01). The AI-assisted resident strategy significantly improved the accuracy and sensitivity for nodules ≤ 1.5 cm (all p < 0.01), while reducing the unnecessary biopsy rate by up to 27.7% for nodules > 1.5 cm (p = 0.034).
    CONCLUSIONS: For cancer diagnosis, the AI system achieved performance comparable to that of an average senior thyroid radiologist. The AI-assisted strategy can significantly improve the overall diagnostic performance of less-experienced radiologists, increasing the detection of thyroid cancers ≤ 1.5 cm while reducing unnecessary biopsies for nodules > 1.5 cm in real-world medical practice.
    CONCLUSIONS: • The AI system reached a senior radiologist-like level in the evaluation of thyroid cancer and could significantly improve the overall diagnostic performance of residents. • The AI-assisted strategy significantly improved ≤ 1.5 cm thyroid cancer screening AUC, accuracy, and sensitivity of the residents, leading to an increased detection of thyroid cancer while maintaining a comparable specificity to that of radiologists alone. • The AI-assisted strategy significantly reduced the unnecessary biopsy rate for thyroid nodules > 1.5 cm by the residents, while maintaining a comparable sensitivity to that of radiologists alone.

  • Article type: Journal Article
    BACKGROUND: Deep learning algorithms (DLAs) could enable automatic measurements of solid portions of mixed ground-glass nodules (mGGNs) in agreement with the invasive component sizes measured during pathologic examinations. However, the measurement of pure ground-glass nodules (pGGNs) based on DLAs has rarely been reported in the literature.
    OBJECTIVE: To evaluate the use of a commercially available DLA for the automatic measurement of pGGNs on computed tomography (CT).
    METHODS: In this retrospective study, we included 68 patients with 81 pGGNs. The maximum diameter of the nodules was manually measured by senior radiologists and automatically segmented and measured by the DLA. Agreement between the measurements by the radiologist and DLA was assessed using Bland-Altman plots, and correlations were analyzed using Pearson correlation. Finally, we evaluated the association between the radiologist and DLA measurements and the invasiveness of lung adenocarcinoma in patients with pGGNs on preoperative CT.
    RESULTS: The radiologist and DLA measurements exhibited good agreement, with a Bland-Altman bias of 3.0%, which was clinically acceptable. The correlation between the two sets of maximum diameters was also strong, with a Pearson correlation coefficient of 0.968 (P < 0.001). In addition, both sets of maximum diameters were larger in the invasive adenocarcinoma group than in the non-invasive adenocarcinoma group (P < 0.001).
    CONCLUSIONS: Automatic pGGNs measurements by the DLA were comparable with those measured manually and were closely associated with the invasiveness of lung adenocarcinoma.
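    Bland-Altman analysis, as used above, assesses agreement between two measurement methods through the paired differences: the bias is the mean difference and the 95% limits of agreement are bias ± 1.96 SD. A minimal sketch with illustrative paired diameters (not the study's data):

```python
import statistics

def bland_altman(a, b):
    """Return the bias (mean of paired differences a - b) and the
    95% limits of agreement (bias ± 1.96 × SD of the differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired maximum diameters in mm: radiologist vs. DLA
radiologist = [8.0, 10.5, 6.2, 12.1, 9.4]
dla = [8.3, 10.2, 6.5, 12.4, 9.1]
bias, (lo, hi) = bland_altman(radiologist, dla)
print(round(bias, 2))
```

    A bias near zero with narrow limits of agreement indicates that the two methods can be used interchangeably.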

  • Article type: Journal Article
    The health of radiation workers has long been a focus of concern. Epidemiological investigations show that long-term exposure to low-dose ionizing radiation can affect human health, particularly through cancer and cardiovascular disease, and this has been studied extensively. However, few reports to date have examined blood or other biological samples from radiation workers. In this study, radiation workers and healthy controls were strictly screened, and the mRNA and circRNA transcriptomes of their peripheral venous blood were sequenced. In parallel, appropriate datasets were selected from the GEO database for bioinformatics analysis, and a circRNA-miRNA-mRNA network was constructed. We identified nine differentially expressed circRNAs, three miRNAs, and two hub genes (NOD2 and IRF7). These differentially expressed genes and non-coding RNAs are closely related to ionizing radiation damage and may serve as biomarkers. In conclusion, this study may provide new insights into the role of the circRNA-miRNA-mRNA regulatory network in the health of radiation workers and offers a new strategy for future radiation biology research.

  • Article type: Journal Article
    A SARS-CoV-2 virus-specific reverse transcriptase-polymerase chain reaction (RT-PCR) test is usually used to diagnose COVID-19. However, this test requires up to 2 days for completion, and serial testing may be essential to avoid false-negative outcomes. The supply of RT-PCR test kits is currently limited, highlighting the need for alternative approaches for the precise and rapid diagnosis of COVID-19. Patients suspected of SARS-CoV-2 infection can be assessed using chest CT images. However, CT images alone cannot rule out SARS-CoV-2 infection, because individual patients may exhibit normal radiological findings in the early phases of the disease. A machine learning (ML)-based recognition and segmentation system was developed to automatically detect and quantify infection regions in CT scans of COVID-19 patients. The quantitative assessment exhibited suitable performance for automatic infection-region delineation. The developed ML models were suitable for the direct detection of COVID-19 (+) cases. ML was confirmed by frontline medical specialists to be a complementary technique for diagnosing COVID-19 (+). Complete manual delineation of COVID-19 lesions often requires up to 225.5 min; the proposed RILML method decreases the delineation time to 7 min after four iterations of model updating.
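    Automatic infection-region segmentation of the kind described is typically scored against a manual delineation by an overlap measure such as the Dice coefficient. A minimal sketch on flat binary masks; the masks are toy values, not study data:

```python
def dice(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)
    for two binary masks of equal length (1 = infected voxel)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

# Toy 3x3 masks flattened to 1-D
pred  = [0, 1, 1, 0, 1, 0, 0, 0, 0]
truth = [0, 1, 1, 1, 1, 0, 0, 0, 0]
print(round(dice(pred, truth), 3))  # → 0.857
```

    A Dice score of 1.0 means perfect overlap with the manual delineation; 0.0 means no overlap at all.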

  • Article type: Journal Article
    OBJECTIVE: Artificial intelligence (AI) has been an important addition to medicine. We aimed to explore the use of deep learning (DL) to distinguish benign from malignant lesions with breast ultrasound (BUS).
    METHODS: The DL model was trained with BUS nodule data using a standard protocol (1271 malignant nodules, 1053 benign nodules, and 2144 images of the contralateral normal breast). The model was tested with 692 images of 256 breast nodules. We used the accuracy, precision, recall, harmonic mean of recall and precision, and mean average precision as the indices to assess the DL model. We used 100 BUS images to evaluate differences in diagnostic accuracy among the AI system, experts (>25 years of experience), and physicians with varying levels of experience. A receiver operating characteristic curve was generated to evaluate the accuracy for distinguishing between benign and malignant breast nodules.
    RESULTS: The DL model showed 73.3% sensitivity and 94.9% specificity for the diagnosis of benign versus malignant breast nodules (area under the curve, 0.943). No significant difference in diagnostic ability was found between the AI system and the expert group (P = .951), although the physicians with lower levels of experience showed significant differences from the AI and expert groups (P = .01 and .03, respectively).
    CONCLUSIONS: Deep learning could distinguish between benign and malignant breast nodules with BUS. On BUS images, DL achieved diagnostic accuracy equivalent to that of expert physicians.
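    The "harmonic mean of recall and precision" used as an index above is the F1 score. A minimal sketch with illustrative counts (not the study's data):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision (TP/(TP+FP)) and recall (TP/(TP+FN))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 80 predicted positives, 80 actual positives
print(f1_score(tp=60, fp=20, fn=20))  # → 0.75
```

    Because it balances false positives and false negatives in one number, F1 is a common companion to accuracy when the two classes are imbalanced.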