DenseNet-121

  • Article type: Journal Article
    Acute lymphoblastic leukemia, commonly referred to as ALL, is a type of cancer that can affect both the blood and the bone marrow. The diagnostic process is difficult because it often calls for specialist testing, such as blood tests, bone marrow aspiration, and biopsy, all of which are time-consuming and expensive. An early diagnosis of ALL is essential so that therapy can begin in a timely and suitable manner. In recent medical diagnostics, substantial progress has been achieved through the integration of artificial intelligence (AI) and Internet of Things (IoT) devices. Our proposal introduces a new AI-based Internet of Medical Things (IoMT) framework designed to automatically identify leukemia from peripheral blood smear (PBS) images. In this study, we present a novel deep learning-based fusion model to detect ALL types of leukemia. The system seamlessly delivers the diagnostic reports to a centralized database, inclusive of patient-specific devices. After blood samples are collected at the hospital, the PBS images are transmitted to the cloud server through a WiFi-enabled microscopic device. In the cloud server, a new fusion model capable of classifying ALL from PBS images is configured. The fusion model is trained on a dataset of 6512 original and segmented images from 89 individuals. Two input channels are used for feature extraction in the fusion model: one for the original images and one for the segmented images. VGG16 extracts features from the original images, whereas DenseNet-121 extracts features from the segmented images. The two output feature vectors are merged, and dense layers are used for the classification of leukemia. The proposed fusion model obtains an accuracy of 99.89%, a precision of 99.80%, and a recall of 99.72%, which places it in an excellent position for the classification of leukemia. It outperforms several state-of-the-art convolutional neural network (CNN) models. Consequently, the proposed model has the potential to save both lives and effort. For a more comprehensive simulation of the entire methodology, a web application (beta version) has been developed in this study. The application is designed to determine the presence or absence of leukemia in individuals. The findings of this study hold significant potential for application in biomedical research, particularly in enhancing the accuracy of computer-aided leukemia detection.
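    For readers who want a concrete picture of the fusion architecture described in this abstract, the following is a minimal Keras sketch: VGG16 on the original images and DenseNet-121 on the segmented images, with their pooled features concatenated and passed through dense layers. The 224x224 input size, the 256-unit head, the dropout rate, and the binary output are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a two-branch fusion classifier (assumed sizes, not the paper's).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, DenseNet121

raw_in = layers.Input((224, 224, 3), name="raw_image")
seg_in = layers.Input((224, 224, 3), name="segmented_image")

# VGG16 extracts features from the original images,
# DenseNet-121 from the segmented images (ImageNet weights, no classifier head).
vgg = VGG16(include_top=False, weights="imagenet", pooling="avg")
dense = DenseNet121(include_top=False, weights="imagenet", pooling="avg")

merged = layers.Concatenate()([vgg(raw_in), dense(seg_in)])
x = layers.Dense(256, activation="relu")(merged)   # illustrative dense head
x = layers.Dropout(0.3)(x)
out = layers.Dense(1, activation="sigmoid")(x)     # ALL vs. normal (assumed)

fusion_model = Model([raw_in, seg_in], out)
fusion_model.compile(optimizer="adam", loss="binary_crossentropy",
                     metrics=["accuracy", tf.keras.metrics.Precision(),
                              tf.keras.metrics.Recall()])
```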

  • Article type: Journal Article
    Arrhythmias are a leading cause of cardiovascular morbidity and mortality. Portable electrocardiogram (ECG) monitors have been used for decades to monitor patients with arrhythmias. These monitors provide real-time data on cardiac activity to identify irregular heartbeats. However, rhythm monitoring and wave detection, especially in the 12-lead ECG, make it difficult to interpret the analysis by correlating it with the patient's condition. Moreover, even experienced practitioners find ECG analysis challenging. All of this is due to the noise in ECG readings and the frequencies at which the noise occurs. The primary objective of this research is to remove noise from and extract features of ECG signals using the proposed infinite impulse response (IIR) filter, improving ECG quality so that non-experts can understand it better. For this purpose, this study used ECG signal data from the Massachusetts Institute of Technology Beth Israel Hospital (MIT-BIH) database. This allows the acquired data to be easily evaluated using machine learning (ML) and deep learning (DL) models and classified into rhythm classes. To achieve accurate results, we applied hyperparameter (HP) tuning for the ML classifiers and fine-tuning (FT) for the DL models. This study also examined the categorization of arrhythmias using different filters and the resulting changes in accuracy. When all models were evaluated, DenseNet-121 without FT achieved 99% accuracy, while with FT it achieved better results, at 99.97% accuracy.
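    As an illustration of the IIR-based denoising step described here, the sketch below applies a zero-phase Butterworth band-pass filter to an ECG trace with SciPy. The 0.5-40 Hz passband, the filter order, and the 360 Hz sampling rate (typical of MIT-BIH records) are assumptions; the paper's exact filter design is not reproduced.

```python
# Minimal sketch of IIR-based ECG denoising (assumed Butterworth band-pass).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 360.0  # MIT-BIH sampling frequency (Hz)

def iir_bandpass(ecg: np.ndarray, low: float = 0.5, high: float = 40.0,
                 order: int = 4) -> np.ndarray:
    """Remove baseline wander and high-frequency noise from an ECG signal."""
    b, a = butter(order, [low / (FS / 2), high / (FS / 2)], btype="band")
    # filtfilt applies the IIR filter forward and backward (zero phase shift),
    # so QRS timing is preserved for later feature extraction.
    return filtfilt(b, a, ecg)

# Example: filter a noisy synthetic trace before feeding it to a classifier.
noisy = np.sin(2 * np.pi * 1.2 * np.arange(0, 10, 1 / FS)) + \
        0.3 * np.random.randn(int(10 * FS))
clean = iir_bandpass(noisy)
```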

  • Article type: Journal Article
    Efficient waste management is essential for human well-being and environmental health, as neglecting proper disposal practices can lead to financial losses and the depletion of natural resources. Given rapid urbanization and population growth, developing an automated, innovative waste classification model is imperative. To address this need, our paper introduces a novel and robust solution: a smart waste classification model that leverages a hybrid deep learning model (Optimized DenseNet-121 + SVM) to categorize waste items using the TrashNet dataset. Our proposed approach uses the advanced deep learning model DenseNet-121, optimized for superior performance, to extract meaningful features from an expanded TrashNet dataset. These features are subsequently fed into a support vector machine (SVM) for precise classification. Employing data augmentation techniques further enhances classification accuracy while mitigating the risk of overfitting, especially when working with limited TrashNet data. The results of our experimental evaluation of this hybrid deep learning model are highly promising, with an impressive accuracy rate of 99.84%. This accuracy surpasses similar existing models, affirming the efficacy and potential of our approach to revolutionize waste classification for a sustainable and cleaner future.
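    A minimal sketch of the hybrid pipeline this abstract describes follows: a pre-trained DenseNet-121 used as a frozen feature extractor, with an SVM on top. The input size, SVM kernel, and C value are illustrative assumptions rather than the authors' settings.

```python
# Minimal sketch: DenseNet-121 deep features + SVM classifier (assumed settings).
import numpy as np
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications.densenet import preprocess_input
from sklearn.svm import SVC

extractor = DenseNet121(include_top=False, weights="imagenet", pooling="avg")

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: (N, 224, 224, 3) array of waste photos -> (N, 1024) descriptors."""
    return extractor.predict(preprocess_input(images.astype("float32")),
                             verbose=0)

# X_train/X_test are image arrays, y_train/y_test the TrashNet class labels.
# svm = SVC(kernel="rbf", C=10.0)
# svm.fit(extract_features(X_train), y_train)
# accuracy = svm.score(extract_features(X_test), y_test)
```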

  • Article type: Journal Article
    Brain hemorrhage is one of the leading causes of death, resulting from the sudden rupture of a blood vessel in the brain and bleeding into the brain parenchyma. Early detection and segmentation of brain damage are extremely important for prompt treatment. Some previous studies focused on localizing cerebral hemorrhage with bounding boxes without delineating the specific damaged regions. In practice, however, doctors need to detect and segment the hemorrhage area more accurately. In this paper, we propose a method for automatic brain hemorrhage detection and segmentation using network models that improve on U-Net by replacing its backbone with typical feature extraction networks, i.e., DenseNet-121, ResNet-50, and MobileNet-V2. The U-Net architecture has many outstanding advantages: it does not require extensive preprocessing of the original images, and it can be trained on a small dataset while providing low-error segmentation of medical images. We use a transfer learning approach with a head CT dataset gathered from Kaggle that includes two classes, bleeding and non-bleeding. In addition, we compare the proposed models with previous works to give an overview of which model is suitable for cerebral CT images. On the head CT dataset, our proposed models achieve a segmentation accuracy of up to 99%.
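    The sketch below illustrates the general idea of a segmentation network with a DenseNet-121 encoder. For brevity it uses a plain upsampling decoder and omits the U-Net skip connections that the paper's models would include; the 256x256 input and binary hemorrhage mask are assumptions.

```python
# Minimal sketch: DenseNet-121 encoder + simple upsampling decoder (no skips).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet121

inp = layers.Input((256, 256, 3))
encoder = DenseNet121(include_top=False, weights="imagenet", input_tensor=inp)
x = encoder.output                      # (8, 8, 1024) feature map

# Five x2 transposed convolutions bring the map back to the input resolution.
for filters in (512, 256, 128, 64, 32):
    x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same",
                               activation="relu")(x)
mask = layers.Conv2D(1, 1, activation="sigmoid", name="hemorrhage_mask")(x)

seg_model = Model(inp, mask)
seg_model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
```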

  • Article type: Journal Article
    Diabetic retinopathy (DR) can sometimes be treated, and irreversible vision loss prevented, if it is caught and managed properly. In this work, a deep learning (DL) model is employed to accurately identify all five stages of DR.
    The proposed methodology considers two cases, one with and one without image augmentation. A balanced dataset meeting the same criteria in both cases is then generated using augmentation methods. The DenseNet-121-based model performed exceptionally well on the Asia Pacific Tele-Ophthalmology Society (APTOS) and Diabetic Retinopathy (DDR) datasets when compared with other methods for identifying the five stages of DR.
    Our proposed model achieved the highest test accuracy of 98.36%, a top-2 accuracy of 100%, and a top-3 accuracy of 100% on the APTOS dataset, and the highest test accuracy of 79.67%, a top-2 accuracy of 92.76%, and a top-3 accuracy of 98.94% on the DDR dataset. Additional criteria (precision, recall, and F1-score) for gauging the efficacy of the proposed model were evaluated on APTOS and DDR.
    It was found that feeding the model higher-quality images increased its learning efficiency and capability, compared with both state-of-the-art techniques and the non-augmented model.
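    The following sketch shows how a five-class DenseNet-121 DR grader with the reported top-2/top-3 accuracy metrics and light on-the-fly augmentation might be set up in Keras; the image size, classification head, and augmentation parameters are illustrative assumptions.

```python
# Minimal sketch: five-class DR grading with DenseNet-121 and top-k metrics.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet121

base = DenseNet121(include_top=False, weights="imagenet", pooling="avg",
                   input_shape=(224, 224, 3))
x = layers.Dropout(0.3)(base.output)
out = layers.Dense(5, activation="softmax")(x)       # 5 DR stages
model = Model(base.input, out)

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=[
        "accuracy",
        tf.keras.metrics.SparseTopKCategoricalAccuracy(k=2, name="top2"),
        tf.keras.metrics.SparseTopKCategoricalAccuracy(k=3, name="top3"),
    ],
)

# Simple on-the-fly augmentation, used only in the "with augmentation" case.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])
```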

  • Article type: Journal Article
    Early detection of eye diseases is the only way to obtain timely treatment and prevent blindness. Colour fundus photography (CFP) is an effective fundus examination technique. Because the symptoms of eye diseases are similar in the early stages and the type of disease is difficult to distinguish, computer-assisted automated diagnostic techniques are needed. This study focuses on classifying an eye disease dataset using hybrid techniques based on feature extraction with fusion methods. Three strategies were designed to classify CFP images for the diagnosis of eye disease. The first method classifies the eye disease dataset using an Artificial Neural Network (ANN) with features from the MobileNet and DenseNet121 models separately, after reducing the high dimensionality and repetitive features using Principal Component Analysis (PCA). The second method classifies the eye disease dataset using an ANN on the basis of fused features from the MobileNet and DenseNet121 models, before and after feature reduction. The third method classifies the eye disease dataset using an ANN based on the fused features from the MobileNet and DenseNet121 models combined separately with handcrafted features. Based on the fused MobileNet and handcrafted features, the ANN attained an AUC of 99.23%, an accuracy of 98.5%, a precision of 98.45%, a specificity of 99.4%, and a sensitivity of 98.75%.
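    The sketch below illustrates the second strategy in this abstract: MobileNet and DenseNet121 descriptors are fused, reduced with PCA, and classified with a small ANN. The number of PCA components and the hidden-layer sizes are illustrative assumptions, and scikit-learn's MLPClassifier stands in for the ANN.

```python
# Minimal sketch: fused deep features -> PCA -> small ANN (assumed sizes).
import numpy as np
from tensorflow.keras.applications import MobileNet, DenseNet121
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

mobilenet = MobileNet(include_top=False, weights="imagenet", pooling="avg")
densenet = DenseNet121(include_top=False, weights="imagenet", pooling="avg")

def fused_features(images: np.ndarray) -> np.ndarray:
    """Concatenate MobileNet (1024-d) and DenseNet121 (1024-d) descriptors.
    Per-model preprocessing is omitted for brevity."""
    f1 = mobilenet.predict(images, verbose=0)
    f2 = densenet.predict(images, verbose=0)
    return np.concatenate([f1, f2], axis=1)

# pca = PCA(n_components=300)                     # drop redundant dimensions
# ann = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=500)
# ann.fit(pca.fit_transform(fused_features(X_train)), y_train)
# score = ann.score(pca.transform(fused_features(X_test)), y_test)
```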

  • Article type: Journal Article
    Crop pests have a great impact on the quality and yield of crops. Using deep learning to identify crop pests is important for precise crop management.
    To address the lack of datasets and the poor classification accuracy in current pest research, a large-scale pest dataset named HQIP102 is built and a pest identification model named MADN is proposed. The large IP102 crop pest dataset has some problems, such as incorrectly labeled pest categories and images in which the pest subject is missing. In this study, the IP102 dataset was carefully filtered to obtain the HQIP102 dataset, which contains 47,393 images of 102 pest classes on eight crops. The MADN model improves the representation capability of DenseNet in three ways. First, the Selective Kernel unit is introduced into the DenseNet model; it can adaptively adjust the size of the receptive field according to the input and capture target objects of different sizes more effectively. Second, to make the features obey a stable distribution, the Representative Batch Normalization module is used in the DenseNet model. In addition, adaptively selecting whether to activate neurons can improve the performance of the network, for which the ACON activation function is used in the DenseNet model. Finally, the MADN model is constituted by ensemble learning.
    Experimental results show that MADN achieved an accuracy of 75.28% and an F1-score of 65.46% on the HQIP102 dataset, improvements of 5.17 and 5.20 percentage points, respectively, over the unmodified DenseNet-121. Compared with ResNet-101, the accuracy and F1-score of the MADN model improved by 10.48 and 10.56 percentage points, respectively, while the parameter size decreased by 35.37%. Deploying the model to a cloud server with a mobile application helps secure crop yield and quality.
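    Of the MADN design, only the final ensemble-learning step is easy to convey in a short sketch; the Selective Kernel, Representative Batch Normalization, and ACON modifications are not reproduced below. Plain DenseNet-121 classifiers stand in for the improved branches, and their softmax outputs are averaged.

```python
# Minimal sketch of the ensemble step: average softmax outputs of members.
import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet121

NUM_CLASSES = 102  # pest classes in HQIP102

def make_member(name: str) -> Model:
    # Stand-in for one improved DenseNet branch of the MADN ensemble.
    base = DenseNet121(include_top=False, weights="imagenet", pooling="avg",
                       input_shape=(224, 224, 3))
    out = layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    return Model(base.input, out, name=name)

members = [make_member(f"member_{i}") for i in range(3)]

def ensemble_predict(images: np.ndarray) -> np.ndarray:
    """Average the class-probability outputs of all ensemble members."""
    probs = [m.predict(images, verbose=0) for m in members]
    return np.mean(probs, axis=0)
```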

  • Article type: Journal Article
    In this study, we develop a framework for an intelligent and self-supervised industrial pick-and-place operation in cluttered environments. Our target is to have the agent learn to perform prehensile and non-prehensile robotic manipulations to improve the efficiency and throughput of the pick-and-place task. To achieve this target, we specify the problem as a Markov decision process (MDP) and deploy a model-free, temporal-difference deep reinforcement learning (RL) algorithm known as the deep Q-network (DQN). We consider three actions in our MDP: one is 'grasping' from the prehensile manipulation category, and the other two are 'left-slide' and 'right-slide' from the non-prehensile manipulation category. Our DQN is composed of three fully convolutional networks (FCN) based on the memory-efficient architecture of DenseNet-121, which are trained together without causing any bottleneck situations. Each FCN corresponds to one discrete action and outputs a pixel-wise map of affordances for the relevant action. Rewards are allocated after every forward pass, and backpropagation is carried out for weight tuning in the corresponding FCN. In this manner, non-prehensile manipulations are learnt which can, in turn, lead to possible successful prehensile manipulations in the near future, and vice versa, thus increasing the efficiency and throughput of the pick-and-place task. The Results section shows performance comparisons of our approach with a baseline deep learning approach and a ResNet architecture-based approach, along with very promising test results at varying clutter densities across a range of complex scenario test cases.
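    The sketch below conveys the action-selection idea described here: one fully convolutional Q-network per action, each producing a pixel-wise affordance map over the scene, with the greedy choice being the (action, pixel) pair of maximum value. The DenseNet-121 backbone with a 1x1 head and bilinear upsampling is a simplified stand-in for the paper's FCNs, and the 224x224 input is an assumption.

```python
# Minimal sketch: per-action affordance maps and greedy (action, pixel) choice.
import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet121

ACTIONS = ["grasp", "left-slide", "right-slide"]

def make_fcn(name: str) -> Model:
    base = DenseNet121(include_top=False, weights="imagenet",
                       input_shape=(224, 224, 3))
    # 1x1 convolution gives one Q value per spatial location, then the map is
    # upsampled back to the input resolution.
    q = layers.Conv2D(1, 1)(base.output)
    q = layers.UpSampling2D(32, interpolation="bilinear")(q)
    return Model(base.input, q, name=name)

fcns = {a: make_fcn(f"fcn_{a.replace('-', '_')}") for a in ACTIONS}

def greedy_action(observation: np.ndarray):
    """Return (action, (row, col)) with the maximum predicted Q value."""
    q_maps = np.stack([fcns[a].predict(observation[None], verbose=0)[0, ..., 0]
                       for a in ACTIONS])          # shape (3, 224, 224)
    a_idx, r, c = np.unravel_index(np.argmax(q_maps), q_maps.shape)
    return ACTIONS[a_idx], (int(r), int(c))
```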

  • Article type: Journal Article
    COVID-19 is an infectious and contagious virus. As of this writing, more than 160 million people have been infected since its emergence, including more than 125,000 in Algeria. In this work, we first collected a dataset of 4986 COVID and non-COVID images confirmed by RT-PCR tests at the Tlemcen hospital in Algeria. Then we performed transfer learning on deep learning models that achieved the best results on the ImageNet dataset, such as DenseNet121, DenseNet201, VGG16, VGG19, Inception-ResNet-V2, and Xception, in order to conduct a comparative study. On this basis, we propose an explainable model based on the DenseNet201 architecture and the Grad-CAM explanation algorithm to detect COVID-19 in chest CT images and explain the output decision. Experiments have shown promising results and proven that the introduced model can be beneficial for diagnosing and following up patients with COVID-19.
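    As an illustration of the explanation component, the sketch below computes a Grad-CAM heatmap for a DenseNet201-based classifier. The two-class softmax head and 224x224 input are assumptions; the last convolutional feature map is located programmatically rather than by name.

```python
# Minimal sketch: Grad-CAM on a DenseNet201 classifier (assumed 2-class head).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet201

base = DenseNet201(include_top=False, weights="imagenet", pooling="avg",
                   input_shape=(224, 224, 3))
out = layers.Dense(2, activation="softmax")(base.output)   # COVID / non-COVID
model = Model(base.input, out)

# Last layer with a 4D output = last convolutional feature map.
last_conv = next(l for l in reversed(model.layers)
                 if len(l.output.shape) == 4)
grad_model = Model(model.input, [last_conv.output, model.output])

def grad_cam(image: np.ndarray, class_idx: int = 0) -> np.ndarray:
    """Return a coarse heatmap for one preprocessed image of shape (1, 224, 224, 3)."""
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))            # channel weights
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```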

  • Article type: Journal Article
    Recently, the destructive impact of Coronavirus 2019, commonly known as COVID-19, has affected public health and human lives. This catastrophic effect has disrupted human experience by introducing an exponentially more damaging and unpredictable health crisis than any since the Second World War (Kursumovic et al. in Anaesthesia 75: 989-992, 2020). The strongly communicable characteristics of COVID-19 within human communities have made the crisis a severe pandemic. With no COVID-19 vaccine available to control, rather than cure, the disease, early and accurate detection of the virus can be a promising technique for tracking and preventing the infection from spreading (e.g., by isolating patients). This situation calls for improved auxiliary COVID-19 detection techniques. Computed tomography (CT) imaging is a widely used technique for pneumonia because of its availability. Artificial intelligence-aided image analysis might be a promising alternative for identifying COVID-19. This paper presents a promising technique for predicting COVID-19 from CT images using convolutional neural networks (CNN). The novel approach is based on a recently modified CNN architecture (DenseNet-121) to predict COVID-19. The results exceeded 92% accuracy, with a 95% recall, showing acceptable performance for the prediction of COVID-19.
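    For clarity on the reported figures, the snippet below simply computes accuracy and recall with scikit-learn on placeholder labels; y_true and y_pred are illustrative stand-ins for the CT test labels and the model's binary predictions.

```python
# Minimal sketch: the two reported metrics (accuracy and recall) via scikit-learn.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

y_true = np.array([1, 0, 1, 1, 0, 1])      # 1 = COVID-19, 0 = non-COVID (placeholder)
y_pred = np.array([1, 0, 1, 0, 0, 1])      # illustrative model outputs

print("accuracy:", accuracy_score(y_true, y_pred))   # fraction of correct predictions
print("recall:  ", recall_score(y_true, y_pred))     # sensitivity to COVID-19 cases
```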