YOLOv5s

  • Article type: Journal Article
    The excessive use of electronic devices for prolonged periods has led to problems such as neck pain and pressure injury in sedentary people. If not detected and corrected early, these issues can cause serious risks to physical health. Detectors for generic objects cannot adequately capture such subtle neck behaviors, resulting in missed detections. In this paper, we explore a deep learning-based solution for detecting abnormal behavior of the neck and propose a model called NABNet that combines object detection based on YOLOv5s with pose estimation based on Lightweight OpenPose. NABNet extracts the detailed behavior characteristics of the neck from global to local and detects abnormal behavior by analyzing the resulting angle data. We deployed NABNet on the cloud and edge devices to achieve remote monitoring and abnormal behavior alarms. Finally, we applied the resulting NABNet-based IoT system for abnormal behavior detection in order to evaluate its effectiveness. The experimental results show that our system can effectively detect abnormal neck behavior and raise alarms on the cloud platform, with the highest accuracy reaching 94.13%.
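The angle-based decision step described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' code: the keypoint pair (shoulder midpoint and head) and the 30° threshold are assumptions for illustration only.

```python
import math

def neck_angle_deg(shoulder_mid, head):
    """Angle (degrees) between the neck vector (shoulder midpoint -> head)
    and the vertical axis; image y grows downward."""
    dx = head[0] - shoulder_mid[0]
    dy = head[1] - shoulder_mid[1]
    # an upright neck has the head directly above the shoulders: dx = 0, dy < 0
    return math.degrees(math.atan2(abs(dx), -dy))

def is_abnormal(angle_deg, threshold_deg=30.0):
    """Hypothetical decision rule: flag neck tilts beyond the threshold."""
    return angle_deg > threshold_deg
```

In a deployment like the one described, the keypoints would come from the Lightweight OpenPose branch and the flag would trigger the cloud-side alarm.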

  • Article type: Journal Article
    Simultaneous Localization and Mapping (SLAM) is one of the key technologies with which to address the autonomous navigation of mobile robots, utilizing environmental features to determine a robot's position and create a map of its surroundings. Currently, visual SLAM algorithms typically yield precise and dependable outcomes in static environments, and many algorithms opt to filter out the feature points in dynamic regions. However, when there is an increase in the number of dynamic objects within the camera's view, this approach might result in decreased accuracy or tracking failures. Therefore, this study proposes a solution called YPL-SLAM based on ORB-SLAM2. The solution adds a target recognition and region segmentation module to determine the dynamic region, potential dynamic region, and static region; determines the state of the potential dynamic region using the RANSAC method with epipolar geometric constraints; and removes the dynamic feature points. It then extracts the line features of the non-dynamic region and finally performs the point-line fusion optimization process using a weighted fusion strategy, considering the image dynamic score and the number of successful feature point-line matches, thus ensuring the system's robustness and accuracy. A large number of experiments have been conducted using the publicly available TUM dataset to compare YPL-SLAM with globally leading SLAM algorithms. The results demonstrate that the new algorithm surpasses ORB-SLAM2 in terms of accuracy (with a maximum improvement of 96.1%) while also exhibiting a significantly enhanced operating speed compared to Dyna-SLAM.
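The epipolar check used to classify potentially dynamic points can be sketched as follows. This is a minimal illustration, not the YPL-SLAM implementation: in practice the fundamental matrix F is estimated by RANSAC over matched features, whereas here a toy F for a pure sideways camera translation is hard-coded.

```python
def epipolar_residual(F, p1, p2):
    """|p2^T F p1| for homogeneous pixel coordinates p1, p2 = (x, y, 1);
    F is a 3x3 fundamental matrix given as nested lists."""
    Fp1 = [sum(F[i][j] * p1[j] for j in range(3)) for i in range(3)]
    return abs(sum(p2[i] * Fp1[i] for i in range(3)))

def is_static_match(F, p1, p2, tol=1e-3):
    # static scene points satisfy the epipolar constraint up to noise;
    # a large residual marks the match as belonging to a dynamic object
    return epipolar_residual(F, p1, p2) < tol

# toy F for a pure sideways translation: the y coordinate must be preserved
F = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
```

A point that drifts off its epipolar line (here, one whose y changes between frames) fails the check and would be removed before pose optimization.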

  • Article type: Journal Article
    Vehicle detection is a research direction in the field of target detection and is widely used in intelligent transportation, automatic driving, urban planning, and other fields. To balance the high-speed advantage of lightweight networks and the high-precision advantage of multiscale networks, a vehicle detection algorithm based on a lightweight backbone network and a multiscale neck network is proposed. The MobileNetV3 lightweight network based on depthwise separable convolution is used as the backbone network to improve the speed of vehicle detection. The ICBAM attention mechanism module is used to strengthen the processing of the vehicle feature information detected by the backbone network to enrich the input information of the neck network. The BiFPN and ICBAM attention mechanism modules are integrated into the neck network to improve the detection accuracy of vehicles of different sizes and categories. A vehicle detection experiment on the UA-DETRAC dataset verifies that the proposed algorithm can effectively balance vehicle detection accuracy and speed. The detection accuracy is 71.19%, the number of parameters is 3.8 MB, and the detection speed is 120.02 fps, which meets the practical requirements for the parameter count, detection speed, and accuracy of a vehicle detection algorithm embedded in a mobile device.
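The parameter saving that motivates a depthwise-separable backbone such as MobileNetV3 comes from factoring a standard convolution into a per-channel depthwise step and a 1×1 pointwise step. A quick count (bias terms omitted; channel sizes are illustrative) shows the roughly k²-fold reduction:

```python
def standard_conv_params(c_in, c_out, k):
    # one k x k kernel per (input channel, output channel) pair
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    # one k x k depthwise filter per input channel, plus 1x1 pointwise mixing
    return k * k * c_in + c_in * c_out
```

For a 3×3 layer with 32 input and 64 output channels this is 18,432 versus 2,336 weights, roughly an 8× reduction, which is where the speed advantage of the lightweight backbone comes from.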

  • Article type: Journal Article
    The resolution of traffic congestion and personal safety issues holds paramount importance for human life. The ability of an autonomous driving system to navigate complex road conditions is crucial. Deep learning has greatly facilitated machine vision perception in autonomous driving. Aiming at the problem of small-target detection in traditional YOLOv5s, this paper proposes an optimized target detection algorithm. The C3 module on the algorithm's backbone is upgraded to the CBAMC3 module, introducing a novel GELU activation function and EfficiCIoU loss function, which accelerate convergence on the position loss lbox, confidence loss lobj, and classification loss lcls, enhance image learning capabilities, and address the issue of inaccurate detection of small targets. Testing with a vehicle-mounted camera on a predefined route effectively identifies road vehicles and analyzes depth position information. The avoidance model, combined with Pure Pursuit and MPC control algorithms, exhibits more stable variations in vehicle speed, front-wheel steering angle, lateral acceleration, etc., compared to the non-optimized version. The robustness of the driving system's visual avoidance functionality is enhanced, further ameliorating congestion issues and ensuring personal safety.
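EfficiCIoU belongs to the family of IoU-based box regression losses that add a center-distance penalty to plain IoU. Since the paper's exact formulation is not given in this abstract, a plain DIoU sketch (a standard member of that family, used here only to illustrate the idea) shows how the penalty is built:

```python
def diou(box_a, box_b):
    """DIoU = IoU - d^2 / c^2 for boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # squared distance between box centers
    d2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 + \
         ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    # squared diagonal of the smallest enclosing box
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 + \
         (max(ay2, by2) - min(ay1, by1)) ** 2
    return iou - d2 / c2
```

The loss is 1 minus this score; the distance term keeps gradients informative even when boxes do not overlap, which is what speeds up convergence of the position loss.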

  • Article type: Journal Article
    In order to solve the problems of slow detection speed and the large parameter count and computational volume of deep learning-based gangue target detection methods, we propose an improved algorithm for gangue target detection based on YOLOv5s. First, the lightweight network EfficientViT is used as the backbone network to increase the target detection speed. Second, C3_Faster replaces the C3 part in the head module, which reduces the model complexity. Third, the 20 × 20 feature map branch in the neck region is deleted, which further reduces the model complexity. Fourth, the CIoU loss function is replaced by the MPDIoU loss function. Finally, the introduction of the SE attention mechanism makes the model pay more attention to critical features to improve detection performance. Experimental results show that the improved coal gangue detection algorithm reduces the model size by 77.8%, the number of parameters by 78.3%, and the computational cost by 77.8%, with a 30.6% reduction in the number of frames, and can serve as a reference for intelligent coal gangue classification.
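The MPDIoU substitution mentioned above can be sketched as follows. This is our reading of the MPDIoU formulation (IoU penalized by squared distances between the two pairs of corresponding box corners, normalized by the image diagonal), offered as an illustration rather than the paper's exact code:

```python
def mpdiou(box_a, box_b, img_w, img_h):
    """IoU minus normalized squared top-left and bottom-right corner
    distances; boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    norm = img_w ** 2 + img_h ** 2
    d1 = ((ax1 - bx1) ** 2 + (ay1 - by1) ** 2) / norm  # top-left corners
    d2 = ((ax2 - bx2) ** 2 + (ay2 - by2) ** 2) / norm  # bottom-right corners
    return iou - d1 - d2
```

Because the two corner distances jointly encode center offset, width, and height mismatch, a single penalty term replaces the separate terms of CIoU.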

  • Article type: Journal Article
    In complex industrial environments, accurate recognition and localization of industrial targets are crucial. This study aims to improve the precision and accuracy of object detection in industrial scenarios by effectively fusing feature information at different scales and levels, and introducing edge detection head algorithms and attention mechanisms. We propose an improved YOLOv5-based algorithm for industrial object detection. Our improved algorithm incorporates the Crossing Bidirectional Feature Pyramid (CBiFPN), effectively addressing the information loss issue in multi-scale and multi-level feature fusion. Therefore, our method can enhance detection performance for objects of varying sizes. Concurrently, we have integrated the attention mechanism (C3_CA) into YOLOv5s to augment feature expression capabilities. Furthermore, we introduce the Edge Detection Head (EDH) method, which is adept at tackling detection challenges in scenes with occluded objects and cluttered backgrounds by merging edge information and amplifying it within the features. Experiments conducted on the modified ITODD dataset demonstrate that the original YOLOv5s algorithm achieves 82.11% and 60.98% on mAP@0.5 and mAP@0.5:0.95, respectively, with its precision and recall being 86.8% and 74.75%, respectively. The performance of the modified YOLOv5s algorithm on mAP@0.5 and mAP@0.5:0.95 has been improved by 1.23% and 1.44%, respectively, and the precision and recall have been enhanced by 3.68% and 1.06%, respectively. The results show that our method significantly boosts the accuracy and robustness of industrial target recognition and localization.

  • Article type: Journal Article
    Steel strip is an important raw material for the engineering, automotive, shipbuilding, and aerospace industries. However, during the production process, the surface of the steel strip is prone to cracks, pitting, and other defects that affect its appearance and performance. It is important to use machine vision technology to detect defects on the surface of a steel strip in order to improve its quality. To address the difficulties in classifying the fine-grained features of strip steel surface images and to improve the defect detection rate, we propose an improved YOLOv5s model called YOLOv5s-FPD (Fine Particle Detection). The SPPF-A (Spatial Pyramid Pooling Fast-Advance) module was constructed to adjust the spatial pyramid structure, and the ASFF (Adaptively Spatial Feature Fusion) and CARAFE (Content-Aware ReAssembly of FEatures) modules were introduced to improve the feature extraction and fusion capabilities of strip images. The CSBL (Convolutional Separable Bottleneck) module was also constructed, and the DCNv2 (Deformable ConvNets v2) module was introduced to improve the model's lightweight properties. The CBAM (Convolutional Block Attention Module) attention module is used to extract key and important information, further improving the model's feature extraction capability. Experimental results on the NEU_DET (NEU surface defect database) dataset show that YOLOv5s-FPD improves the mAP50 accuracy by 2.6% before data enhancement and 1.8% after SSIE (steel strip image enhancement) data enhancement, compared to the YOLOv5s prototype. It also improves the detection accuracy of all six defects in the dataset. Experimental results on the VOC2007 public dataset demonstrate that YOLOv5s-FPD improves the mAP50 accuracy by 4.6% before data enhancement, compared to the YOLOv5s prototype. Overall, these results confirm the validity and usefulness of the proposed model.

  • Article type: Journal Article
    Recognizing wheat ears plays a crucial role in predicting wheat yield. Employing deep learning methods for wheat ear identification is the mainstream method in current research and applications. However, such methods still face challenges, such as high computational parameter volume, large model weights, and slow processing speeds, making it difficult to apply them for real-time identification tasks on limited hardware resources in the wheat field. Therefore, exploring lightweight wheat ear detection methods for real-time recognition holds significant importance.
    This study proposes a lightweight method for detecting and counting wheat ears based on YOLOv5s. It utilizes the ShuffleNetV2 lightweight convolutional neural network to optimize the YOLOv5s model by reducing the number of parameters and simplifying the complexity of the calculation processes. In addition, a lightweight upsampling operator, content-aware reassembly of features, is introduced in the feature pyramid structure to eliminate the impact of the lightweight process on the model detection performance. This approach aims to improve the spatial resolution of the feature images, enhance the effectiveness of the perceptual field, and reduce information loss. Finally, by introducing the dynamic target detection head, the shape of the detection head and the feature extraction strategy can be dynamically adjusted, and the detection accuracy can be improved when encountering wheat ears with large-scale changes, diverse shapes, or significant orientation variations.
    This study uses the global wheat head detection dataset and incorporates the local experimental dataset to improve the robustness and generalization of the proposed model. The weight, FLOPs and mAP of this model are 2.9 MB, 2.5 × 10⁹ and 94.8%, respectively. The linear fitting determination coefficients R² for the model test results and actual values of the global wheat head detection dataset and the local experimental site are 0.94 and 0.97, respectively. The improved lightweight model can better meet the requirements of precision wheat ear counting and play an important role in embedded systems, mobile devices, or other hardware systems with limited computing resources.
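The reported determination coefficients (R² = 0.94 and 0.97) measure how well the model's ear counts track the actual counts under a linear fit. R² for a predicted-vs-actual comparison is computed as:

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot
```

A value of 1.0 means the predicted counts match the actual counts exactly; values near 0 mean the predictions are no better than the mean count.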

  • Article type: Journal Article
    With continuously increasing labor costs, an urgent need for automated apple-picking equipment has emerged in the agricultural sector. Prior to apple harvesting, it is imperative that the equipment not only accurately locates the apples, but also discerns the graspability of the fruit. While numerous studies on apple detection have been conducted, the challenges related to determining apple graspability remain unresolved.
    This study introduces a method for detecting multi-occluded apples based on an enhanced YOLOv5s model, with the aim of identifying the type of apple occlusion in complex orchard environments and determining apple graspability. Using Bootstrap Your Own Latent (BYOL) and knowledge transfer (KT) strategies, we effectively enhance the classification accuracy for multi-occluded apples while reducing data production costs. A selective kernel (SK) module is also incorporated, enabling the network model to more precisely identify various apple occlusion types. To evaluate the performance of our network model, we define three key metrics: APGA, APTUGA, and APUGA, representing the average detection accuracy for graspable, temporarily ungraspable, and ungraspable apples, respectively.
    Experimental results indicate that the improved YOLOv5s model performs exceptionally well, achieving detection accuracies of 94.78%, 93.86%, and 94.98% for APGA, APTUGA, and APUGA, respectively.
    Compared to current lightweight network models such as YOLOX-s and YOLOv7, our proposed method demonstrates significant advantages across multiple evaluation metrics. In future research, we intend to integrate fruit posture and occlusion detection to further enhance the visual perception capabilities of apple-picking equipment.

  • Article type: Journal Article
    The hoist cage is used to lift miners in a coal mine's auxiliary shaft. Monitoring miners' unsafe behaviors and their status in the hoist cage is crucial to production safety in coal mines. In this study, a visual detection model is proposed to estimate the number and categories of miners, and to identify whether the miners are wearing helmets and whether they have fallen in the hoist cage. A dataset with eight categories of miners' statuses in hoist cages was developed for training and validating the model. Using the dataset, the classical models were trained for comparison, from which the YOLOv5s model was selected to be the basic model. Due to small-sized targets, poor lighting conditions, coal dust, and occlusion, the detection accuracy of the YOLOv5s model was only 89.2%. To obtain better detection accuracy, the k-means++ clustering algorithm, a BiFPN-based feature fusion network, the convolutional block attention module (CBAM), and a CIoU loss function were introduced to improve the YOLOv5s model, and an attentional multi-scale cascaded feature fusion-based YOLOv5s model (AMCFF-YOLOv5s) was subsequently developed. The training results on the self-built dataset indicate that its detection accuracy increased to 97.6%. Moreover, the AMCFF-YOLOv5s model was proven to be robust to noise and light.
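The k-means++ step mentioned above is typically used to cluster ground-truth box (width, height) pairs into anchor shapes. The seeding rule, which spreads the initial centres across the data, can be sketched as follows (a generic illustration on toy 2-D points, not the paper's code):

```python
import random

def kmeans_pp_seeds(points, k, seed=0):
    """k-means++ seeding: each subsequent centre is drawn with probability
    proportional to its squared distance from the nearest chosen centre."""
    rng = random.Random(seed)
    centres = [rng.choice(points)]
    while len(centres) < k:
        # squared distance of every point to its nearest existing centre
        d2 = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centres)
              for px, py in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for point, weight in zip(points, d2):
            acc += weight
            if acc >= r:
                centres.append(point)
                break
    return centres
```

After seeding, ordinary k-means iterations refine the centres; for anchor boxes, an IoU-based distance is often substituted for the Euclidean one used in this sketch.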
