YOLOv8

  • Article type: Journal Article
    Periodontal disease is a significant global oral health problem. Radiographic staging is critical in determining periodontitis severity and treatment requirements. This study aims to automatically stage periodontal bone loss with a deep learning approach applied to bite-wing images. A total of 1752 bite-wing images were used for the study. Radiological examinations were classified into 4 groups: healthy (normal), no bone loss; stage I (mild destruction), bone loss in the coronal third (< 15%); stage II (moderate destruction), bone loss in the coronal third, from 15 to 33%; stage III-IV (severe destruction), bone loss extending from the middle third to the apical third, with furcation destruction (> 33%). All images were resized to 512 × 400 using bilinear interpolation. The data were divided into 80% training/validation and 20% testing. The classification module of the YOLOv8 deep learning model was used for the artificial intelligence-based classification of the images. Based on the four class labels, it was trained with fivefold cross-validation after transfer learning and fine-tuning. After training, the 20% test split, which the system had never seen, was analyzed using the artificial intelligence weights obtained in each cross-validation fold. Training and test results were calculated with average accuracy, precision, recall, and F1-score performance metrics. Test images were analyzed with Eigen-CAM explainability heat maps. In classifying bite-wing images as healthy, mild destruction, moderate destruction, and severe destruction, training performance results were 86.100% accuracy, 84.790% precision, 82.350% recall, and 84.411% F1-score, and test performance results were 83.446% accuracy, 81.742% precision, 80.883% recall, and 81.090% F1-score. The deep learning model gave successful results in staging periodontal bone loss in bite-wing images. Classification scores were relatively high for normal (no bone loss) and severe bone loss, as these are more clearly visible than mild and moderate damage.
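The accuracy, precision, recall, and F1 figures above are macro-averaged over the four stages. A minimal sketch of that computation from a confusion matrix; the class order (healthy / mild / moderate / severe) and the counts are illustrative, not values from the paper:

```python
# Macro-averaged classification metrics from a confusion matrix,
# as reported for the four-class bone-loss staging task.

def macro_metrics(cm):
    """cm[i][j] = number of samples with true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    accuracy = correct / total
    precisions, recalls, f1s = [], [], []
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp   # predicted c, true other
        fn = sum(cm[c]) - tp                        # true c, predicted other
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        precisions.append(p); recalls.append(r); f1s.append(f)
    k = n
    return accuracy, sum(precisions) / k, sum(recalls) / k, sum(f1s) / k

# Hypothetical counts for healthy / mild / moderate / severe:
cm = [[50, 3, 1, 0],
      [4, 40, 6, 1],
      [1, 7, 38, 5],
      [0, 1, 4, 45]]
acc, prec, rec, f1 = macro_metrics(cm)
```

Consistent with the paper's observation, the off-diagonal mass in such matrices tends to sit between the adjacent mild and moderate stages.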

  • Article type: Journal Article
    Defect detection in pharmaceutical blister packages is a highly challenging task: defects that arise in tablets during manufacturing must be detected accurately. Conventional defect detection methods rely on human intervention to check the quality of tablets within the blister packages, which is inefficient, time-consuming, and increases labor costs. To mitigate this issue, the YOLO family is widely used in many industries for real-time defect detection in continuous production. To enhance feature extraction capability and reduce computational overhead in a real-time environment, CBS-YOLOv8 is proposed by enhancing the YOLOv8 model. In the proposed CBS-YOLOv8, coordinate attention is introduced to improve feature extraction by capturing spatial and cross-channel information while maintaining long-range dependencies. BiFPN (weighted bi-directional feature pyramid network) is also introduced into YOLOv8 to enhance feature fusion at each convolution layer and reduce the loss of fine-grained information. The model's efficiency is enhanced through the implementation of SimSPPF (simple spatial pyramid pooling fast), which reduces computational demands and model complexity, resulting in improved speed. A custom dataset containing defective tablet images is used to train the proposed model. The performance of the CBS-YOLOv8 model is then evaluated by comparing it with various other models. Experimental results on the custom dataset reveal that the CBS-YOLOv8 model achieves an mAP of 97.4% and an inference speed of 79.25 FPS, outperforming the other models. The proposed model is also evaluated on the SESOVERA-ST saline bottle fill-level monitoring dataset, achieving an mAP50 of 99.3%. This demonstrates that CBS-YOLOv8 provides an optimized inspection process, enabling prompt detection and correction of defects and thus bolstering quality assurance practices in manufacturing settings.
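The mAP figures quoted above are means of per-class average precision (AP). A simplified sketch of the per-class AP computation with all-point interpolation; the detections are (confidence, is-true-positive) pairs and `n_gt` is the number of ground-truth objects, both illustrative inputs:

```python
# Average precision for one class from detections ranked by confidence —
# the per-class quantity behind mAP. Matching detections to ground truth
# (e.g. at IoU 0.5) is assumed to have been done already.

def average_precision(detections, n_gt):
    detections = sorted(detections, key=lambda d: -d[0])
    tp = fp = 0
    points = []  # (recall, precision) as we walk down the ranking
    for conf, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        points.append((tp / n_gt, tp / (tp + fp)))
    # All-point interpolation: area under the precision envelope.
    ap, prev_recall = 0.0, 0.0
    for i, (r, p) in enumerate(points):
        p_env = max(q for _, q in points[i:])  # max precision at recall >= r
        ap += (r - prev_recall) * p_env
        prev_recall = r
    return ap

dets = [(0.95, True), (0.90, True), (0.60, False), (0.50, True)]
ap = average_precision(dets, n_gt=3)
```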

  • Article type: Journal Article
    Lemon, as an important cash crop with rich nutritional value, holds significant cultivation importance and market demand worldwide. However, lemon diseases seriously impact the quality and yield of lemons, necessitating their early detection for effective control. This paper addresses this need by collecting a dataset of lemon diseases consisting of 726 images captured under varying light levels, growth stages, shooting distances, and disease conditions. By cropping the high-resolution images, the dataset is expanded to 2022 images, comprising 4441 healthy lemons and 718 diseased lemons, with approximately 1-6 targets per image. We then propose a novel model, Lemon Surface Disease YOLO (LSD-YOLO), which integrates Switchable Atrous Convolution (SAConv) and the Convolutional Block Attention Module (CBAM), along with the design of C2f-SAC and the addition of a small-target detection layer, to enhance the extraction of key features and the fusion of features at different scales. The experimental results demonstrate that the proposed LSD-YOLO achieves an accuracy of 90.62% on the collected dataset, with mAP@50-95 reaching 80.84%. Compared with the original YOLOv8n model, both the mAP@50 and mAP@50-95 metrics are enhanced. Therefore, the LSD-YOLO model proposed in this study provides more accurate recognition of healthy and diseased lemons, contributing effectively to solving the lemon disease detection problem.
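The dataset expansion by cropping high-resolution images can be sketched as generating fixed-size crop windows that cover the full frame. The window size, stride, and image dimensions below are illustrative assumptions, not values from the paper:

```python
# Slice a high-resolution image into fixed-size crops, always covering
# the right and bottom edges — one common way to expand a detection
# dataset from a small number of large images.

def crop_windows(img_w, img_h, win, stride):
    """Return (x0, y0, x1, y1) crop boxes covering the whole image."""
    xs = list(range(0, max(img_w - win, 0) + 1, stride))
    ys = list(range(0, max(img_h - win, 0) + 1, stride))
    # Ensure the right/bottom edges are covered even if stride overshoots.
    if xs[-1] + win < img_w:
        xs.append(img_w - win)
    if ys[-1] + win < img_h:
        ys.append(img_h - win)
    return [(x, y, x + win, y + win) for y in ys for x in xs]

boxes = crop_windows(img_w=1024, img_h=768, win=512, stride=256)
```

Object annotations would then be clipped to each window, which is also how one image with several lemons becomes multiple training samples with 1-6 targets each.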

  • Article type: Journal Article
    To address the decreased detection accuracy, false detections, and missed detections caused by scale differences between near and distant targets and by environmental factors (such as lighting and water waves) in surface target detection tasks for uncrewed vessels, the YOLOv8-MSS algorithm is proposed to optimize water surface target detection. By adding a small target detection head, the model becomes more sensitive and accurate in recognizing small targets. To reduce noise interference caused by complex water surface environments during downsampling in the backbone network, C2f_MLCA is used to enhance the robustness and stability of the model. The lightweight SENetV2 module is employed in the neck component to improve the model's performance in detecting small targets and its anti-interference capability. The SIoU loss function enhances detection accuracy and bounding box regression precision through shape awareness and geometric information integration. Experiments on the publicly available FloW-Img dataset show that the improved algorithm achieves an mAP@0.5 of 87.9% and an mAP@0.5:0.95 of 47.6%, improvements of 5% and 2.6%, respectively, compared to the original model.
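Both metrics above rest on plain intersection-over-union between predicted and ground-truth boxes; SIoU then adds angle, distance, and shape cost terms on top of it. Only the common IoU core is sketched here, for axis-aligned boxes in (x0, y0, x1, y1) form:

```python
# Plain IoU between two axis-aligned boxes — the overlap measure behind
# the mAP@0.5 and mAP@0.5:0.95 scores quoted above.

def iou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix1 - ix0), max(0.0, iy1 - iy0)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

v = iou((0, 0, 2, 2), (1, 1, 3, 3))  # intersection 1, union 7
```

mAP@0.5 counts a detection as correct when this value exceeds 0.5; mAP@0.5:0.95 averages AP over thresholds from 0.5 to 0.95 in steps of 0.05.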

  • Article type: Journal Article
    This paper addresses the challenge of detecting unknown or unforeseen obstacles in railway track transportation, proposing an innovative detection strategy that integrates an incremental clustering algorithm with lightweight segmentation techniques. In the detection phase, the paper innovatively employs the incremental clustering algorithm as a core method, combined with dilation and erosion theories, to expand the boundaries of point cloud clusters, merging adjacent point cloud elements into unified clusters. This method effectively identifies and connects spatially adjacent point cloud clusters while efficiently eliminating noise from target object point clouds, thereby achieving more precise recognition of unknown obstacles on the track. Furthermore, the effective integration of this algorithm with lightweight shared convolutional semantic segmentation algorithms enables accurate localization of obstacles. Experimental results using two combined public datasets demonstrate that the obstacle detection average recall rate of the proposed method reaches 90.3%, significantly enhancing system reliability. These findings indicate that the proposed detection strategy effectively improves the accuracy and real-time performance of obstacle recognition, thereby presenting important practical application value for ensuring the safe operation of railway tracks.
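The merging step — expanding cluster boundaries so that adjacent point-cloud elements join one cluster — can be illustrated with a toy 2-D version: points whose dilated neighborhoods overlap are unioned together. The 2-D points and radius are illustrative; the paper operates on LiDAR point clouds with its own incremental algorithm:

```python
# Toy sketch of dilation-based cluster merging: treat each point as a
# disc of the given radius; points whose dilated discs touch end up in
# the same cluster (union-find).

def cluster(points, radius):
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if dx * dx + dy * dy <= (2 * radius) ** 2:  # dilated discs touch
                parent[find(i)] = find(j)
    labels = [find(i) for i in range(len(points))]
    remap = {}  # renumber labels to 0..k-1 in first-seen order
    return [remap.setdefault(l, len(remap)) for l in labels]

labels = cluster([(0, 0), (1, 0), (10, 10), (10.5, 10)], radius=1.0)
```

Isolated noise points stay in singleton clusters and can be discarded by a minimum-size filter, which is the erosion-like half of the idea.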

  • Article type: Journal Article
    YOLOv8, as an efficient object detection method, can swiftly and precisely identify objects within images. However, traditional algorithms encounter difficulties when detecting small objects in remote sensing images, such as missing information, background noise, and interactions among multiple objects in complex scenes, which may affect performance. To tackle these challenges, we propose an enhanced algorithm optimized for detecting small objects in remote sensing images, named HP-YOLOv8. First, we design the C2f-D-Mixer (C2f-DM) module as a replacement for the original C2f module. This module integrates both local and global information, significantly improving the ability to detect features of small objects. Second, we introduce a feature fusion technique based on attention mechanisms, named Bi-Level Routing Attention in Gated Feature Pyramid Network (BGFPN). This technique utilizes an efficient feature aggregation network and reparameterization technology to optimize information interaction between feature maps of different scales, and through the Bi-Level Routing Attention (BRA) mechanism it effectively captures critical feature information of small objects. Finally, we propose the Shape Mean Perpendicular Distance Intersection over Union (SMPDIoU) loss function. The method comprehensively considers the shape and size of detection boxes, enhances the model's focus on the attributes of detection boxes, and provides a more accurate bounding box regression loss calculation. To demonstrate our approach's efficacy, we conducted comprehensive experiments across the RSOD, NWPU VHR-10, and VisDrone2019 datasets. The experimental results show that HP-YOLOv8 achieves 95.11%, 93.05%, and 53.49% on the mAP@0.5 metric, and 72.03%, 65.37%, and 38.91% on the more stringent mAP@0.5:0.95 metric, respectively.

  • Article type: Journal Article
    Grape fruit and stem detection plays a crucial role in automated grape harvesting. However, the dense arrangement of fruits in vineyards and the similarity in color between grape stems and branches pose challenges, often leading to missed or false detections in most existing models. Furthermore, these models' substantial parameters and computational demands result in slow detection speeds and difficulty deploying them on mobile devices. Therefore, we propose a lightweight TiGra-YOLOv8 model based on YOLOv8n. Initially, we integrated the Attentional Scale Fusion (ASF) module into the neck, enhancing the network's ability to extract grape features in dense orchards. Subsequently, we employed Adaptive Training Sample Selection (ATSS) as the label-matching strategy to improve the quality of positive samples and address the challenge of detecting grape stems with similar colors. We then utilized the Weighted Interpolation of Sequential Evidence for Intersection over Union (Wise-IoU) loss function to overcome the limitations of CIoU, which does not consider the geometric attributes of targets, thereby enhancing detection efficiency. Finally, the model's size was reduced through channel pruning. The results indicate that the TiGra-YOLOv8 model's mAP(0.5) increased by 3.33% compared to YOLOv8n, with a 7.49% improvement in detection speed (FPS), a 52.19% reduction in parameter count, and a 51.72% decrease in computational demand, while also reducing the model size by 45.76%. The TiGra-YOLOv8 model not only improves detection accuracy for dense and challenging targets but also reduces model parameters and speeds up detection, offering significant benefits for grape detection.
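The channel-pruning step can be sketched with the common L1-norm criterion: rank channels by the magnitude of their weights and keep only the strongest fraction. The abstract does not state which criterion the paper uses, so this is an illustrative assumption:

```python
# L1-norm channel pruning sketch: score each channel by the sum of the
# absolute values of its weights, keep the top keep_ratio fraction.

def channels_to_keep(channel_weights, keep_ratio):
    """Return sorted indices of the channels to retain."""
    scores = [sum(abs(w) for w in ch) for ch in channel_weights]
    k = max(1, int(len(scores) * keep_ratio))
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    return sorted(ranked[:k])

# Four channels' flattened weights (illustrative values):
weights = [[0.1, -0.2], [0.9, 0.8], [0.05, 0.0], [0.4, -0.5]]
keep = channels_to_keep(weights, keep_ratio=0.5)
```

Dropping the discarded channels (and the matching slices of the next layer) is what yields the parameter and FLOP reductions the abstract reports, usually followed by fine-tuning to recover accuracy.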

  • Article type: Journal Article
    Aerial image target detection is essential for urban planning, traffic monitoring, and disaster assessment. However, existing detection algorithms struggle with small target recognition and accuracy in complex environments. To address this issue, this paper proposes an improved model based on YOLOv8, named MPE-YOLO. Initially, a multilevel feature integrator (MFI) module is employed to enhance the representation of small target features, carefully moderating information loss during the feature fusion process. For the backbone network of the model, a perception enhancement convolution (PEC) module is introduced to replace traditional convolutional layers, thereby expanding the network's fine-grained feature processing capability. Furthermore, an enhanced scope-C2f (ES-C2f) module is designed, utilizing channel expansion and the stacking of multiscale convolutional kernels to enhance the network's ability to capture small target details. In a series of experiments on the VisDrone, RSOD, and AI-TOD datasets, the model not only demonstrated superior performance in aerial image detection tasks compared to existing advanced algorithms but also achieved a lightweight model structure. The experimental results demonstrate the potential of MPE-YOLO in enhancing the accuracy and operational efficiency of aerial target detection. Code will be available online (https://github.com/zhanderen/MPE-YOLO).

  • Article type: Journal Article
    In the field of industrial safety, wearing helmets plays a vital role in ensuring workers' health. To address the complex backgrounds of industrial environments and the misdetections and missed detections of small helmet-wearing targets caused by differences in distance, an improved YOLOv8 safety helmet wearing detection network is proposed. It enhances detail capture, improves multiscale feature processing, and increases the accuracy of small target detection by introducing a dilation-wise residual attention module, atrous spatial pyramid pooling, and a normalized Wasserstein distance loss function. Experiments were conducted on the SHWD dataset, and the results showed that the mAP of the improved network rose to 92.0%, exceeding traditional target detection networks in precision, recall, and other key metrics. These findings further improve the detection of helmet wearing in complex environments and greatly enhance detection accuracy.
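The normalized Wasserstein distance treats each box as a 2-D Gaussian and measures similarity between the Gaussians, which is gentler on tiny boxes than IoU. A minimal sketch for boxes in (cx, cy, w, h) form; the normalizing constant C is a dataset-dependent hyperparameter, and the value below is an illustrative assumption:

```python
import math

# Normalized Wasserstein distance (NWD) between two boxes modeled as
# Gaussians N(center, diag((w/2)^2, (h/2)^2)) — the similarity measure
# underlying the loss function mentioned above.

def nwd(box_a, box_b, C=12.8):
    (xa, ya, wa, ha), (xb, yb, wb, hb) = box_a, box_b
    # Squared 2-Wasserstein distance between the two Gaussians.
    w2_sq = ((xa - xb) ** 2 + (ya - yb) ** 2
             + (wa / 2 - wb / 2) ** 2 + (ha / 2 - hb / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / C)  # map distance into (0, 1]

same = nwd((10, 10, 4, 4), (10, 10, 4, 4))  # identical boxes -> 1.0
```

Unlike IoU, this stays smooth and non-zero for small boxes that barely overlap, which is why it helps with small-target regression.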

  • Article type: Journal Article
    In large public places such as railway stations and airports, dense pedestrian detection is important for safety and security. Deep learning methods provide relatively effective solutions but still face problems such as feature extraction difficulties, multi-scale image variations, and high missed-detection rates, which pose great challenges to research in this field. In this paper, we propose GR-yolo, an improved dense pedestrian detection algorithm based on YOLOv8. GR-yolo introduces the RepC3 module to optimize the backbone network, which enhances feature extraction, and adopts an aggregation-distribution mechanism to reconstruct the YOLOv8 neck structure, fusing multi-level information to achieve a more efficient exchange of information and enhance the detection ability of the model. Meanwhile, the GIoU loss calculation is used to help GR-yolo converge better, improve the detection accuracy of target positions, and reduce missed detections. Experiments show that GR-yolo achieves improved detection performance over YOLOv8, with a 3.1% improvement in mean detection accuracy on the WiderPerson dataset, 7.2% on the CrowdHuman dataset, and 11.7% on the People Detection Images dataset. Therefore, the proposed GR-yolo algorithm is suitable for dense, multi-scale, scene-variable pedestrian detection, and the improvements also provide a new idea for dense pedestrian detection in real scenes.
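GIoU, credited above for better convergence, extends IoU with a penalty based on the smallest box enclosing both inputs, so non-overlapping boxes still receive a useful gradient. A minimal sketch for axis-aligned boxes in (x0, y0, x1, y1) form:

```python
# GIoU = IoU - (area of enclosing box not covered by the union) / (area
# of enclosing box). Ranges over (-1, 1]; negative when boxes are far
# apart, which is what gives non-overlapping pairs a training signal.

def giou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest axis-aligned box enclosing both a and b.
    cx0, cy0 = min(a[0], b[0]), min(a[1], b[1])
    cx1, cy1 = max(a[2], b[2]), max(a[3], b[3])
    c_area = (cx1 - cx0) * (cy1 - cy0)
    return iou - (c_area - union) / c_area

g = giou((0, 0, 2, 2), (2, 0, 4, 2))  # touching boxes: IoU 0, GIoU 0
```

The regression loss is then 1 - GIoU, so predictions that drift away from a crowded ground-truth box are still pulled back even when the plain IoU term is flat at zero.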