CARAFE

  • Article type: Journal Article
    The precise detection of weeds in the field is a prerequisite for implementing weed management. However, the similar color and morphology of wheat and weeds, together with occlusion between them, pose a challenge to weed detection. In this study, CSCW-YOLOv7, a model based on an improved YOLOv7 architecture, was proposed to identify five types of weeds in complex wheat fields.
    First, a dataset was constructed for five commonly found weeds, namely Descurainia sophia, thistle, golden saxifrage, shepherd's purse herb, and Artemisia argyi. Second, a wheat weed detection model called CSCW-YOLOv7 was proposed to achieve accurate identification and classification of wheat weeds. In CSCW-YOLOv7, the CARAFE operator was introduced as the up-sampling algorithm to improve the recognition of small targets. Then, a Squeeze-and-Excitation (SE) network was added to the Efficient Layer Aggregation Network (ELAN) module in the backbone and to the concatenation layer in the feature fusion module to enhance important weed features and suppress irrelevant ones. In addition, the Contextual Transformer (CoT) module, a transformer-based architectural design, was used to capture global information and strengthen self-attention by mining contextual information between neighboring keys. Finally, the Wise Intersection over Union (WIoU) loss function, which introduces a dynamic non-monotonic focusing mechanism, was employed to better predict the bounding boxes of occluded weeds.
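    As a concrete reference for the attention component above, the following is a minimal sketch of a standard Squeeze-and-Excitation block of the kind typically attached to a backbone module or a concatenation layer; the class name, reduction ratio, and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal Squeeze-and-Excitation (SE) block sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: one statistic per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),                    # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                         # re-weight channels of the feature map

# Example: re-weighting a feature map such as the output of an ELAN block.
feat = torch.randn(2, 256, 40, 40)
print(SEBlock(256)(feat).shape)  # torch.Size([2, 256, 40, 40])
```

    The squeeze step pools each channel to a single value and the excitation step learns per-channel weights, so informative channels are amplified and irrelevant ones suppressed, matching the role the abstract assigns to the SE modules.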
    The ablation experiment results showed that CSCW-YOLOv7 achieved the best performance among the compared models. The precision, recall, and mean average precision (mAP) values of CSCW-YOLOv7 were 97.7%, 98%, and 94.4%, respectively. Compared with the baseline YOLOv7, the improved CSCW-YOLOv7 obtained precision, recall, and mAP increases of 1.8%, 1%, and 2.1%, respectively. Meanwhile, the parameters were compressed by 10.7%, a 3.8-MB reduction, resulting in a 10% decrease in floating-point operations (FLOPs). Gradient-weighted Class Activation Mapping (Grad-CAM) visualizations suggested that CSCW-YOLOv7 learns a more representative set of features that helps it better locate weeds of different scales in complex field environments. In addition, the performance of CSCW-YOLOv7 was compared with widely used deep learning models, and the results indicated that CSCW-YOLOv7 distinguishes overlapped and small-scale weeds better. The overall results suggest that CSCW-YOLOv7 is a promising tool for weed detection with great potential for field applications.
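    To make the Grad-CAM analysis above concrete, the following is a minimal sketch of how a Grad-CAM heatmap is typically computed: the gradients of a class score with respect to a chosen feature map are global-average-pooled into channel weights, and the weighted feature map is summed, rectified, and upsampled to image size. The backbone (a torchvision ResNet-18), the hooked layer, and the random input are placeholders, not the authors' detector or data.

```python
# Minimal Grad-CAM sketch (placeholder backbone and input, illustrative only).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
store = {}

def save_activation(module, inputs, output):
    output.retain_grad()            # keep the gradient of this intermediate feature map
    store["feat"] = output

model.layer4.register_forward_hook(save_activation)  # last convolutional stage

x = torch.randn(1, 3, 224, 224)                       # placeholder image tensor
score = model(x)[0].max()                             # score of the top-scoring class
score.backward()

feat, grad = store["feat"], store["feat"].grad        # both (1, 512, 7, 7)
weights = grad.mean(dim=(2, 3), keepdim=True)         # pooled gradients -> channel weights
cam = F.relu((weights * feat).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized heatmap in [0, 1]
print(cam.shape)                                       # torch.Size([1, 1, 224, 224])
```

    High heatmap values mark the regions the network relied on, which is how the authors argue that CSCW-YOLOv7 attends to weeds of different scales rather than to background.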

  • Article type: Journal Article
    The detection of smoking behavior is an emerging field that faces challenges in identifying small, frequently occluded objects, such as cigarette butts, with existing deep learning technologies. These challenges have led to unsatisfactory detection accuracy and poor model robustness.
    To overcome these issues, this paper introduces a novel smoking detection algorithm, YOLOv8-MNC, which builds on the YOLOv8 network and adds a specialized small-target detection layer. The YOLOv8-MNC algorithm employs three key strategies: (1) it utilizes the NWD loss to mitigate the effect that minor deviations in object position have on IoU, thereby improving training accuracy; (2) it incorporates the Multi-head Self-Attention mechanism (MHSA) to strengthen the network's global feature learning capacity; and (3) it implements the lightweight, general-purpose up-sampling operator CARAFE in place of the conventional nearest-neighbor interpolation up-sampling module, minimizing the loss of feature information during up-sampling.
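    For readers unfamiliar with CARAFE, the following is a simplified sketch of content-aware upsampling in its spirit: a light sub-network predicts a normalized reassembly kernel for every output pixel, and each output value is a weighted sum over the corresponding low-resolution neighborhood, whereas nearest-neighbor interpolation simply copies the closest value. The module name, compressed channel width, and kernel sizes are illustrative assumptions and do not reproduce the original CARAFE configuration or the paper's integration into YOLOv8.

```python
# Simplified CARAFE-style content-aware upsampling sketch (scale 2, illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CARAFEUpsample(nn.Module):
    def __init__(self, channels: int, scale: int = 2, k_up: int = 5, k_enc: int = 3, c_mid: int = 64):
        super().__init__()
        self.scale, self.k_up = scale, k_up
        self.compress = nn.Conv2d(channels, c_mid, 1)                 # channel compressor
        self.encode = nn.Conv2d(c_mid, scale**2 * k_up**2, k_enc,
                                padding=k_enc // 2)                   # kernel prediction
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # 1) predict one k_up*k_up reassembly kernel per output pixel and normalize it
        kernels = F.softmax(self.shuffle(self.encode(self.compress(x))), dim=1)
        # 2) gather the k_up*k_up neighborhood of each low-resolution source location
        patches = F.unfold(x, self.k_up, padding=self.k_up // 2).view(b, c, self.k_up**2, h, w)
        patches = F.interpolate(patches.view(b, c * self.k_up**2, h, w),
                                scale_factor=self.scale, mode="nearest")
        patches = patches.view(b, c, self.k_up**2, h * self.scale, w * self.scale)
        # 3) reassemble: weighted sum of each neighborhood with its predicted kernel
        return (patches * kernels.unsqueeze(1)).sum(dim=2)

x = torch.randn(1, 128, 20, 20)
print(CARAFEUpsample(128)(x).shape)  # torch.Size([1, 128, 40, 40])
```

    Because the reassembly kernels are predicted from the feature content itself, fine details of small objects are preserved rather than smeared, which is the motivation the abstract gives for replacing nearest-neighbor upsampling.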
    Experimental results on a customized smoking behavior dataset demonstrate a significant improvement in detection accuracy. The YOLOv8-MNC model achieved a detection accuracy of 85.887%, a 5.7% increase in mean average precision (mAP@0.5) over the previous algorithm.
    The YOLOv8-MNC algorithm is a valuable step toward resolving existing problems in smoking behavior detection. Its improved detection accuracy and robustness indicate potential applicability in related fields and mark a meaningful advance in smoking behavior detection. Future work will focus on refining the technique and exploring its application in broader contexts.

  • Article type: Journal Article
    To meet the demand for fast and accurate automatic detection in railway tunnel equipment maintenance in the era of high-speed rail, and to cope with the high-dynamic, low-illumination imaging environment created by strong light at tunnel exits, we propose an automatic inspection solution based on panoramic imaging and deep-learning object recognition. We installed a hyperboloid catadioptric panoramic imaging system on an inspection vehicle to obtain a large field of view and to mask the high-dynamic phenomena at the tunnel exit, and we propose a YOLOv5-CCFE object detection model for railway equipment recognition. Experimental results show that the YOLOv5-CCFE model reaches an mAP@0.5 of 98.6%, an mAP@0.5:0.95 of 68.9%, and 158 FPS, which meets the requirements for automatic inspection of railway tunnel equipment along the line and has high practical value.