Timely and accurate identification of peanut pests and diseases, coupled with effective countermeasures, is pivotal for ensuring high-quality and efficient peanut production. Despite the prevalence of pests and diseases in peanut cultivation, challenges such as minute disease spots, the elusive nature of pests, and intricate environmental conditions often diminish identification accuracy and efficiency. Moreover, continuous monitoring of peanut health in real-world agricultural settings demands computationally efficient solutions, whereas traditional deep learning models often require substantial computational resources, limiting their practical applicability. In response to these challenges, we introduce LSCDNet (Lightweight Sandglass and Coordinate Attention Network), a streamlined model derived from DenseNet. LSCDNet preserves only the transition layers to reduce feature map dimensionality, simplifying the model's complexity. The inclusion of a sandglass block bolsters feature extraction capability, mitigating potential information loss due to dimensionality reduction, and the incorporation of coordinate attention addresses the loss of positional information during feature extraction. Experimental results show that LSCDNet achieves an accuracy, precision, recall, and F1 score of 96.67%, 98.05%, 95.56%, and 96.79%, respectively, while maintaining a compact parameter count of merely 0.59M. Compared with established models such as MobileNetV1, MobileNetV2, NASNetMobile, DenseNet-121, InceptionV3, and Xception, LSCDNet delivers accuracy gains of 2.65%, 4.87%, 8.71%, 5.04%, 6.32%, and 8.2%, respectively, with substantially fewer parameters. Finally, we deployed LSCDNet on a Raspberry Pi for practical testing, achieving an average recognition accuracy of 85.36% and thereby meeting real-world operational requirements.
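To make the two building blocks named above concrete, the sketch below outlines a sandglass block and a coordinate attention module. It is a minimal illustration, not the paper's implementation: the choice of PyTorch, the reduction ratios (16 and 4), the ReLU6/Hardswish activations, and the module names are all assumptions, and the abstract does not specify how many such blocks LSCDNet stacks or where attention sits relative to the retained DenseNet transition layers.

```python
# Minimal PyTorch sketch of a sandglass block and coordinate attention.
# All hyperparameters (reduction ratios, activations) are illustrative
# assumptions; LSCDNet's exact configuration is not given in the abstract.
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Pools along H and W separately so positional information survives,
    then re-weights the feature map with direction-aware attention."""

    def __init__(self, channels: int, reduction: int = 16):  # reduction assumed
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        xh = self.pool_h(x)                           # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)       # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                      # attention over H
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # attention over W
        return x * ah * aw                             # broadcast re-weighting


class SandglassBlock(nn.Module):
    """Sandglass residual: depthwise -> pointwise reduce -> pointwise
    expand -> depthwise, keeping the shortcut on the wide (high-dim) ends."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1, reduction: int = 4):
        super().__init__()
        mid = max(1, in_ch // reduction)
        self.use_shortcut = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, 1, 1, groups=in_ch, bias=False),  # depthwise
            nn.BatchNorm2d(in_ch), nn.ReLU6(inplace=True),
            nn.Conv2d(in_ch, mid, 1, bias=False),                        # pointwise reduce
            nn.BatchNorm2d(mid),
            nn.Conv2d(mid, out_ch, 1, bias=False),                       # pointwise expand
            nn.BatchNorm2d(out_ch), nn.ReLU6(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride, 1, groups=out_ch, bias=False),  # depthwise
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.block(x)
        return x + out if self.use_shortcut else out


if __name__ == "__main__":
    # Shape check: both modules preserve spatial size at stride 1.
    x = torch.randn(1, 64, 56, 56)
    x = SandglassBlock(64, 64)(x)    # (1, 64, 56, 56), shortcut active
    x = CoordinateAttention(64)(x)   # same shape, position-aware re-weighting
    print(x.shape)
```

The two blocks match the roles the abstract assigns them: the sandglass structure keeps its residual connection between the high-dimensional ends of the bottleneck, which is one way to limit the information loss that dimensionality reduction can cause, while coordinate attention factorizes global pooling into two one-dimensional pools so that positional cues are retained rather than averaged away.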