agricultural automation

  • Article type: Journal Article
    Soil parameters are a crucial factor in increasing agricultural production. Even though Bangladesh is heavily dependent on agriculture, little research has been done on its automation, and a vital aspect of agricultural automation is predicting soil parameters. Sensors for soil parameters are generally quite expensive, and measurements are often taken in controlled environments such as greenhouses; large-scale deployment of such expensive sensors is not feasible. This work seeks an inexpensive way to predict soil parameters such as soil moisture and soil temperature, both of which are crucial to crop growth. We focus on finding a robust relation between these soil parameters and nearby weather parameters such as humidity and temperature, irrespective of the weather. We apply different machine learning models, such as a multilayer perceptron (MLP) and random forest, to predict the soil parameters given the humidity and temperature of the surrounding environment. All experiments use a custom-made dataset of around 9000 data points covering soil moisture, soil temperature, ambient humidity, and ambient temperature, collected on an uncontrolled agricultural bed with inexpensive sensors. Our results show that an XGBoost regressor achieves the best performance, with R2 scores of 0.93 and 0.99 for soil moisture and soil temperature, respectively, suggesting a very high correlation between the weather and soil parameters. The model also yields low root mean squared and mean absolute errors of 0.037 and 0.015 for soil moisture and 0.001 and 0.0008 for soil temperature. These results show that soil parameters can indeed be inferred from the corresponding weather, which can have a great impact on large-scale agricultural automation. The dataset has been made publicly available at https://github.com/Nadimulhaque0403/Soil_parameter_prediction_dataset.
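
    A minimal, illustrative sketch of the regression setup described above: an XGBoost regressor fitted to ambient temperature and humidity to predict soil moisture, evaluated with R2, MAE, and RMSE. Synthetic stand-in data are used because the published dataset's column layout is not reproduced in this listing; this is not the authors' code.

    ```python
    import numpy as np
    from xgboost import XGBRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

    # Synthetic stand-in for the ~9000-point dataset: columns are ambient temperature (C)
    # and ambient humidity (%); replace with the published dataset for real results.
    rng = np.random.default_rng(0)
    ambient = rng.uniform([15.0, 40.0], [35.0, 95.0], size=(9000, 2))
    soil_moisture = 0.4 + 0.004 * ambient[:, 1] - 0.003 * ambient[:, 0] + rng.normal(0, 0.02, 9000)

    X_train, X_test, y_train, y_test = train_test_split(
        ambient, soil_moisture, test_size=0.2, random_state=42)

    model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.1)
    model.fit(X_train, y_train)
    pred = model.predict(X_test)

    print("R2  :", r2_score(y_test, pred))
    print("MAE :", mean_absolute_error(y_test, pred))
    print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
    ```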

  • Article type: Journal Article
    Nowadays, Convolutional Neural Network (CNN)-based deep learning methods are widely used to detect and classify fruits by defect, color, and size characteristics. In this study, two neural network detectors, based on the Single-Shot Multibox Detector (SSD) Mobilenet and Faster Region-CNN (Faster R-CNN) architectures, are employed to detect apples using a custom dataset generated from a red apple species. Each model is trained on the created dataset of 4000 apple images. With the trained models, apples are detected and counted autonomously in a commercially operated apple orchard using the developed Flying Robotic System (FRS), the aim being to allow producers to make accurate yield forecasts before entering commercial agreements. In this paper, SSD-Mobilenet and Faster R-CNN models trained on the COCO dataset, as referenced in many studies, are compared experimentally with SSD-Mobilenet and Faster R-CNN models trained on the custom dataset with learning rates ranging from 0.015 to 0.04. In the experiments, the accuracy of the proposed models increased to around 93%. Consequently, the developed Faster R-CNN model makes highly successful determinations, lowering the loss value below 0.1.
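
    As an illustration of the detection step only, the sketch below runs a COCO-pretrained torchvision Faster R-CNN on a dummy image tensor and counts confident detections. It is a generic stand-in, not the paper's custom-trained SSD-Mobilenet or Faster R-CNN apple models.

    ```python
    import torch
    import torchvision

    # COCO-pretrained detector as a stand-in for the custom-trained apple detector.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    img = torch.rand(3, 480, 640)  # replace with an orchard image tensor scaled to [0, 1]
    with torch.no_grad():
        out = model([img])[0]      # dict with "boxes", "labels", "scores"

    score_thresh = 0.5
    keep = out["scores"] > score_thresh
    print("detections above threshold:", int(keep.sum()))
    ```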

  • Article type: Journal Article
    Advancements in deep learning and computer vision have led to many effective solutions to challenging problems in agricultural automation. To improve detection precision in the autonomous harvesting of green asparagus, in this article we propose the DA-Mask RCNN model, which exploits depth information in the region proposal network. First, a deep residual network and a feature pyramid network are combined to form the backbone. Second, the DA-Mask RCNN model adds a depth filter to assist the softmax function in anchor classification. The region proposals are then further processed by the detection head. The training and test images were mainly acquired from different regions in the Yangtze River basin, and various weather and illumination conditions were covered during capture, including sunny weather, sunny but shaded conditions, cloudy weather, and daytime as well as nighttime greenhouse conditions. Performance, comparison, and ablation experiments were carried out on the five constructed datasets to verify the effectiveness of the proposed model, with precision, recall, and F1-score used to evaluate the different approaches. The overall experimental results demonstrate that the proposed DA-Mask RCNN model achieves a better balance of precision and speed than existing algorithms.
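
    The exact form of the depth filter is not described in this abstract; the toy sketch below only illustrates the general idea of gating proposal scores with depth, where all names, the gate weight, and the working range are illustrative assumptions rather than the DA-Mask RCNN implementation.

    ```python
    import numpy as np

    def depth_gate(depth_patch, near=0.3, far=1.5):
        """Down-weight a proposal whose mean depth (m) falls outside an assumed working range."""
        d = float(np.mean(depth_patch))
        return 1.0 if near <= d <= far else 0.1

    def rescore(objectness, depth_patches):
        """Combine softmax objectness scores with the depth gate."""
        gates = np.array([depth_gate(p) for p in depth_patches])
        return objectness * gates

    # Three proposals with objectness scores and aligned depth crops (metres).
    scores = np.array([0.9, 0.8, 0.7])
    patches = [np.full((8, 8), 0.6), np.full((8, 8), 3.0), np.full((8, 8), 1.0)]
    print(rescore(scores, patches))  # the too-distant middle proposal is suppressed
    ```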

  • Article type: Journal Article
    In underwater environments, ensuring people's safety is complicated, with potentially life-threatening outcomes, especially when divers must work at greater depths. To improve the available solutions for working with robots in this kind of environment, we validate a control strategy for robots retrieving objects from the seabed. The proposed control strategy is based on acceleration feedback in the system model. Using this model, reference values for position, velocity, and acceleration are estimated, and once the desired position is obtained, the position error signal can be computed. The validation was carried out with three different objects: a ball, a bottle, and a plant. The experiment used this control strategy to pick up those objects, which the robot carried for a moment to validate the stabilisation and reference-following control in terms of angle and depth. The robot was operated by a pilot from outside the pool and was guided in a teleoperated way using a camera and sonar. An advantage of this control strategy is that the underlying model is decoupled, allowing the robot to be controlled in each uncoupled plane, which is the main finding of these tests. This demonstrates that the robot can be controlled by a strategy based on a decoupled model that takes into account the robot's hydrodynamic parameters.
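
    As a rough illustration of the decoupled, acceleration-feedback idea, the sketch below simulates a single uncoupled axis (depth) with assumed gains and a unit-mass double-integrator plant standing in for the vehicle dynamics; it is not the authors' hydrodynamic model or controller.

    ```python
    def step(state, ref, gains, dt=0.02):
        """One control/plant step for a single decoupled axis (unit-mass double integrator)."""
        x, v, a_prev = state
        x_ref, v_ref, a_ref = ref
        kp, kv, ka = gains
        # Position, velocity and acceleration error feedback (gains are assumptions).
        u = kp * (x_ref - x) + kv * (v_ref - v) + ka * (a_ref - a_prev)
        a = u
        v = v + a * dt
        x = x + v * dt
        return (x, v, a)

    state = (0.0, 0.0, 0.0)  # start at the surface, at rest
    for _ in range(500):     # 10 s of simulated time
        state = step(state, ref=(1.0, 0.0, 0.0), gains=(4.0, 3.0, 0.2))
    print(f"depth after 10 s: {state[0]:.3f} m (reference 1.0 m)")
    ```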

  • Article type: Journal Article
    The real-time detection and counting of rice ears in the field is one of the most important methods for estimating rice yield. The traditional manual counting method has many disadvantages: it is time-consuming, inefficient, and subjective. Computer vision technology can therefore improve the accuracy and efficiency of rice ear counting in the field. The contributions of this article are as follows. (1) This paper establishes a dataset of 3300 rice ear samples representing various complex situations, including variable light, complex backgrounds, overlapping rice, and overlapping leaves. The collected images were manually labeled, and data augmentation was used to increase the sample size. (2) This paper proposes a method that combines the LC-FCN (localization-based counting fully convolutional neural network) model, based on transfer learning, with the watershed algorithm for the recognition of dense rice images. The results show that the model is superior to traditional machine learning methods and to the single-shot multibox detector (SSD) algorithm for target detection, and it is currently an advanced and innovative rice ear counting model. The mean absolute error (MAE) of the model on the 300-image test set is 2.99. The model can be used to count rice ears in the field and can provide reliable basic data for rice yield estimation as well as a rice dataset for research.
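
    The counting stage can be illustrated with standard scikit-image calls. The sketch below assumes `prob` is a per-pixel rice-ear probability map (for example, an LC-FCN output) and splits touching blobs with a distance-transform watershed; it is an approximation of the idea, not the paper's exact pipeline.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def count_ears(prob, thresh=0.5):
        mask = prob > thresh                         # binarise the probability map
        distance = ndi.distance_transform_edt(mask)  # distance to background
        peaks = peak_local_max(distance, min_distance=5, labels=mask)
        markers = np.zeros_like(mask, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        labels = watershed(-distance, markers, mask=mask)  # split touching ears
        return labels.max()

    # Two overlapping square "ears" as a toy probability map.
    prob = np.zeros((64, 64))
    prob[10:20, 10:20] = 0.9
    prob[18:30, 18:30] = 0.9
    print("ear count:", count_ears(prob))
    ```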

  • Article type: Journal Article
    Posture changes in pigs during growth are often precursors of disease. Monitoring pigs' behavioral activities allows pathological changes to be detected earlier and factors threatening pig health to be identified in advance. Pigs tend to be farmed on a large scale, and manual observation by keepers is time-consuming and laborious. Therefore, using computers to monitor the growth processes of pigs in real time, and to recognize the duration and frequency of pigs' postural changes over time, can help prevent outbreaks of porcine diseases. The contributions of this article are as follows: (1) The first human-annotated pig-posture-identification dataset in the world was established, including 800 pictures of each of four pig postures: standing, lying on the stomach, lying on the side, and exploring. (2) Classifying pig postures with a depthwise separable convolutional network achieved an accuracy of 92.45%. The results show that the method proposed in this paper achieves adequate pig-posture recognition in a piggery environment and may be suitable for livestock farm applications.
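
    A depthwise-separable convolution block, the building element implied above, can be written compactly in PyTorch. The tiny four-class classifier below (standing, lying on the stomach, lying on the side, exploring) is purely illustrative and does not reproduce the paper's architecture or its reported accuracy.

    ```python
    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
            self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.pointwise(self.depthwise(x)))

    model = nn.Sequential(
        DepthwiseSeparableConv(3, 32),
        nn.MaxPool2d(2),
        DepthwiseSeparableConv(32, 64),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(64, 4),  # four posture classes
    )

    logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 4])
    ```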

  • Article type: Journal Article
    In collaborative operation, automating the unloading of the forage harvester is of great significance for improving harvesting efficiency and reducing labor intensity. However, non-standard transport trucks and unstructured field environments make it extremely difficult to identify and accurately position loading containers. In this paper, a global model with three coordinate systems is established to describe a collaborative harvesting system. A method based on depth perception is then proposed to dynamically identify and position the truck container, comprising data preprocessing, point cloud pose transformation based on the singular value decomposition (SVD) algorithm, segmentation and projection of the upper edge, edge-line extraction and corner-point positioning based on the Random Sample Consensus (RANSAC) algorithm, and fusion and visualization of the results on the depth image. Finally, the effectiveness of the proposed method was verified by field experiments with different trucks. The results demonstrated that the identification accuracy of the container region is about 90% and the absolute error of center-point positioning is less than 100 mm. The proposed method is robust to containers with different appearances and provides a methodological reference for the dynamic identification and positioning of containers in forage harvesting.
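
    One step of such a pipeline, the SVD-based pose transformation, can be sketched as levelling a point cloud so that its dominant plane becomes horizontal. The example below is an illustration under that assumption, not the paper's full identification and positioning method.

    ```python
    import numpy as np

    def level_cloud(points):
        """Rotate an (N, 3) cloud so its dominant plane normal maps onto the z-axis."""
        centroid = points.mean(axis=0)
        centred = points - centroid
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        normal = vt[-1]            # direction of least variance = plane normal
        if normal[2] < 0:          # keep the normal pointing "up"
            normal = -normal
        z = np.array([0.0, 0.0, 1.0])
        v = np.cross(normal, z)
        c = float(np.dot(normal, z))
        vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        R = np.eye(3) + vx + vx @ vx / (1.0 + c)  # Rodrigues rotation aligning normal with z
        return centred @ R.T + centroid

    cloud = np.random.rand(1000, 3) * [2.0, 1.0, 0.01]  # a roughly planar toy cloud
    print(level_cloud(cloud).std(axis=0))                # z spread stays small after levelling
    ```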

  • Article type: Journal Article
    Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are needed to monitor the status and condition of production by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Three-dimensional (3-D) sensors are now largely affordable and technologically advanced, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value of the information about environmental structures that only 3-D data can provide, based on recent progress in optical 3-D sensors. The review first gives an overview of the different optical 3-D vision techniques, based on their basic principles, and then surveys their applications in agriculture, with a main focus on vehicle navigation and on crop and animal husbandry. The depth dimension provided by 3-D sensors delivers key information that greatly facilitates the implementation of automation and robotics in agriculture.
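
    As a reminder of why the depth dimension is so informative, the sketch below back-projects a depth image into a 3-D point cloud with a pinhole camera model and assumed intrinsics; it is purely illustrative and not tied to any particular sensor discussed in the review.

    ```python
    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """Convert an (H, W) depth image in metres into an (H*W, 3) point cloud."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1).reshape(-1, 3)

    depth = np.full((480, 640), 1.5)  # a flat scene 1.5 m from the camera (toy data)
    pts = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)  # assumed intrinsics
    print(pts.shape, pts[:, 2].mean())
    ```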