aerial image

  • Article type: Journal Article
    The study presents an analysis of changes in the landscape of the Ostrava-Karviná Mining District (Czech Republic) covering a period of more than 170 years. In the area of interest, which is affected by underground coal mining, the study identified both areas affected by change and areas whose land cover was preserved. A detailed assessment of the landscape changes was enabled by landscape metrics and indices, namely the development index and the total landscape change index. The underlying data were obtained from stable cadastre maps (from the year 1836) and aerial images from 1947, 1971, and 2009. Visual photointerpretation of the aerial images and interpretation of the stable cadastre maps made it possible to create land cover maps according to CORINE Land Cover categories. The obtained information on the representation of individual land cover categories was used to identify and analyze changes in the landscape affected by hard coal mining.
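
    The abstract does not give the formulas behind the two indices; the short sketch below only illustrates the general idea of quantifying land cover change from per-category areas at two dates. The category names, areas, and the change formula are illustrative assumptions, not the authors' definitions.

      # Illustrative sketch: share of the landscape that changed category,
      # computed from aggregate per-category areas (hectares). Not the
      # paper's development index or total landscape change index.
      def total_change_index(areas_t0: dict, areas_t1: dict) -> float:
          categories = set(areas_t0) | set(areas_t1)
          total_area = sum(areas_t0.values())
          shifted = sum(abs(areas_t1.get(c, 0.0) - areas_t0.get(c, 0.0))
                        for c in categories)
          # A hectare that changes category shows up once as a loss and once
          # as a gain, hence the division by 2.
          return shifted / (2.0 * total_area)

      # CORINE-style categories with made-up areas.
      cadastre_1836 = {"arable land": 5200, "forest": 2100, "urban fabric": 400}
      aerial_2009 = {"arable land": 2900, "forest": 1800, "urban fabric": 2300,
                     "mine/dump sites": 700}
      print(f"changed share: {total_change_index(cadastre_1836, aerial_2009):.2f}")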

  • Article type: Journal Article
    Aerial image target detection is essential for urban planning, traffic monitoring, and disaster assessment. However, existing detection algorithms struggle with small target recognition and accuracy in complex environments. To address this issue, this paper proposes an improved model based on YOLOv8, named MPE-YOLO. Initially, a multilevel feature integrator (MFI) module is employed to enhance the representation of small target features and mitigate information loss during the feature fusion process. For the backbone network of the model, a perception enhancement convolution (PEC) module is introduced to replace traditional convolutional layers, thereby expanding the network's fine-grained feature processing capability. Furthermore, an enhanced scope-C2f (ES-C2f) module is designed, utilizing channel expansion and stacking of multiscale convolutional kernels to enhance the network's ability to capture small target details. After a series of experiments on the VisDrone, RSOD, and AI-TOD datasets, the model has not only demonstrated superior performance in aerial image detection tasks compared to existing advanced algorithms but also achieved a lightweight model structure. The experimental results demonstrate the potential of MPE-YOLO in enhancing the accuracy and operational efficiency of aerial target detection. Code will be available online (https://github.com/zhanderen/MPE-YOLO).
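
    The exact layouts of MFI, PEC, and ES-C2f live in the linked repository rather than the abstract; the PyTorch sketch below only illustrates the ES-C2f idea the abstract names, channel expansion followed by stacked multiscale kernels. The specific layer arrangement is an assumption.

      import torch
      import torch.nn as nn

      class MultiScaleBlock(nn.Module):
          """Channel expansion + stacked multiscale kernels (assumed layout)."""
          def __init__(self, channels: int, expansion: int = 2):
              super().__init__()
              mid = channels * expansion                   # channel expansion
              self.expand = nn.Conv2d(channels, mid, 1)
              self.k3 = nn.Conv2d(mid, mid, 3, padding=1, groups=mid)  # depthwise 3x3
              self.k5 = nn.Conv2d(mid, mid, 5, padding=2, groups=mid)  # depthwise 5x5
              self.fuse = nn.Conv2d(2 * mid, channels, 1)  # back to input width
              self.act = nn.SiLU()

          def forward(self, x):
              y = self.act(self.expand(x))
              y = torch.cat([self.k3(y), self.k5(y)], dim=1)
              return x + self.fuse(self.act(y))            # residual connection

      print(MultiScaleBlock(64)(torch.randn(1, 64, 80, 80)).shape)  # (1, 64, 80, 80)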

  • Article type: Journal Article
    With the progress of science and technology, artificial intelligence is widely used in various disciplines and has produced remarkable results. Research on target detection algorithms has significantly improved the performance and role of unmanned aerial vehicles (UAVs), which play an irreplaceable role in preventing forest fires, evacuating crowds, surveying, and rescuing explorers. Target detection algorithms deployed on UAVs are already applied in production and daily life, but achieving higher detection accuracy and better adaptability remains the motivation for continued research. In aerial images, small targets are difficult for conventional detection algorithms to detect because of the high shooting altitude, small target size, low resolution, and few features. In this paper, the UN-YOLOv5s algorithm addresses the difficult problem of small target detection. A more accurate small target detection (MASD) mechanism greatly improves the detection accuracy of small and medium targets, and a multi-scale feature fusion (MCF) path is combined with it to fuse the semantic and location information of the image and improve the expressive ability of the model. A new convolution SimAM residual (CSR) module is introduced to make the network more stable and focused. On the VisDrone dataset, the mean average precision (mAP) of UAV-necessity you only look once v5s (UN-YOLOv5s) is 8.4% higher than that of the original algorithm. Compared with YOLOv5l of the same version, the mAP is increased by 2.2% and the giga floating-point operations (GFLOPs) are reduced by 65.3%. Compared with YOLOv3 of the same series, the mAP is increased by 1.8% and GFLOPs are reduced by 75.8%. Compared with YOLOv8s of the same series, the mAP detection accuracy is improved by 1.1%.
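
    SimAM itself is a published parameter-free attention module (Yang et al., ICML 2021); a minimal sketch of it follows, wrapped in an assumed convolution-plus-residual arrangement standing in for the paper's CSR module.

      import torch
      import torch.nn as nn

      class SimAM(nn.Module):
          """Parameter-free SimAM attention (Yang et al., ICML 2021)."""
          def __init__(self, eps: float = 1e-4):
              super().__init__()
              self.eps = eps

          def forward(self, x):
              n = x.shape[2] * x.shape[3] - 1
              d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
              v = d.sum(dim=(2, 3), keepdim=True) / n
              e_inv = d / (4 * (v + self.eps)) + 0.5   # per-pixel inverse energy
              return x * torch.sigmoid(e_inv)          # reweight activations

      class CSRBlock(nn.Module):
          """Convolution + SimAM + residual skip (assumed arrangement)."""
          def __init__(self, c: int):
              super().__init__()
              self.conv = nn.Conv2d(c, c, 3, padding=1)
              self.attn = SimAM()

          def forward(self, x):
              return x + self.attn(self.conv(x))

      print(CSRBlock(32)(torch.randn(1, 32, 40, 40)).shape)   # (1, 32, 40, 40)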

  • Article type: Journal Article
    In this study, we propose an algorithm to improve the accuracy of tiny object segmentation for precise pothole detection on asphalt pavements. The approach comprises a three-step process: MOED, VAPOR, and Exception Processing, designed to extract pothole edges, validate the results, and manage detected abnormalities. The proposed algorithm addresses the limitations of previous methods and offers several advantages, including wider coverage. We experimentally evaluated the performance of the proposed algorithm by filming roads in various regions of South Korea using a UAV at high altitudes of 30-70 m. The results show that our algorithm outperforms previous methods in terms of instance segmentation performance for small objects such as potholes. Our study offers a practical and efficient solution for pothole detection and contributes to road safety maintenance and monitoring.
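
    The abstract names the three steps (MOED, VAPOR, Exception Processing) without detailing them; as a generic illustration of the first step's goal, the sketch below extracts candidate pothole edges from a binary segmentation mask with standard OpenCV calls. It is not the authors' MOED algorithm, and the area threshold is an assumption.

      import cv2
      import numpy as np

      mask = np.zeros((200, 200), np.uint8)
      cv2.ellipse(mask, (100, 100), (40, 25), 0, 0, 360, 255, -1)   # fake pothole mask

      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      for c in contours:
          area = cv2.contourArea(c)
          if area < 50:      # drop specks; threshold is an assumption
              continue
          print(f"pothole candidate: area={area:.0f} px, edge points={len(c)}")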

  • Article type: Journal Article
    In this paper, we propose an aerial image stitching method based on the as-projective-as-possible (APAP) algorithm, addressing the artifacts, distortions, and stitching failures caused by the scarcity of feature points in multispectral aerial images with a certain degree of parallax. Our method incorporates the accelerated nonlinear diffusion algorithm (AKAZE) into the APAP algorithm. First, we use the fast and stable AKAZE detector to extract the feature points of the aerial images; then, based on the registration model of the APAP algorithm, we add line protection constraints, global similarity constraints, and local similarity constraints to preserve the image structure information and produce a panorama. Experimental results on several datasets demonstrate that the proposed method is effective when dealing with multispectral aerial images. Our method can suppress artifacts and distortions and reduce incomplete stitching. Compared with state-of-the-art image stitching methods, including APAP and adaptive as-natural-as-possible image stitching (AANAP), and two of the most popular UAV image stitching tools, Pix4D and OpenDroneMap (ODM), our method outperforms them both quantitatively and qualitatively.
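
    A minimal sketch of the feature stage described here: AKAZE keypoints matched across two overlapping views, followed by a global homography estimate. APAP's local mesh warping and the added structure-preserving constraints are beyond this snippet, and the synthetic image pair merely stands in for real multispectral frames.

      import cv2
      import numpy as np

      # Synthetic overlapping pair standing in for two multispectral frames.
      rng = np.random.default_rng(0)
      scene = (rng.random((400, 600)) * 255).astype(np.uint8)
      scene = cv2.GaussianBlur(scene, (5, 5), 2)       # stable blob texture
      img1, img2 = scene[:, :400], scene[:, 150:550]   # two overlapping views

      akaze = cv2.AKAZE_create()                       # nonlinear-diffusion features
      kp1, des1 = akaze.detectAndCompute(img1, None)
      kp2, des2 = akaze.detectAndCompute(img2, None)

      matcher = cv2.BFMatcher(cv2.NORM_HAMMING)        # AKAZE descriptors are binary
      good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
              if m.distance < 0.75 * n.distance]       # Lowe ratio test

      src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
      dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
      H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
      print(f"{len(good)} matches, {int(inliers.sum())} RANSAC inliers")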

  • Article type: Journal Article
    CNN-based object detectors have achieved great success in recent years. Available detectors adopt horizontal bounding boxes to locate various objects. However, in some unique scenarios, objects such as buildings and vehicles in aerial images may be densely arranged and have apparent orientations. Therefore, some approaches extend the horizontal bounding box to an oriented bounding box to better extract objects, usually by directly regressing the angle or the corners. However, this suffers from the discontinuous boundary problem caused by angular periodicity or corner ordering. In this paper, we propose a simple but efficient oriented object detector based on the YOLOv4 architecture. We regress the offset of an object's front point instead of its angle or corners to avoid the above-mentioned problems. In addition, we introduce an intersection over union (IoU) correction factor to make the training process more stable. Experimental results on two public datasets, DOTA and HRSC2016, demonstrate that the proposed method significantly outperforms other methods in terms of detection speed while maintaining high accuracy. On DOTA, our method achieved the highest mAP for classes with prominent front-side appearances, such as small vehicles, large vehicles, and ships. The highly efficient YOLOv4 architecture increases detection speed by more than 25% compared with the other approaches.
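
    A worked sketch of why front-point regression sidesteps angular periodicity: the network regresses a continuous 2-D offset, and the angle is only derived afterwards. The (center, size, front point) parametrization below is an illustrative assumption about the paper's scheme.

      import math

      def decode_obb(cx, cy, w, h, fx, fy):
          """Oriented box from center, size, and regressed front point (fx, fy).

          The network learns the continuous offset (fx - cx, fy - cy); the
          periodic angle is only derived afterwards, so no regression target
          jumps at the period boundary.
          """
          angle = math.atan2(fy - cy, fx - cx)
          return cx, cy, w, h, angle

      print(decode_obb(50, 50, 30, 10, 64, 58))   # angle ≈ 0.52 rad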

  • Article type: Journal Article
    California's dairy sector accounts for ∼50% of anthropogenic CH4 emissions in the state's greenhouse gas (GHG) emission inventory. Although California dairy facilities' location and herd size vary over time, atmospheric inverse modeling studies rely on decade-old facility-scale geospatial information. For the first time, we apply artificial intelligence (AI) to aerial imagery to estimate dairy CH4 emissions from California's San Joaquin Valley (SJV), a region with ∼90% of the state's dairy population. Using an AI method, we process 316,882 images to estimate the facility-scale herd size across the SJV. The AI approach predicts herd size that strongly (>95%) correlates with that made by human visual inspection, providing a low-cost alternative to the labor-intensive inventory development process. We estimate SJV's dairy enteric and manure CH4 emissions for 2018 to be 496-763 Gg/yr (mean = 624; 95% confidence) using the predicted herd size. We also apply our AI approach to estimate CH4 emission reduction from anaerobic digester deployment. We identify 162 large (90th percentile) farms and estimate a CH4 reduction potential of 83 Gg CH4/yr for these large facilities from anaerobic digester adoption. The results indicate that our AI approach can be applied to characterize the manure system (e.g., use of an anaerobic lagoon) and estimate GHG emissions for other sectors.
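
    A back-of-envelope sketch of the inventory arithmetic implied here: AI-predicted herd sizes multiplied by per-cow emission factors. The factors and facility counts below are placeholders, not the paper's values; the paper's own estimate for SJV in 2018 is 496-763 Gg/yr.

      # Placeholder emission factors (kg CH4 per cow per year) -- assumed,
      # not the paper's values.
      ENTERIC = 120.0
      MANURE = 100.0

      # AI-predicted herd sizes per facility (made-up numbers).
      predicted_herds = {"facility_A": 1800, "facility_B": 950}

      total_kg = sum(n * (ENTERIC + MANURE) for n in predicted_herds.values())
      print(f"{total_kg / 1e6:.3f} Gg CH4/yr")    # 1 Gg = 1e6 kg -> 0.605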

  • Article type: Journal Article
    Manual assessment of the flower abundance of different flowering plant species in grasslands is a time-consuming process. We present an automated approach that determines flower abundance in grasslands from drone-based aerial images using a deep learning (Faster R-CNN) object detection approach, trained and evaluated on data from five flights at two sites. Our deep learning network was able to identify and classify individual flowers. The novel method allowed generating spatially explicit maps of flower abundance that met or exceeded the accuracy of the manual-count-data extrapolation method while being less labor intensive. The results were very good for some types of flowers, with precision and recall close to or higher than 90%. Other flowers were detected poorly for reasons such as a lack of sufficient training data, appearance changes due to phenology, or flowers being too small to be reliably distinguishable in the aerial images. The method was able to give precise estimates of the abundance of many flowering plant species. In the future, the collection of more training data will allow better predictions for the flowers that are not yet well predicted. The developed pipeline can be applied to any sort of aerial object detection problem.
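
    A minimal torchvision sketch of the kind of Faster R-CNN setup described: a pretrained detector whose box head is replaced for N flower classes plus background. The class count is an assumption; the paper's training data and hyperparameters are not in the abstract.

      import torch
      import torchvision
      from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

      num_classes = 6   # e.g., 5 flower species + background (assumed count)
      model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
      in_features = model.roi_heads.box_predictor.cls_score.in_features
      model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

      model.eval()
      with torch.no_grad():
          preds = model([torch.rand(3, 512, 512)])   # dicts of boxes/labels/scores
      print(preds[0]["boxes"].shape)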

  • Article type: Journal Article
    In the field of aerial image object detection based on deep learning, it's difficult to extract features because the images are obtained from a top-down perspective. Therefore, there are numerous false detection boxes. The existing post-processing methods mainly remove overlapped detection boxes, but it's hard to eliminate false detection boxes. The proposed dual non-maximum suppression (dual-NMS) combines the density of detection boxes that are generated for each detected object with the corresponding classification confidence to autonomously remove the false detection boxes. With the dual-NMS as a post-processing method, the precision is greatly improved under the premise of keeping recall unchanged. In vehicle detection in aerial imagery (VEDAI) and dataset for object detection in aerial images (DOTA) datasets, the removal rate of false detection boxes is over 50%. Additionally, according to the characteristics of aerial images, the correlation calculation layer for feature channel separation and the dilated convolution guidance structure are proposed to enhance the feature extraction ability of the network, and these structures constitute the correlation network (CorrNet). Compared with you only look once (YOLOv3), the mean average precision (mAP) of the CorrNet for DOTA increased by 9.78%. Commingled with dual-NMS, the detection effect in aerial images is significantly improved.
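
    The abstract describes dual-NMS as combining the density of detection boxes with classification confidence; the sketch below implements that idea with standard torchvision ops. The thresholds and the exact combination rule are assumptions, not the paper's formulation.

      import torch
      from torchvision.ops import box_iou, nms

      def dual_nms(boxes, scores, iou_thr=0.5, support_thr=2, conf_thr=0.3):
          keep = nms(boxes, scores, iou_thr)           # standard NMS first
          ious = box_iou(boxes[keep], boxes)           # kept boxes vs. all raw boxes
          density = (ious > iou_thr).sum(dim=1)        # raw detections backing each box
          ok = (density >= support_thr) | (scores[keep] >= conf_thr)
          return keep[ok]                              # isolated weak boxes dropped

      boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.], [50., 50., 60., 60.]])
      scores = torch.tensor([0.9, 0.8, 0.2])
      print(dual_nms(boxes, scores))                   # tensor([0]): lone weak box removed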

  • Article type: Journal Article
    In aerial images, corner points can be detected to describe the structural information of buildings for city modeling, geo-localization, and so on. For this specific vision task, existing generic corner detectors perform poorly, as they are incapable of distinguishing corner points on buildings from those on other objects such as trees and shadows. Recently, fully convolutional networks (FCNs) have been developed for semantic image segmentation that are able to recognize a designated kind of object through a training process with a manually labeled dataset. Motivated by this achievement, an FCN-based approach is proposed in the present work to detect building corners in aerial images. First, a DeepLab model composed of improved FCNs and fully-connected conditional random fields (CRFs) is trained end-to-end for building region segmentation. The segmentation is then further improved by a morphological opening operation to increase its accuracy. Corner points are finally detected on the contour curves of building regions using a scale-space detector. Experimental results show that the proposed building corner detection approach achieves an F-measure of 0.83 on the test image set and outperforms a number of state-of-the-art corner detectors by a large margin.
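
    A sketch of the post-segmentation steps listed here: morphological opening of the building mask, contour extraction, and corner candidates on the contour. cv2.approxPolyDP stands in for the paper's scale-space corner detector, and the mask is synthetic.

      import cv2
      import numpy as np

      mask = np.zeros((200, 200), np.uint8)
      cv2.rectangle(mask, (40, 60), (160, 140), 255, -1)        # fake building mask

      kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
      opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small artifacts

      contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      for c in contours:
          eps = 0.01 * cv2.arcLength(c, True)
          corners = cv2.approxPolyDP(c, eps, True)    # polygon vertices ~ corner points
          print(f"building contour with {len(corners)} corner candidates")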