edge detection

  • Article type: Journal Article
    Recent improvements in artificial intelligence and computer vision make it possible to automatically detect abnormalities in medical images. Skin lesions are one broad class of these. Some types of lesions lead to skin cancer, which itself comes in several forms; melanoma is among the deadliest, and its early diagnosis is of utmost importance. Artificial intelligence greatly aids treatment by enabling quick and precise diagnosis of these conditions. The identification and delineation of boundaries inside skin lesions have shown promise when basic image processing approaches are used for edge detection, and further enhancements to edge detection are possible. In this paper, the use of fractional differentiation for improved edge detection is explored in the application of skin lesion detection. A framework based on fractional differential filters for edge detection in skin lesion images is proposed that can improve the automatic detection rate of malignant melanoma. The derived images are used to enhance the input images, and the enhanced images then undergo a deep-learning-based classification process. The well-studied HAM10000 dataset is used in the experiments. The system achieves 81.04% accuracy with the EfficientNet model using the proposed fractional-derivative-based enhancement, whereas accuracy is around 77.94% when using the original images. In almost all experiments, the enhanced images improved accuracy, showing that the proposed method improves recognition performance.
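The abstract does not specify the filter construction, but fractional-derivative edge operators are commonly built from the Grünwald–Letnikov (GL) expansion, whose mask coefficients are w_k = (-1)^k C(alpha, k). A minimal NumPy sketch under that assumption (the order `alpha`, window length `n`, and circular shifting via `np.roll` are illustrative choices, not the authors' implementation):

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Grünwald–Letnikov mask coefficients w_k = (-1)^k C(alpha, k)."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def fractional_edge_map(img, alpha=0.5, n=3):
    """Apply a 1-D GL fractional-derivative mask along x and y and
    combine the two responses into one edge-strength map.
    Boundary handling is simplified: np.roll wraps at the border."""
    w = gl_coeffs(alpha, n)
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for k in range(n):
        gx += w[k] * np.roll(img, k, axis=1)  # gx[y, x] += w_k * img[y, x-k]
        gy += w[k] * np.roll(img, k, axis=0)  # gy[y, x] += w_k * img[y-k, x]
    return np.hypot(gx, gy)
```

For 0 < alpha < 1 the mask retains more of the original intensity than an integer-order derivative would, which is one reason fractional masks are used for enhancement rather than pure edge extraction.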

  • Article type: Journal Article
    Solar panels may suffer from faults, which can cause high temperatures and significantly degrade power generation. To detect faults of solar panels in large photovoltaic plants, drones with infrared cameras have been deployed. Drones may capture a huge number of infrared images, and manually analyzing them all is not realistic. To solve this problem, we develop a Deep Edge-Based Fault Detection (DEBFD) method, which applies convolutional neural networks (CNNs) for edge detection and object detection on the captured infrared images. In particular, a machine-learning-based contour filter is designed to eliminate incorrect background contours before faults of the solar panels are detected. Based on these fault detection results, solar panels can be classified into two classes, i.e., normal and faulty. We collected 2060 images in multiple scenes and achieved a high macro F1 score. Our method achieved a frame rate of 28 fps over infrared images of solar panels on an NVIDIA GeForce RTX 2080 Ti GPU.
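The DEBFD contour filter is learned from data, and its features are not listed in the abstract. As a hedged stand-in, the sketch below filters candidate contours by bounding-box area and aspect ratio, which captures the basic idea of rejecting background contours (the thresholds and the (x, y)-point contour representation are assumptions, not the paper's classifier):

```python
def filter_contours(contours, min_area=100.0, max_aspect=4.0):
    """Keep contours whose bounding box is large enough and not too
    elongated to be a plausible solar-panel outline; drop small or
    line-like background contours. Each contour is a list of (x, y)."""
    kept = []
    for pts in contours:
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        w = max(xs) - min(xs)
        h = max(ys) - min(ys)
        if w == 0 or h == 0:
            continue  # degenerate contour
        if w * h < min_area:
            continue  # too small: likely noise
        if max(w / h, h / w) > max_aspect:
            continue  # too elongated: likely a ground edge
        kept.append(pts)
    return kept
```

A learned filter would replace these hand-set thresholds with a classifier over contour features, but the control flow is the same: score each contour, keep the plausible ones, then run fault detection on what remains.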

  • Article type: Journal Article
    In response to the analysis of the functional status of forearm blood vessels, this paper fully considers the orientation of the vascular skeleton and the geometric characteristics of blood vessels and proposes a blood vessel width calculation algorithm based on the radius estimation of the tangent circle (RETC) in forearm near-infrared images. First, the initial infrared image obtained by the infrared camera is preprocessed by image cropping, contrast stretching, denoising, enhancement, and initial segmentation. Second, the Zhang-Suen refinement algorithm is used to extract the vascular skeleton. Third, the Canny edge detection method is used to perform vascular edge detection. Finally, the RETC algorithm is developed to calculate the vessel width. This paper evaluates the accuracy of the proposed RETC algorithm; experimental results show that the mean absolute error between the vessel width obtained by our algorithm and the reference vessel width is as low as 0.36, with a variance of only 0.10, significantly lower than that of traditional measurement methods.
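The tangent-circle step can be approximated as finding the largest circle centred on a skeleton point that still fits entirely inside the binary vessel mask; twice that radius estimates the local vessel width. A rough NumPy sketch (integer radii and the all-inside test are simplifications of the paper's estimator, not its exact formulation):

```python
import numpy as np

def tangent_circle_radius(mask, cy, cx):
    """Radius of the largest circle centred at skeleton point (cy, cx)
    whose in-image pixels all lie inside the binary vessel mask."""
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
    r = 0
    while True:
        inside = dist2 <= (r + 1) ** 2
        if not mask[inside].all():
            return r  # growing further would leave the vessel
        r += 1

def vessel_width(mask, cy, cx):
    """Local vessel width = diameter of the inscribed tangent circle."""
    return 2 * tangent_circle_radius(mask, cy, cx)
```

In practice this would be evaluated at every skeleton point produced by the Zhang-Suen thinning step, giving a width profile along the vessel.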

  • Article type: Journal Article
    In the field of 2-D image processing and computer vision, accurately detecting and segmenting objects that overlap or are obscured remains a challenge. The difficulty is exacerbated in the analysis of shoeprints used in forensic investigations, because the prints are embedded in noisy environments such as the ground and can be indistinct. Traditional convolutional neural networks (CNNs), despite their success in various image analysis tasks, struggle to accurately delineate overlapping objects due to the complexity of segmenting intertwined textures and boundaries against a background of noise. This study introduces a YOLO (You Only Look Once) model enhanced by edge detection and image segmentation techniques to improve the detection of overlapping shoeprints. By focusing on the critical boundary information between shoeprint textures and the ground, our method demonstrates improvements in sensitivity and precision, achieving confidence levels above 85% for minimally overlapped images and maintaining above 70% for extensively overlapped instances. Heatmaps of convolution layers were generated to show how the network converges towards successful detections using these enhancements. This research may provide a methodology for addressing the broader challenge of detecting multiple overlapping objects against noisy backgrounds.
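The abstract does not detail how edge information is injected into the YOLO pipeline; one plausible preprocessing step is to blend the grayscale input with its Sobel gradient magnitude so boundary texture between print and ground is emphasised. A sketch under that assumption (the blend weight is arbitrary, and this is not the study's confirmed pipeline):

```python
import numpy as np

def sobel_magnitude(gray):
    """Gradient magnitude via 3x3 Sobel kernels (pure NumPy),
    with edge-replicated padding at the borders."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            window = padded[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    return np.hypot(gx, gy)

def edge_enhanced(gray, weight=0.5):
    """Blend a normalised image with its normalised edge map to
    emphasise boundaries before feeding it to a detector."""
    mag = sobel_magnitude(gray)
    if mag.max() > 0:
        mag = mag / mag.max()
    g = gray.astype(float)
    if g.max() > 0:
        g = g / g.max()
    return (1 - weight) * g + weight * mag
```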

  • Article type: Journal Article
    BACKGROUND: The cardiothoracic ratio (CTR) based on postero-anterior chest X-ray (P-A CXR) images is one of the most commonly used cardiac measurements and an indicator for initially evaluating cardiac diseases. However, the heart is not readily observable on P-A CXR images compared to the lung fields. Therefore, radiologists usually determine the CTR's right and left heart border points manually, from the left and right lung fields adjacent to the heart. Manual CTR measurement based on P-A CXR images requires experienced radiologists and is time-consuming and laborious.
    METHODS: Based on the above, this article proposes a novel, fully automatic CTR calculation method based on lung fields extracted from P-A CXR images using convolutional neural networks (CNNs), overcoming the limitations of heart segmentation and avoiding its errors. First, the lung field mask images are extracted from the P-A CXR images by pre-trained CNNs. Second, a novel localization method for the heart's right and left border points is proposed based on the two-dimensional projection morphology of the lung field mask images.
    RESULTS: The results show that the mean distance errors along the x-axis of the CTR's four key points in test sets T1 (21 static 512 × 512 P-A CXR images) and T2 (13 dynamic 512 × 512 P-A CXR images), based on various pre-trained CNNs, are 4.1161 and 3.2116 pixels, respectively. In addition, the mean CTR errors on T1 and T2 based on the four proposed models are 0.0208 and 0.0180, respectively.
    CONCLUSIONS: Our proposed model matches the CTR calculation performance of the previous CardioNet model, avoids heart segmentation, and takes less time. The proposed method is therefore practical and feasible, and may become an effective tool for initially evaluating cardiac diseases.
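Once the four key points are located, the CTR itself is a simple ratio: maximal horizontal cardiac diameter over maximal thoracic diameter. A sketch using only the key-point x coordinates (parameter names are illustrative, not the paper's API):

```python
def cardiothoracic_ratio(heart_left_x, heart_right_x,
                         chest_left_x, chest_right_x):
    """CTR = maximal horizontal cardiac diameter / maximal thoracic
    diameter, from the four key-point x coordinates on a P-A CXR."""
    cardiac = abs(heart_right_x - heart_left_x)
    thoracic = abs(chest_right_x - chest_left_x)
    if thoracic == 0:
        raise ValueError("thoracic diameter must be non-zero")
    return cardiac / thoracic
```

A CTR above roughly 0.5 is the conventional screening threshold for cardiomegaly, which is why small key-point localization errors (the paper's ~4 px and ~0.02 CTR error) matter clinically.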

  • Article type: Journal Article
    Fluorescence intravital microscopy captures large data sets of dynamic multicellular interactions within various organs, such as the lungs, liver, and brain, of living subjects. In medical imaging, edge detection is used to accurately identify and delineate important structures and boundaries inside the images. To improve edge sharpness, edge detection frequently requires the inclusion of low-level features. Here, a machine learning approach is needed to automate the edge detection of multicellular aggregates of distinctly labeled blood cells within the microcirculation. In this work, the Structured Adaptive Boosting Trees algorithm (AdaBoost.S) is proposed to overcome some of the edge-detection challenges associated with medical images. The algorithm design is based on the observation that edges over an image mask often exhibit special structures and are interdependent; such structures can be predicted using features extracted from a larger image patch that covers the image edge mask. The proposed AdaBoost.S is applied to detect multicellular aggregates within blood vessels in fluorescence lung intravital images of mice exposed to e-cigarette vapor. Its predictive capability for detecting platelet-neutrophil aggregates within the lung blood vessels is evaluated against three conventional machine learning algorithms: Random Forest, XGBoost, and Decision Tree. AdaBoost.S exhibits a mean recall, F-score, and precision of 0.81, 0.79, and 0.78, respectively, and has statistically better recall and F-score than all three existing algorithms. Although AdaBoost.S does not outperform Random Forest in precision, it remains superior to the XGBoost and Decision Tree algorithms. The proposed AdaBoost.S is widely applicable to the analysis of other fluorescence intravital microscopy applications, including cancer, infection, and cardiovascular disease.

  • Article type: Journal Article
    This paper presents a novel segmentation algorithm specially developed for applications in 3D point clouds with high variability and noise, particularly suitable for heritage building 3D data. The method can be categorized within the segmentation procedures based on edge detection. In addition, it uses a graph-based topological structure generated from the supervoxelization of the 3D point clouds, which is used to make the closure of the edge points and to define the different segments. The algorithm provides a valuable tool for generating results that can be used in subsequent classification tasks and broader computer applications dealing with 3D point clouds. One of the characteristics of this segmentation method is that it is unsupervised, which makes it particularly advantageous for heritage applications where labelled data is scarce. It is also easily adaptable to different edge point detection and supervoxelization algorithms. Finally, the results show that the 3D data can be segmented into different architectural elements, which is important for further classification or recognition. Extensive testing on real data from historic buildings demonstrated the effectiveness of the method. The results show superior performance compared to three other segmentation methods, both globally and in the segmentation of planar and curved zones of historic buildings.
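Given detected edge points and a supervoxel adjacency graph, the segment-definition step can be sketched as taking connected components of the graph after removing edge supervoxels; each remaining component is one segment. A minimal Python sketch (the adjacency-dict representation and node labels are assumptions, not the authors' data structure):

```python
from collections import deque

def segments_from_edges(adjacency, edge_nodes):
    """Connected components of the supervoxel adjacency graph after
    removing edge supervoxels; each component is one segment.
    adjacency: {node: [neighbour, ...]}, edge_nodes: iterable of nodes."""
    edge_nodes = set(edge_nodes)
    seen = set()
    segments = []
    for start in adjacency:
        if start in edge_nodes or start in seen:
            continue
        comp = []
        queue = deque([start])
        seen.add(start)
        while queue:  # breadth-first flood fill over non-edge nodes
            node = queue.popleft()
            comp.append(node)
            for nb in adjacency[node]:
                if nb not in seen and nb not in edge_nodes:
                    seen.add(nb)
                    queue.append(nb)
        segments.append(sorted(comp))
    return segments
```

Being unsupervised, this kind of closure-plus-components step needs no labelled training data, which matches the paper's motivation for heritage buildings where labels are scarce.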

  • Article type: Journal Article
    This research introduces a new approach to elevate the precision of image edge detection through a new algorithm rooted in the coefficients derived from the subclass SCt,ρ (CSKP model). Our method employs convolution operations on input image pixels, utilizing the CSKP mask window in eight distinct directions, fostering a comprehensive and multi-directional analysis of edge features. To gauge the efficacy of our algorithm, image quality is assessed through perceptually significant metrics, including contrast, correlation, energy, homogeneity, and entropy. The study aims to contribute a valuable tool for diverse applications such as computer vision and medical imaging by presenting a robust and innovative solution to enhance image edge detection. The results demonstrate notable improvements, affirming the potential of the proposed algorithm to advance the current state-of-the-art in image processing.

  • Article type: Journal Article
    With the rapid development of emerging intelligent, flexible, transparent, and wearable electronic devices, such as quantum-dot-based micro light-emitting diodes (micro-LEDs), thin-film transistors (TFTs), and flexible sensors, numerous pixel-level printing technologies have emerged. Among them, inkjet printing has proven to be a useful and effective tool for consistently printing micron-level ink droplets, for instance smaller than 50 µm, onto wearable electronic devices. However, quickly and accurately determining print quality, which is significant for device performance, is challenging due to the large quantity and micron size of the ink droplets. Therefore, leveraging existing image processing algorithms, we have developed an effective method and software, based on edge detection, for quickly measuring the morphology of inkjet-printed ink droplets. We believe this method can meet the increasing demand for quick evaluation of print quality in inkjet printing.
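One way such software can quantify droplet morphology after edge detection is to compute the area enclosed by each droplet boundary and its equivalent circular diameter, then compare against a target size such as the 50 µm mentioned above. A hedged sketch on a binary droplet mask (the metric names, scale parameter, and spec check are illustrative, not the paper's software):

```python
import math
import numpy as np

def droplet_metrics(mask, microns_per_pixel=1.0):
    """Area and equivalent circular diameter of one printed droplet
    from its binary mask (nonzero pixels = droplet)."""
    area_px = int(np.count_nonzero(mask))
    area_um2 = area_px * microns_per_pixel ** 2
    # diameter of the circle with the same area
    diameter_um = 2.0 * math.sqrt(area_um2 / math.pi)
    return {"area_um2": area_um2, "diameter_um": diameter_um}

def within_spec(mask, max_diameter_um=50.0, microns_per_pixel=1.0):
    """Flag whether a droplet meets a maximum-diameter spec."""
    return droplet_metrics(mask, microns_per_pixel)["diameter_um"] <= max_diameter_um
```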

  • Article type: Journal Article
    BACKGROUND: X-ray computed tomography (CT) is a powerful tool for measuring plant root growth in soil. However, a rapid scan with larger pots, which is required for throughput-prioritized crop breeding, results in high noise levels, low resolution, and blurred root segments in the CT volumes. Moreover, while plant root segmentation is essential for root quantification, detailed conditional studies on segmenting noisy root segments are scarce. The present study aimed to investigate the effects of scanning time and deep learning-based restoration of image quality on semantic segmentation of blurry rice (Oryza sativa) root segments in CT volumes.
    RESULTS: VoxResNet, a convolutional neural network-based voxel-wise residual network, was used as the segmentation model. The training efficiency of the model was compared using CT volumes obtained at scan times of 33, 66, 150, 300, and 600 s. The learning efficiencies of the samples were similar, except for scan times of 33 and 66 s. In addition, the noise levels of the predicted volumes differed among scanning conditions, indicating that the noise level at a scan time ≥ 150 s does not affect the model training efficiency. Conventional filtering methods, such as median filtering and edge detection, increased the training efficiency by approximately 10% under all conditions. However, the training efficiency of the 33 and 66 s-scanned samples remained relatively low. We concluded that the scan time must be at least 150 s so as not to affect segmentation. Finally, we constructed a semantic segmentation model for 150 s-scanned CT volumes, for which the Dice loss reached 0.093. This model could not predict lateral roots, which were not included in the training data; this limitation will be addressed by preparing appropriate training data.
    CONCLUSIONS: A semantic segmentation model can be constructed even with rapidly scanned CT volumes with high noise levels. Given that scanning times ≥ 150 s did not affect the segmentation results, this technique holds promise for rapid and low-dose scanning. This study offers insights into images other than CT volumes with high noise levels that are challenging to determine when annotating.
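The conventional median filtering mentioned above can be sketched as a 3 × 3 filter that replaces each pixel with the median of its neighbourhood, suppressing the salt-and-pepper noise typical of rapid scans (a pure-NumPy 2-D slice version for brevity; the study works on 3-D volumes):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter built from nine shifted copies of the
    edge-padded image (pure NumPy, no SciPy dependency)."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)
```

Because the median ignores extreme outliers in each neighbourhood, isolated hot voxels vanish while flat regions and strong edges are largely preserved, which is consistent with the reported training-efficiency gain on noisy scans.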