point cloud data

  • Article type: Journal Article
    Establishing accurate structure-property linkages and precise phase volume accuracy in 3D microstructure reconstruction of materials remains challenging, particularly with limited samples. This paper presents an optimized method for reconstructing 3D microstructures of various materials, including isotropic and anisotropic types with two and three phases, using convolutional occupancy networks and point clouds from inner layers of the microstructure. The method emphasizes precise phase representation and compatibility with point cloud data. A stage within the Quality of Connection Function (QCF) repetition loop optimizes the weights of the convolutional occupancy networks model to minimize the error between the statistical properties of the microstructure and those of the reconstructed model. This model successfully reconstructs 3D representations from initial 2D serial images. Comparisons with screened Poisson surface reconstruction and local implicit grid methods demonstrate the model's efficacy. The developed model proves suitable for high-quality 3D microstructure reconstruction, aiding in structure-property linkages and finite element analysis.
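The statistical matching that the QCF loop drives toward zero can be sketched in a few lines. Everything below is an illustrative assumption: the descriptor (an axial two-point correlation), the synthetic volume, and all names are stand-ins, not the paper's implementation.

```python
import numpy as np

def two_point_correlation(phase, max_r):
    """Axial two-point probability S2(r): probability that two voxels
    separated by r along the first axis both lie in the given phase."""
    s2 = []
    for r in range(max_r + 1):
        a = phase[:phase.shape[0] - r]
        b = phase[r:]
        s2.append(float(np.mean(a & b)))
    return np.array(s2)

def reconstruction_error(original, reconstructed, max_r=8):
    """Mean squared difference between the descriptors of two binary
    microstructures: the kind of quantity a QCF-style loop minimizes."""
    s2_o = two_point_correlation(original, max_r)
    s2_r = two_point_correlation(reconstructed, max_r)
    return float(np.mean((s2_o - s2_r) ** 2))

rng = np.random.default_rng(0)
micro = rng.random((32, 32, 32)) < 0.3   # synthetic two-phase volume
err_self = reconstruction_error(micro, micro)   # identical volumes -> 0
```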

  • Article type: Journal Article
    Harnessing the remarkable ability of the human brain to recognize and process complex data is a significant challenge for researchers, particularly in the domain of point cloud classification-a technology that aims to replicate the neural structure of the brain for spatial recognition. The initial 3D point cloud data often suffers from noise, sparsity, and disorder, making accurate classification a formidable task, especially when extracting local information features. Therefore, in this study, we propose a novel attention-based end-to-end point cloud downsampling classification method, termed PointAS, which is an experimental algorithm designed to be adaptable to various downstream tasks. PointAS consists of two primary modules: the adaptive sampling module and the attention module. Specifically, the attention module aggregates global features with the input point cloud data, while the adaptive module extracts local features. In the point cloud classification task, our method surpasses existing downsampling methods by a significant margin, allowing for more precise extraction of edge data points to capture overall contour features accurately. The classification accuracy of PointAS consistently exceeds 80% across various sampling ratios, with a remarkable accuracy of 75.37% even at ultra-high sampling ratios. Moreover, our method exhibits robustness in experiments, maintaining classification accuracies of 72.50% or higher under different noise disturbances. Both qualitative and quantitative experiments affirm the efficacy of our approach in the sampling classification task, providing researchers with a more accurate method to identify and classify neurons, synapses, and other structures, thereby promoting a deeper understanding of the nervous system.
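The selection step of score-based downsampling can be sketched as follows. The scores here are a hand-crafted stand-in (distance from the centroid, so contour points score high), not the learned attention module of PointAS.

```python
import numpy as np

def attention_downsample(points, scores, k):
    """Keep the k points with the highest attention scores.

    In PointAS the scores come from a learned attention module; here
    they are supplied directly, so only the selection step is shown."""
    idx = np.argsort(scores)[::-1][:k]   # indices of the k largest scores
    return points[idx], idx

rng = np.random.default_rng(1)
pts = rng.normal(size=(1024, 3))
# Stand-in "attention": distance from the centroid favors edge points.
scores = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
kept, idx = attention_downsample(pts, scores, k=256)
```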

  • Article type: Journal Article
    Fusing multiple sensor perceptions, specifically LiDAR and camera, is a prevalent method for target recognition in autonomous driving systems. Traditional object detection algorithms are limited by the sparse nature of LiDAR point clouds, resulting in poor fusion performance, especially for detecting small and distant targets. In this paper, a multi-task parallel neural network based on the Transformer is constructed to simultaneously perform depth completion and object detection. The loss functions are redesigned to reduce environmental noise in depth completion, and a new fusion module is designed to enhance the network's perception of the foreground and background. The network leverages the correlation between RGB pixels for depth completion, completing the LiDAR point cloud and addressing the mismatch between sparse LiDAR features and dense pixel features. Subsequently, we extract depth map features and effectively fuse them with RGB features, fully utilizing the depth feature differences between foreground and background to enhance object detection performance, especially for challenging targets. Compared to the baseline network, improvements of 4.78%, 8.93%, and 15.54% are achieved in the difficult indicators for cars, pedestrians, and cyclists, respectively. Experimental results also demonstrate that the network achieves a speed of 38 fps, validating the efficiency and feasibility of the proposed method.
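As a rough illustration of what depth completion does, the sketch below densifies a sparse depth map by copying each missing pixel from its nearest valid pixel. The paper's completion is learned and RGB-guided; this brute-force fill is only a stand-in on a toy 8x8 grid.

```python
import numpy as np

def complete_depth(sparse, valid):
    """Fill missing depth pixels with the value of the nearest valid
    pixel (brute force; a stand-in for learned, RGB-guided completion)."""
    h, w = sparse.shape
    ys, xs = np.nonzero(valid)           # coordinates of valid samples
    out = sparse.copy()
    for i in range(h):
        for j in range(w):
            if not valid[i, j]:
                d2 = (ys - i) ** 2 + (xs - j) ** 2
                nearest = d2.argmin()
                out[i, j] = sparse[ys[nearest], xs[nearest]]
    return out

depth = np.zeros((8, 8))
mask = np.zeros((8, 8), dtype=bool)
depth[2, 2], mask[2, 2] = 5.0, True      # two sparse LiDAR returns
depth[6, 6], mask[6, 6] = 9.0, True
dense = complete_depth(depth, mask)      # every pixel now has a depth
```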

  • Article type: Journal Article
    With the increasing scale of deep-sea oil exploration and drilling platforms, the assessment, maintenance, and optimization of marine structures have become crucial. Traditional detection and manual measurement methods are inadequate for meeting these demands, but three-dimensional laser scanning technology offers a promising solution. However, the complexity of the marine environment, including waves and wind, often leads to problematic point cloud data characterized by noise points and redundancy. To address this challenge, this paper proposes a method that combines K-nearest-neighbor filtering with hyperbolic-function-based weighted hybrid filtering. The experimental results demonstrate the exceptional performance of the algorithm in processing point cloud data from offshore oil and gas platforms. The method improves noise point filtering efficiency by approximately 11% and decreases the total error by 0.6 percentage points compared to existing technologies. This method not only accurately handles anomalies in high-density areas but also removes noise while preserving important details. Furthermore, the research method presented in this paper is particularly suited for processing large point cloud data in complex marine environments. It enhances data accuracy and optimizes the three-dimensional reconstruction of offshore oil and gas platforms, providing reliable dimensional information for land-based prefabrication of these platforms.
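A minimal sketch of the filtering idea, assuming a standard KNN statistical outlier criterion and a tanh-based down-weighting. The paper's exact hyperbolic weighting formula is not given here, so this is an interpretation, not the published algorithm.

```python
import numpy as np

def knn_outlier_filter(points, k=8, alpha=2.0):
    """Keep a point if its mean distance to its k nearest neighbours is
    within alpha standard deviations of the global mean.  The tanh
    down-weighting is one plausible reading of a hyperbolic-function
    weighting, not the paper's exact formula."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # column 0 of the sorted rows is the zero self-distance; skip it
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    mu, sigma = knn_mean.mean(), knn_mean.std()
    weights = 1.0 - np.tanh(np.maximum(knn_mean - mu, 0.0) / (sigma + 1e-12))
    keep = knn_mean <= mu + alpha * sigma
    return points[keep], weights

rng = np.random.default_rng(2)
cloud = rng.normal(size=(200, 3))
cloud = np.vstack([cloud, [[50.0, 50.0, 50.0]]])  # one gross outlier
filtered, w = knn_outlier_filter(cloud)           # outlier removed
```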

  • Article type: Journal Article
    Head pose estimation serves various applications, such as gaze estimation, driver fatigue detection, and virtual reality. Nonetheless, achieving precise and efficient predictions remains challenging owing to the reliance on singular data sources. Therefore, this study introduces a technique involving multimodal feature fusion to elevate head pose estimation accuracy. The proposed method amalgamates data derived from diverse sources, including RGB and depth images, to construct a comprehensive three-dimensional representation of the head, commonly referred to as a point cloud. The noteworthy innovations of this method encompass a residual multilayer perceptron structure within PointNet, designed to tackle gradient-related challenges, along with spatial self-attention mechanisms aimed at noise reduction. The enhanced PointNet and ResNet networks are utilized to extract features from both point clouds and images. These extracted features undergo fusion. Furthermore, the incorporation of a scoring module strengthens robustness, particularly in scenarios involving facial occlusion. This is achieved by preserving features from the highest-scoring point cloud. Additionally, a prediction module is employed, combining classification and regression methodologies to accurately estimate head poses. The proposed method improves the accuracy and robustness of head pose estimation, especially in cases involving facial obstructions. These advancements are substantiated by experiments conducted using the BIWI dataset, demonstrating the superiority of this method over existing techniques.
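The residual multilayer perceptron structure can be sketched as a single block with an identity skip path, which is what lets gradients bypass the nonlinearity. The weights and feature sizes below are random stand-ins, not the paper's trained network.

```python
import numpy as np

def residual_mlp_block(x, w1, b1, w2, b2):
    """Residual MLP block: y = x + relu(x @ W1 + b1) @ W2 + b2.
    The identity path keeps gradients flowing even if the MLP branch
    saturates, which is the gradient-related fix the paper describes."""
    h = np.maximum(x @ w1 + b1, 0.0)   # ReLU branch
    return x + h @ w2 + b2             # skip connection

rng = np.random.default_rng(3)
d = 64
x = rng.normal(size=(128, d))          # 128 points, 64-dim features
w1 = rng.normal(size=(d, d)) * 0.01
w2 = rng.normal(size=(d, d)) * 0.01
b1, b2 = np.zeros(d), np.zeros(d)
y = residual_mlp_block(x, w1, b1, w2, b2)
```

With all-zero weights the block reduces to the identity, which is a quick sanity check on the skip path.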

  • Article type: Journal Article
    This study proposes a novel hybrid simulation technique for analyzing structural deformation and stress using light detection and ranging (LiDAR)-scanned point cloud data (PCD) and polynomial regression processing. The method estimates the edge and corner points of the deformed structure from the PCD. These are transformed into Dirichlet boundary conditions for the numerical simulation using the particle difference method (PDM), which utilizes nodes only, based on the strong formulation, and is advantageous for handling essential boundaries and nodal rearrangement, including node generation and deletion between analysis steps. Unlike previous studies, which relied on digital images with attached targets, this research uses PCD acquired through LiDAR scanning during the loading process without any target. Essential boundary condition implementation naturally builds a boundary value problem for the PDM simulation. The developed hybrid simulation technique was validated through an elastic beam problem and a three-point bending test on a rubber beam. The results were compared with those of ANSYS analysis, showing that the technique accurately approximates the deformed edge shape, leading to accurate stress calculations. The accuracy improved when using a linear strain model and increasing the number of PDM model nodes. Additionally, the error that occurred during PCD processing and edge point extraction was affected by the order of the polynomial regression equation. The simulation technique offers advantages in cases where linking numerical analysis with digital images is challenging and when direct mechanical gauge measurement is difficult. In addition, it has potential applications in structural health monitoring and smart construction involving machine learning techniques.
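The edge-smoothing step can be illustrated with ordinary polynomial regression: fit noisy edge points, then evaluate the fitted polynomial to obtain the boundary values handed to the solver. The deflection curve, noise level, and polynomial order below are synthetic assumptions, not the paper's data.

```python
import numpy as np

# Synthetic "edge" extracted from a scan: a parabolic beam deflection
# with measurement noise, standing in for LiDAR edge points.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 50)
true_edge = 0.05 * x * (1.0 - x)
noisy_edge = true_edge + rng.normal(scale=1e-3, size=x.size)

# Polynomial regression smooths the noise before the values are imposed
# as Dirichlet boundary conditions; the order is chosen to match shape.
coeffs = np.polyfit(x, noisy_edge, deg=2)
smooth_edge = np.polyval(coeffs, x)
rms = float(np.sqrt(np.mean((smooth_edge - true_edge) ** 2)))
```

The fitted curve recovers the underlying edge well below the noise level, which is why the regression order matters for the downstream stress accuracy.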

  • Article type: Journal Article
    The main factor affecting beef quality, consumer satisfaction, and purchase decisions is beef tenderness. In this study, a rapid nondestructive testing method for beef tenderness based on airflow pressure combined with structured light 3D vision technology was proposed. The structured light 3D camera was used to scan the 3D point cloud deformation information of the beef surface after the airflow acted on it for 1.8 s. Six deformation characteristics and three point cloud characteristics of the beef surface depression region were obtained by using denoising, point cloud rotation, point cloud segmentation, point cloud downsampling, alphaShape, and other algorithms. A total of nine characteristics were mainly concentrated in the first five principal components (PCs). Therefore, the first five PCs were put into three different models. The results showed that the Extreme Learning Machine (ELM) model had the highest prediction accuracy for beef shear force, with a root mean square error of prediction (RMSEP) of 11.1389 and a correlation coefficient (R) of 0.8356. In addition, the correct classification accuracy of the ELM model for tender beef achieved 92.96%. The overall classification accuracy reached 93.33%. Consequently, the proposed methods and technology can be applied for beef tenderness detection.
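An ELM regressor like the one used for shear-force prediction can be sketched in a few lines: a fixed random hidden layer and a closed-form least-squares readout, with no backpropagation. The synthetic data, dimensions, and hidden-layer size are assumptions, not the paper's dataset or settings.

```python
import numpy as np

def elm_train(X, y, n_hidden=64, seed=0):
    """Extreme Learning Machine: random hidden weights stay fixed;
    only the output weights beta are solved by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                      # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 5))   # stand-in for the five PCs in the paper
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=300)
W, b, beta = elm_train(X, y)
rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```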

  • Article type: Journal Article
    Currently, three-dimensional (3D) laser-scanned point clouds have been broadly applied in many important fields, such as non-contact measurements and reverse engineering. However, it is a huge challenge to efficiently and precisely extract the boundary features of unorganized point cloud data with strong randomness and distinct uncertainty. Therefore, a novel type of boundary extraction method will be developed based on concurrent Delaunay triangular meshes (CDTMs), which adds the vertex angles of all CDTMs around a common data point together as an evaluation index to judge whether this targeted point will appear at boundary regions. Based on statistical analyses of the CDTM numbers of every data point, another new type of CDTM-based boundary extraction method will be further improved by filtering out most of the potential non-edge points in advance. Then these two CDTM-based methods and the popular α-shape method will be employed in conducting boundary extractions on several point cloud datasets to compare and discuss their extraction accuracies and time consumption in detail. Finally, all obtained results strongly demonstrate that both CDTM-based methods present superior accuracy and strong robustness in extracting the boundary features of various unorganized point clouds, but the statistically improved version can greatly reduce time consumption.
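The vertex-angle-sum criterion can be demonstrated on a toy triangulation (a unit square with a center point): an interior point's concurrent triangle angles sum to 2π, while a boundary point's sum falls short. The triangulation here is supplied by hand rather than computed, so this shows only the boundary test itself, not the mesh construction.

```python
import numpy as np

def angle_at(p, q, r):
    """Interior angle of triangle pqr at vertex p."""
    u, v = q - p, r - p
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def boundary_points(points, triangles, tol=1e-6):
    """Sum the angles of all concurrent triangles at each vertex; a sum
    noticeably below 2*pi marks a boundary point."""
    angle_sum = np.zeros(len(points))
    for a, b, c in triangles:
        angle_sum[a] += angle_at(points[a], points[b], points[c])
        angle_sum[b] += angle_at(points[b], points[a], points[c])
        angle_sum[c] += angle_at(points[c], points[a], points[b])
    return angle_sum < 2.0 * np.pi - tol

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]  # fan around center
is_boundary = boundary_points(pts, tris)  # corners True, center False
```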

  • Article type: Journal Article
    Automated guided vehicles are widely used in warehousing environments for automated pallet handling, which is one of the fundamental components of intelligent logistics systems. Pallet detection is a critical technology for automated guided vehicles, which directly affects production efficiency. A novel pallet detection method for automated guided vehicles based on point cloud data is proposed, which consists of five modules: point cloud preprocessing, key point extraction, feature description, surface matching, and point cloud registration. The proposed method combines the color with the geometric features of the pallet point cloud and constructs a new Adaptive Color Fast Point Feature Histogram (ACFPFH) feature descriptor by selecting the optimal neighborhood adaptively. In addition, a new surface matching method called the Bidirectional Nearest Neighbor Distance Ratio-Approximate Congruent Triangle Neighborhood (BNNDR-ACTN) is proposed. The proposed method overcomes the problems of current methods, such as low efficiency, poor robustness, arbitrary parameter selection, and high time cost. To verify the performance, the proposed method is compared with the traditional and modified Iterative Closest Point (ICP) methods in two real-world cases. The results show that the Root Mean Square Error (RMSE) is reduced to 0.009 and the running time is reduced to 0.989 s, which demonstrates that the proposed method has faster registration speed while maintaining higher registration accuracy.
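The bidirectional ratio-test matching suggested by the name BNNDR can be sketched as below. The descriptors are random stand-ins (sized like 33-bin FPFH histograms) and the triangle-neighborhood verification step is omitted, so this is an interpretation rather than the paper's method.

```python
import numpy as np

def bnndr_match(desc_a, desc_b, ratio=0.8):
    """Accept a pair only if the Lowe-style nearest-neighbour distance
    ratio test passes in both directions and the two assignments agree."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    matches = []
    for i in range(len(desc_a)):
        order = np.argsort(d[i])
        j = order[0]
        if d[i, j] >= ratio * d[i, order[1]]:
            continue                    # ratio test a -> b failed
        back = np.argsort(d[:, j])
        if back[0] == i and d[back[0], j] < ratio * d[back[1], j]:
            matches.append((i, j))      # agrees in both directions
    return matches

rng = np.random.default_rng(6)
desc_a = rng.normal(size=(20, 33))                       # FPFH-like bins
desc_b = desc_a + 0.01 * rng.normal(size=desc_a.shape)   # perturbed copy
pairs = bnndr_match(desc_a, desc_b)   # recovers the identity matching
```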

  • Article type: Journal Article
    Volumetric crystal structure indexing and orientation mapping are key data processing steps for virtually any quantitative study of spatial correlations between the local chemical composition features and the microstructure of a material. For electron and X-ray diffraction methods it is possible to develop indexing tools which compare measured and analytically computed patterns to decode the structure and relative orientation within local regions of interest. Consequently, a number of numerically efficient and automated software tools exist to solve the above characterization tasks. For atom-probe tomography (APT) experiments, however, the strategy of making comparisons between measured and analytically computed patterns is less robust because many APT data sets contain substantial noise. Given that sufficiently general predictive models for such noise remain elusive, crystallography tools for APT face several limitations: their robustness to noise is limited, and therefore so too is their capability to identify and distinguish different crystal structures and orientations. In addition, the tools are sequential and demand substantial manual interaction. In combination, this makes robust uncertainty quantification with automated high-throughput studies of the latent crystallographic information a difficult task with APT data. To improve the situation, the existing methods are reviewed and how they link to the methods currently used by the electron and X-ray diffraction communities is discussed. As a result of this, some of the APT methods are modified to yield more robust descriptors of the atomic arrangement. Also reported is how this enables the development of an open-source software tool for strong scaling and automated identification of a crystal structure, and the mapping of crystal orientation in nanocrystalline APT data sets with multiple phases.
