Keywords: 3D segmentation; LiDAR; sensor data annotation; deep learning; intensity rendering; object detection

Source: DOI:10.3390/s24144475   PDF (PubMed)

Abstract:
In the context of LiDAR sensor-based autonomous vehicles, segmentation networks play a crucial role in accurately identifying and classifying objects. However, discrepancies between the types of LiDAR sensors used for training the network and those deployed in real-world driving environments can lead to performance degradation due to differences in the input tensor attributes, such as x, y, and z coordinates, and intensity. To address this issue, we propose novel intensity rendering and data interpolation techniques. Our study evaluates the effectiveness of these methods by applying them to object tracking in real-world scenarios. The proposed solutions aim to harmonize the differences between sensor data, thereby enhancing the performance and reliability of deep learning networks for autonomous vehicle perception systems. Additionally, our algorithms prevent performance degradation, even when different types of sensors are used for the training data and real-world applications. This approach allows for the use of publicly available open datasets without the need to spend extensive time on dataset construction and annotation using the actual sensors deployed, thus significantly saving time and resources. When applying the proposed methods, we observed an approximate 20% improvement in mIoU performance compared to scenarios without these enhancements.
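The intensity-rendering idea above can be illustrated with a minimal sketch. The abstract does not specify the exact rendering algorithm, so the snippet below is a hypothetical example of one simple way to harmonize intensity attributes between two LiDAR sensors: linearly remapping the source sensor's intensity distribution onto the reference sensor's percentile range. The function name `render_intensity` and the percentile-based approach are assumptions for illustration, not the paper's method.

```python
import numpy as np

def render_intensity(src_intensity, ref_intensity, low_pct=1.0, high_pct=99.0):
    """Remap one sensor's intensity distribution onto another's range.

    Hypothetical illustration: rescales the source values so that their
    [low_pct, high_pct] percentile band matches the reference sensor's
    band, clipping outliers. The paper's actual rendering method may differ.
    """
    # Percentile bands of the source and reference intensity distributions.
    s_lo, s_hi = np.percentile(src_intensity, [low_pct, high_pct])
    r_lo, r_hi = np.percentile(ref_intensity, [low_pct, high_pct])
    # Normalize the source band to [0, 1], clipping values outside it.
    scaled = np.clip((src_intensity - s_lo) / max(s_hi - s_lo, 1e-9), 0.0, 1.0)
    # Stretch into the reference sensor's intensity band.
    return scaled * (r_hi - r_lo) + r_lo
```

Applied before inference, such a remapping lets a network trained on an open dataset (e.g., with 0–255 intensity) consume input from a deployed sensor that reports a different intensity scale.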