Keywords: adaptive Unscented Kalman Filter (UKF); binocular vision; depth propagation; monocular vision; robust estimation; visual ranging

Source: DOI: 10.3390/s24134178 (PDF available via PubMed)

Abstract:
Visual ranging technology holds great promise in various fields such as unmanned driving and robot navigation. However, complex dynamic environments pose significant challenges to its accuracy and robustness. Existing monocular visual ranging methods are susceptible to scale uncertainty, while binocular visual ranging is sensitive to changes in lighting and texture. To overcome the limitations of single visual ranging, this paper proposes a fusion method for monocular and binocular visual ranging based on an adaptive Unscented Kalman Filter (AUKF). The proposed method first utilizes a monocular camera to estimate the initial distance based on the pixel size, and then employs the triangulation principle with a binocular camera to obtain accurate depth. Building upon this foundation, a probabilistic fusion framework is constructed to dynamically fuse monocular and binocular ranging using the AUKF. The AUKF employs nonlinear recursive filtering to estimate the optimal distance and its uncertainty, and introduces an adaptive noise-adjustment mechanism to dynamically update the observation noise based on fusion residuals, thus suppressing outlier interference. Additionally, an adaptive fusion strategy based on depth hypothesis propagation is designed to autonomously adjust the noise prior of the AUKF by combining current environmental features and historical measurement information, further enhancing the algorithm's adaptability to complex scenes. To validate the effectiveness of the proposed method, comprehensive evaluations were conducted on large-scale public datasets such as KITTI and complex scene data collected in real-world scenarios. The quantitative results demonstrate that the fusion method significantly improves the overall accuracy and stability of visual ranging, reducing the average relative error within an 8 m range by 43.1% and 40.9% compared to monocular and binocular ranging, respectively. Compared to traditional methods, the proposed method significantly enhances ranging accuracy and exhibits stronger robustness against factors such as lighting changes and dynamic targets. The sensitivity analysis further confirmed the effectiveness of the AUKF framework and adaptive noise strategy. In summary, the proposed fusion method effectively combines the advantages of monocular and binocular vision, significantly expanding the application range of visual ranging technology in intelligent driving, robotics, and other fields while ensuring accuracy, robustness, and real-time performance.
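To make the ranging and fusion pipeline described above concrete, the sketch below illustrates the general idea in Python: a pinhole-model monocular estimate from a known object height, a stereo triangulation estimate from disparity, and a scalar recursive filter that inflates the observation noise when the fusion residual is large. All calibration values, noise settings, and function names here are hypothetical, and the scalar Kalman-style update is only a simplified stand-in for the paper's adaptive UKF, which propagates sigma points through a nonlinear model.

```python
# Illustrative 1D sketch of monocular + binocular range fusion.
# All parameters (focal length, baseline, object height, noise values) are
# hypothetical; this is not the authors' implementation.

def monocular_distance(f_px, real_height_m, pixel_height_px):
    """Pinhole-model range from a known object size: Z = f * H / h."""
    return f_px * real_height_m / pixel_height_px


def binocular_distance(f_px, baseline_m, disparity_px):
    """Stereo triangulation: Z = f * B / d."""
    return f_px * baseline_m / disparity_px


class AdaptiveFusionFilter:
    """Scalar recursive filter that fuses monocular and stereo range
    observations and scales the observation noise up when the fusion
    residual is improbably large (simplified stand-in for the AUKF)."""

    def __init__(self, z0, p0=1.0, q=0.05, r_mono=0.5, r_stereo=0.1):
        self.z = z0                                   # fused distance estimate (m)
        self.p = p0                                   # estimate variance
        self.q = q                                    # process noise between frames
        self.r = {"mono": r_mono, "stereo": r_stereo}  # base observation noise

    def update(self, measurement, source, gate=3.0):
        # Predict step: constant-distance model with added process noise.
        self.p += self.q

        # Adaptive noise: inflate R when the innovation exceeds the gate,
        # so outliers (e.g. bad disparity under poor texture) are down-weighted.
        r = self.r[source]
        innovation = measurement - self.z
        if innovation ** 2 > gate ** 2 * (self.p + r):
            r *= innovation ** 2 / (gate ** 2 * (self.p + r))

        # Standard Kalman correction with the (possibly inflated) noise.
        k = self.p / (self.p + r)
        self.z += k * innovation
        self.p *= (1.0 - k)
        return self.z


if __name__ == "__main__":
    f_px, baseline_m, obj_height_m = 700.0, 0.54, 1.5   # hypothetical calibration
    z_mono = monocular_distance(f_px, obj_height_m, pixel_height_px=140.0)
    z_stereo = binocular_distance(f_px, baseline_m, disparity_px=50.0)

    fusion = AdaptiveFusionFilter(z0=z_mono)
    fused = fusion.update(z_stereo, "stereo")
    print(f"mono {z_mono:.2f} m, stereo {z_stereo:.2f} m, fused {fused:.2f} m")
```

The residual gate mirrors the adaptive noise-adjustment idea in the abstract: measurements whose innovation is statistically implausible are down-weighted rather than discarded outright.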