motion estimation

  • Article type: Journal Article
    A number of hardware-based and software-based strategies have been suggested to eliminate motion artifacts and improve 3D optical coherence tomography (OCT) image quality. However, hardware-based strategies must employ additional hardware to record motion-compensation information, and many software-based strategies require additional scans for motion correction at the expense of longer acquisition time. To address this issue, we propose a motion-artifact correction and motion estimation method for OCT volumetric imaging of the anterior segment that requires neither additional hardware nor redundant scanning. Motion correction with subpixel accuracy has been demonstrated experimentally for in vivo 3D-OCT. Moreover, physiological information about the imaged subject, including the respiratory curve and respiratory rate, has been experimentally extracted using the proposed method. The proposed method offers a powerful tool for scientific research and clinical diagnosis in ophthalmology and may be further extended to other biomedical volumetric imaging applications.
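
    The abstract does not disclose the registration algorithm itself. As a generic illustration of scan-free, subpixel-capable motion estimation between adjacent frames of a volume, the sketch below estimates a translational shift by phase correlation with a parabolic peak refinement. The function name and the refinement scheme are illustrative assumptions, not the authors' method.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the (row, col) translation of `mov` relative to `ref`.

    Generic phase correlation between two 2D frames; a 1D parabolic fit
    around the correlation peak provides a simple subpixel refinement.
    """
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    R /= np.abs(R) + 1e-12                       # normalized cross-power spectrum
    corr = np.fft.ifft2(R).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))

    shift = peak.astype(float)
    for axis, size in enumerate(corr.shape):     # parabolic subpixel fit per axis
        idx = list(peak)
        idx[axis] = (peak[axis] - 1) % size
        c_prev = corr[tuple(idx)]
        idx[axis] = (peak[axis] + 1) % size
        c_next = corr[tuple(idx)]
        c_peak = corr[tuple(peak)]
        denom = c_prev - 2 * c_peak + c_next
        if denom != 0:
            shift[axis] += 0.5 * (c_prev - c_next) / denom

    # Wrap shifts larger than half the frame size back to negative values
    wrap = shift > np.array(corr.shape) / 2
    shift[wrap] -= np.array(corr.shape)[wrap]
    return shift
```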

  • Article type: Journal Article
    Subject motion is a long-standing problem in magnetic resonance imaging (MRI) that can seriously deteriorate image quality. Various prospective and retrospective methods have been proposed for MRI motion correction, among which deep learning approaches have achieved state-of-the-art performance. This survey paper aims to provide a comprehensive review of deep learning-based MRI motion correction methods. Neural networks used for motion artifact reduction and motion estimation in the image domain or frequency domain are detailed. Furthermore, beyond motion-corrected MRI reconstruction, how the estimated motion is applied in other downstream tasks is briefly introduced, with the aim of strengthening the interaction between different research areas. Finally, we identify current limitations and point out future directions for deep learning-based MRI motion correction.
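
    For context on why motion corrupts MRI, a minimal NumPy sketch of the underlying mechanism: rigid in-plane translation during acquisition multiplies each acquired k-space line by a different linear phase ramp (Fourier shift theorem), which is what image-domain and k-space correction networks try to undo. The per-row shift model, the drift example, and all values below are illustrative assumptions.

```python
import numpy as np

def motion_corrupted_kspace(image, shifts_per_row):
    """Simulate 2D Cartesian k-space where each phase-encode row was acquired
    while the object sat at a different (dy, dx) translation.

    image: 2D array (motion-free reference).
    shifts_per_row: (n_rows, 2) array of per-row translations in pixels.
    """
    ny, nx = image.shape
    ky = np.fft.fftfreq(ny)[:, None]             # cycles per sample, row direction
    kx = np.fft.fftfreq(nx)[None, :]
    clean_k = np.fft.fft2(image)
    corrupted = np.zeros_like(clean_k)
    for row in range(ny):
        dy, dx = shifts_per_row[row]
        ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))   # Fourier shift theorem
        corrupted[row] = (clean_k * ramp)[row]   # only this row is "acquired" now
    return corrupted

# Example: a slow drift of up to 3 pixels over the scan produces ghosting
img = np.random.rand(128, 128)
drift = np.linspace(0, 3, 128)
shifts = np.stack([drift, 0.5 * drift], axis=1)
ghosted = np.abs(np.fft.ifft2(motion_corrupted_kspace(img, shifts)))
```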

  • Article type: Journal Article
    Motion estimation is a major issue in applications of Unmanned Aerial Vehicles (UAVs). This paper proposes a complete solution to this issue using information from an Inertial Measurement Unit (IMU) and a monocular camera. The solution includes two steps: visual localization and multisensory data fusion. In this paper, attitude information provided by the IMU is used as a parameter in the Kalman equations, which differs from purely visual localization methods. The position of the system is then obtained and used as the observation in data fusion. Considering the different update frequencies of the sensors and the delay of the visual observation, a multi-rate, delay-compensated optimal estimator based on the Kalman filter is presented, which fuses the information and estimates the 3D position as well as the translational velocity. Additionally, the estimator was modified to minimize the computational burden so that it could run onboard in real time. The performance of the overall solution was assessed in field experiments on a quadrotor system and compared with the estimation results of other methods as well as ground truth data. The results illustrate the effectiveness of the proposed method.
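
    The abstract describes a Kalman-filter fusion of high-rate IMU predictions with delayed, lower-rate visual position fixes. The sketch below shows only the generic constant-velocity Kalman filter core (predict every tick, update when a visual position arrives); the paper's multi-rate delay compensation is not reproduced, and the state layout and noise values are assumptions.

```python
import numpy as np

dt = 0.02                                      # assumed prediction period (s)
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                     # state: [x, y, z, vx, vy, vz]
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # visual fix observes position only
Q = 1e-3 * np.eye(6)                           # process noise (assumed)
R = 1e-2 * np.eye(3)                           # visual measurement noise (assumed)

def kf_step(x, P, z=None):
    """One predict step, plus an update if a visual position `z` is available."""
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P
    return x, P

# Example: predict at 50 Hz, fuse a camera position fix every 10th step
x, P = np.zeros(6), np.eye(6)
for k in range(100):
    z = np.array([1.0, 2.0, 0.5]) if k % 10 == 9 else None
    x, P = kf_step(x, P, z)
```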

  • Article type: Journal Article
    Motion capture systems have enormously benefited research into human-computer interaction in the aerospace field. Given the high cost and susceptibility to lighting conditions of optical motion capture systems, as well as the drift in IMU sensors, this paper employs a fusion approach with low-cost wearable sensors for hybrid upper-limb motion tracking. We propose a novel algorithm that combines a fourth-order Runge-Kutta (RK4) Madgwick complementary orientation filter and a Kalman filter for motion estimation through the data fusion of an inertial measurement unit (IMU) and an ultra-wideband (UWB) system. The Madgwick RK4 orientation filter is used to compensate for gyroscope drift through the optimal fusion of a magnetic, angular rate, and gravity (MARG) system, and it requires no knowledge of the noise distribution for implementation. Then, considering the error distribution provided by the UWB system, we employ a Kalman filter to estimate and fuse the UWB measurements to further reduce the drift error. Adopting a cube distribution of four anchors, the drift-free position obtained by the UWB localization Kalman filter is fused with the position calculated by the IMU. The proposed algorithm has been tested with various movements and demonstrated an average decrease in RMSE of 1.2 cm from the IMU-only method to the IMU/UWB fusion method. The experimental results demonstrate the high feasibility and stability of the proposed algorithm for accurately tracking the movements of human upper limbs.
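
    As a simplified illustration of the drift-compensation idea behind complementary orientation filtering, the sketch below blends gyroscope integration (fast but drifting) with the accelerometer's gravity direction (noisy but drift-free) for roll and pitch. It is not Madgwick's gradient-descent MARG filter or its RK4 variant; the blending constant, axis conventions, and sample values are assumptions.

```python
import numpy as np

def complementary_tilt(roll, pitch, gyro, acc, dt, alpha=0.98):
    """One step of a basic complementary filter for roll/pitch (radians).

    gyro: (gx, gy) body angular rates in rad/s; acc: (ax, ay, az) in m/s^2.
    """
    # Gyroscope integration: accurate over short intervals, drifts over time
    roll_g = roll + gyro[0] * dt
    pitch_g = pitch + gyro[1] * dt
    # Tilt implied by the measured gravity direction: drift-free but noisy
    roll_a = np.arctan2(acc[1], acc[2])
    pitch_a = np.arctan2(-acc[0], np.hypot(acc[1], acc[2]))
    # Blend: trust the gyro at high frequency, the accelerometer at low frequency
    return (alpha * roll_g + (1 - alpha) * roll_a,
            alpha * pitch_g + (1 - alpha) * pitch_a)

# Example usage with made-up samples at 100 Hz
roll = pitch = 0.0
for _ in range(100):
    roll, pitch = complementary_tilt(roll, pitch,
                                     gyro=(0.01, -0.02), acc=(0.0, 0.3, 9.7),
                                     dt=0.01)
```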

  • Article type: Journal Article
    BACKGROUND: Ultrasonic echocardiography is commonly used for monitoring myocardial dysfunction. However, it has limitations such as the poor quality of echocardiography images and the subjective judgment of physicians.
    METHODS: In this paper, a computational model based on optical flow tracking of echocardiograms is proposed for quantitative estimation of segmental wall motion. To improve the accuracy of optical flow estimation, a confidence-optimized multiresolution (COM) optical flow model is proposed to reduce the estimation errors caused by large-amplitude myocardial motion and by "shadows" and other image quality problems. In addition, motion vector decomposition and dynamic tracking of the ventricular region of interest are used to extract information about myocardial segmental motion. The proposed method was validated for myocardial motion analysis using simulated images and 50 clinical cases (25 patients and 25 healthy volunteers).
    RESULTS: The results demonstrate that the proposed method tracks the motion of myocardial segments well and reduces the optical flow estimation errors caused by low-quality echocardiogram images.
    CONCLUSIONS: The proposed method improves the accuracy of motion estimation for the cardiac ventricular wall.
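
    The COM model itself is not detailed in the abstract; as a generic example of the multiresolution (pyramidal) dense optical flow idea it builds on, the sketch below uses OpenCV's Farneback estimator on two consecutive grayscale echo frames and summarizes the motion inside a region of interest. File names, ROI bounds, and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

# Two consecutive grayscale echocardiogram frames (placeholder file names)
prev_frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense pyramidal (multiresolution) optical flow; parameters are illustrative
flow = cv2.calcOpticalFlowFarneback(
    prev_frame, next_frame, None,
    pyr_scale=0.5, levels=4, winsize=21,
    iterations=3, poly_n=7, poly_sigma=1.5, flags=0)

# flow[y, x] = (dx, dy) per-pixel displacement; summarize motion in an ROI
roi = flow[100:200, 150:250].reshape(-1, 2)
print("mean ROI displacement (px):", roi.mean(axis=0))
```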

  • Article type: Journal Article
    BACKGROUND: High-intensity focused ultrasound (HIFU) is currently used for the treatment of various diseases, but it still lacks a reliable technique for accurately placing its "energy blade" on diseased targets at the preoperative stage. Acoustic radiation force imaging (ARFI) was recently introduced to tackle this issue, but its applicability and limitations are not yet clear.
    OBJECTIVE: The aim of this study was to evaluate the performance of the ARFI method in predicting the HIFU focal location at the preoperative stage.
    METHODS: A point spread function (PSF) localization method, borrowed from the field of ultrasound super-resolution, was used to validate the core autocorrelation-based motion estimation algorithm in the ARFI procedure. The accuracy of the ARFI method for estimating the HIFU focus was tested in in vitro and ex vivo experiments with a clinically equivalent HIFU system. The estimated focal locations were compared with the locations of the damaged areas observed after the test objects were cut open.
    RESULTS: The results showed that PSF localization was able to serve as a validation method for motion detection only when the tissue displacement was large. With the ARFI method, the location of the HIFU focus could be accurately predicted preoperatively from a 2D motion map, with axial spatial errors of less than 0.5 mm. However, the derived 2D motion maps were only valuable when the acoustic stimulation in ARFI was strong enough, probably because the HIFU focal locations were at large depths and the ultrasound imaging signal had a low signal-to-noise ratio.
    CONCLUSIONS: The ARFI method is indeed an accurate technique for preoperatively predicting the HIFU focus in vitro and ex vivo. If clinical applications are to be considered, particularly in deep tissues, efforts may be needed to improve the capability of the ultrasound motion estimation technique.
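
    The motion estimator named in the abstract is autocorrelation-based; a minimal sketch of the classic Kasai-style phase-shift estimator on complex baseband (IQ) data is shown below. The function, its arguments, and the numerical values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def axial_displacement_kasai(iq_pre, iq_post, f0, c=1540.0):
    """Axial displacement (m) between two complex IQ A-lines.

    iq_pre, iq_post: complex baseband signals before/after the ARF push.
    f0: transmit center frequency in Hz; c: assumed speed of sound in m/s.
    The phase of the inter-frame correlation maps to displacement via
    d = c * phi / (4 * pi * f0) for pulse-echo (two-way) propagation.
    """
    corr = np.sum(iq_post * np.conj(iq_pre))   # ensemble correlation
    phi = np.angle(corr)
    return phi * c / (4.0 * np.pi * f0)

# Example: a synthetic 5 MHz acquisition with a uniform 0.3 rad phase shift
f0 = 5e6
n = np.arange(256)
iq_pre = np.exp(1j * 0.1 * n)
iq_post = iq_pre * np.exp(1j * 0.3)
print(axial_displacement_kasai(iq_pre, iq_post, f0))   # ~7.4e-6 m
```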

  • Article type: Journal Article
    BACKGROUND: Strain analysis provides more thorough spatiotemporal signatures of myocardial contraction, which is helpful for the early detection of cardiac insufficiency. The use of deep learning (DL) to automatically measure myocardial strain from echocardiogram videos has garnered recent attention. However, the development of the key techniques involved, including segmentation and motion estimation, remains a challenge. In this work, we developed a novel DL-based framework for myocardial segmentation and motion estimation to generate strain measurements from echocardiogram videos.
    METHODS: A three-dimensional (3D) convolutional neural network (CNN) was developed for myocardial segmentation, and an optical flow network for motion estimation. The segmentation network was used to define the region of interest (ROI), and the optical flow network was used to estimate pixel motion within the ROI. We performed a model architecture search to identify the optimal base architecture for motion estimation. The final workflow design and the associated hyperparameters are the result of a careful implementation. In addition, we compared the DL model with a traditional speckle tracking algorithm on an independent, external clinical dataset. Each video was measured in a double-blind manner by an ultrasound expert using speckle tracking echocardiography (STE) and by a DL expert using the DL method.
    RESULTS: The DL method successfully performed automatic segmentation, motion estimation, and global longitudinal strain (GLS) measurement in all examinations. The 3D segmentation has better spatiotemporal smoothness, with an average Dice coefficient of 0.82, and performs better on the target frame than previous 2D networks. The best motion estimation network achieved an average end-point error of 0.05 ± 0.03 mm per frame, better than the previously reported state of the art. The DL method showed no significant difference from the traditional method in GLS measurement, with a Spearman correlation of 0.90 (p < 0.001) and a mean bias of -1.2 ± 1.5%.
    CONCLUSIONS: In conclusion, our method exhibits better segmentation and motion estimation performance and demonstrates the feasibility of the DL method for automatic strain analysis. The DL approach helps reduce time consumption and human effort, which holds great promise for translational research and precision medicine efforts.
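
    Downstream of segmentation and motion estimation, the GLS value itself follows from a simple Lagrangian strain definition; the sketch below computes it from a myocardial contour tracked from end-diastole to end-systole. Array shapes and the synthetic example are assumptions for illustration.

```python
import numpy as np

def global_longitudinal_strain(contour_ed, contour_es):
    """GLS (%) from tracked myocardial contour points.

    contour_ed: (N, 2) end-diastolic contour points (e.g. in mm).
    contour_es: the same N points after being tracked to end-systole.
    Lagrangian strain: (L - L0) / L0, negative for systolic shortening.
    """
    def arc_length(pts):
        return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    L0 = arc_length(contour_ed)
    L = arc_length(contour_es)
    return 100.0 * (L - L0) / L0

# Example: a contour that shortens uniformly by 18%
theta = np.linspace(0, np.pi, 50)
ed = np.stack([40 * np.cos(theta), 60 * np.sin(theta)], axis=1)
es = 0.82 * ed
print(global_longitudinal_strain(ed, es))   # -> -18.0
```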

  • Article type: Journal Article
    Purpose. Real-time three-dimensional (3D) magnetic resonance (MR) imaging is challenging because of slow MR signal acquisition, leading to highly under-sampled k-space data. Here, we propose a deep learning-based, k-space-driven deformable registration network (KS-RegNet) for real-time 3D MR imaging. By incorporating prior information, KS-RegNet performs deformable image registration between a fully sampled prior image and on-board images acquired from highly under-sampled k-space data, to generate high-quality on-board images for real-time motion tracking. Methods. KS-RegNet is an end-to-end, unsupervised network consisting of an input data generation block, a subsequent U-Net core block, and following operations to compute data fidelity and regularization losses. The input data consist of a fully sampled, complex-valued prior image and the k-space data of an on-board, real-time MR image (MRI). From the k-space data, the under-sampled real-time MRI is reconstructed by the data generation block and fed into the U-Net core. In addition, to train the U-Net core to learn the under-sampling artifacts, the k-space data of the prior image are intentionally under-sampled using the same readout trajectory as the real-time MRI and reconstructed to serve as an additional input. The U-Net core predicts a deformation vector field that deforms the prior MRI to the on-board real-time MRI. To avoid the adverse effects of quantifying image similarity on artifact-ridden images, the data fidelity loss of the deformation is evaluated directly in k-space. Results. Compared with Elastix and other deep learning network architectures, KS-RegNet demonstrated better and more stable performance. The average (±s.d.) DICE coefficients of KS-RegNet on a cardiac dataset for the 5-, 9-, and 13-spoke k-space acquisitions were 0.884 ± 0.025, 0.889 ± 0.024, and 0.894 ± 0.022, respectively; the corresponding average (±s.d.) center-of-mass errors (COMEs) were 1.21 ± 1.09, 1.29 ± 1.22, and 1.01 ± 0.86 mm, respectively. KS-RegNet also provided the best performance on an abdominal dataset. Conclusion. KS-RegNet allows real-time MRI generation with sub-second latency. It enables potential real-time MR-guided soft-tissue tracking, tumor localization, and radiotherapy plan adaptation.
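
    The distinguishing design point here is that registration quality is scored against the acquired k-space samples rather than the artifact-ridden reconstruction. Below is a simplified NumPy sketch of such a k-space data-fidelity term, using a Cartesian sampling mask for brevity (the paper uses radial spokes); the function name and its arguments are illustrative assumptions.

```python
import numpy as np

def kspace_data_fidelity(warped_prior, measured_kspace, sampling_mask):
    """Mean squared k-space residual over the acquired samples only.

    warped_prior: prior image deformed by the predicted deformation field (2D).
    measured_kspace: acquired k-space values, zero-filled where not sampled.
    sampling_mask: boolean array marking which k-space locations were acquired.
    """
    predicted_k = np.fft.fft2(warped_prior)
    residual = (predicted_k - measured_kspace) * sampling_mask
    return np.sum(np.abs(residual) ** 2) / np.count_nonzero(sampling_mask)

# Example: compare candidate deformations against 25% randomly sampled k-space
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
mask = rng.random((64, 64)) < 0.25
measured = np.fft.fft2(truth) * mask
print(kspace_data_fidelity(truth, measured, mask))                       # ~0
print(kspace_data_fidelity(np.roll(truth, 2, axis=0), measured, mask))   # larger
```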

  • Article type: Journal Article
    OBJECTIVE: Facial repair surgeries (FRS) require accurate, safe, and fast navigation of critical anatomy. The purpose of this paper is to develop a method that directly tracks the patient's position using video data acquired from a single camera, achieving noninvasive, real-time, and highly accurate positioning in FRS.
    METHODS: Our method first performs camera calibration and registers the surface segmented from computed tomography to the patient. Then, a two-step constraint algorithm, comprising a feature local constraint and a distance standard deviation constraint, is used to quickly find the optimal feature-matching pairs. Finally, the movements of the camera and the patient, decomposed from the image motion matrix, are used to track the camera and the patient, respectively.
    RESULTS: The proposed method achieved RMS fusion errors of 1.44 ± 0.35, 1.50 ± 0.15, and 1.63 ± 0.03 mm in skull phantom, cadaver mandible, and human experiments, respectively. These errors were lower than those of the optical tracking system-based method. Additionally, the proposed method can process video streams at up to 24 frames per second, which meets the real-time requirements of FRS.
    CONCLUSIONS: The proposed method does not rely on tracking markers attached to the patient; it can be executed automatically to maintain a correct augmented reality scene and overcomes the decrease in positioning accuracy caused by patient movement during surgery.
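
    The two-step matching constraints and the exact motion-matrix decomposition are specific to this paper. As a generic OpenCV sketch of the same pipeline stage (match features between two frames, then decompose the relative camera motion), the code below uses ORB matching and an essential-matrix decomposition; the file names and intrinsic matrix are placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)      # placeholder frames
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

# Detect and match ORB features between the two frames
orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Decompose the relative motion (rotation + translation direction) from the
# essential matrix estimated on the matched points
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("rotation:\n", R, "\ntranslation direction:", t.ravel())
```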

  • Article type: Journal Article
    This study presents a 2D lidar odometry method based on an ICP (iterative closest point) variant, implemented on a simple and straightforward platform, that achieves real-time, low-drift performance. With a dedicated multi-scale feature extraction procedure, the lidar point cloud information can be utilized at multiple levels, and data association can be accelerated thanks to the multi-scale data structure, yielding robust feature extraction and a fast scan-matching algorithm. First, on a large scale, the lidar point cloud data are classified by curvature into two parts: a smooth collection and a rough collection. Then, on a small scale, noisy and unstable points in the smooth and rough collections are filtered out, and edge points and corner points are extracted. Next, the proposed tangent-vector pairs based on the edge and corner points are used to evaluate the rotation term, which is important for producing a stable solution in motion estimation. We compare our performance with two excellent open-source SLAM algorithms, Cartographer and Hector SLAM, using collected and open-access datasets in structured indoor environments. The results indicate that our method achieves better accuracy.
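
    The tangent-vector-pair rotation evaluation is specific to this paper; for orientation, the sketch below shows the generic closed-form least-squares step that sits inside each ICP iteration, recovering a 2D rotation and translation from already-matched point pairs (finding those correspondences via the multi-scale features is the part the paper focuses on). The function name and example values are illustrative assumptions.

```python
import numpy as np

def best_fit_rigid_2d(src, dst):
    """Least-squares 2D rigid transform (R, t) mapping `src` onto `dst`.

    src, dst: (N, 2) arrays of corresponding points; this is the closed-form
    (SVD / Kabsch) solution used inside each ICP iteration once matches exist.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example: recover a known 10-degree rotation and (0.3, -0.1) translation
rng = np.random.default_rng(1)
src = rng.random((100, 2))
ang = np.deg2rad(10.0)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
dst = src @ R_true.T + np.array([0.3, -0.1])
R_est, t_est = best_fit_rigid_2d(src, dst)
print(np.round(R_est, 3), np.round(t_est, 3))
```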
