millimeter-wave radar

  • Article Type: Journal Article
    This study explored an indoor system for tracking multiple humans and detecting falls, employing three Millimeter-Wave radars from Texas Instruments. Compared to wearable and camera-based methods, Millimeter-Wave radar is not plagued by mobility inconveniences, lighting conditions, or privacy issues. We conducted an initial evaluation of radar characteristics, covering aspects such as interference between radars and coverage area. Then, we established a real-time framework to integrate the signals received from these radars, allowing us to track the position and body status of human targets non-intrusively. Additionally, we introduced innovative strategies, including dynamic Density-Based Spatial Clustering of Applications with Noise (DBSCAN) clustering based on signal SNR levels, a probability matrix for enhanced target tracking, target status prediction for fall detection, and a feedback loop for noise reduction. We conducted an extensive evaluation using over 300 min of data, equating to approximately 360,000 frames. Our prototype system exhibited remarkable performance, achieving a precision of 98.9% for tracking a single target, and 96.5% and 94.0% for tracking two and three targets, respectively. Moreover, in human fall detection, the system demonstrated a high accuracy rate of 96.3%, underscoring its effectiveness in distinguishing falls from other statuses.
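The SNR-dependent DBSCAN idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the SNR-to-radius mapping, all numeric constants, and `min_pts` are invented for the example (low-SNR frames yield sparser point clouds, so the neighborhood radius is widened).

```python
import numpy as np

def dynamic_eps(snr_db, eps_hi=0.3, eps_lo=0.8, snr_ref=20.0):
    # Hypothetical rule: widen the DBSCAN radius as SNR drops.
    # eps_hi applies at/above the reference SNR; eps_lo caps the growth.
    scale = np.clip(snr_ref / max(snr_db, 1e-6), 1.0, eps_lo / eps_hi)
    return eps_hi * scale

def dbscan(points, eps, min_pts=3):
    """Minimal DBSCAN over an (n, d) array; returns one label per point (-1 = noise)."""
    n = len(points)
    labels = np.full(n, -1)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbors = [np.where(dist[i] <= eps)[0] for i in range(n)]  # includes self
    visited = np.zeros(n, bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue                      # skip visited and non-core points
        stack = [i]                       # grow a new cluster from core point i
        visited[i] = True
        labels[i] = cluster
        while stack:
            j = stack.pop()
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster   # claim unlabeled neighbors
                if not visited[k]:
                    visited[k] = True
                    if len(neighbors[k]) >= min_pts:
                        stack.append(k)   # only core points keep expanding
        cluster += 1
    return labels
```

A frame's eps would be chosen per target from its measured SNR, e.g. `dbscan(frame_points, dynamic_eps(frame_snr_db))`.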

  • Article Type: Journal Article
    With the continuous development of automotive intelligence, vehicle occupant detection technology has received increasing attention. Despite various types of research in this field, a simple, reliable, and highly private detection method is lacking. This paper proposes a method for vehicle occupant detection using millimeter-wave radar. Specifically, the paper outlines the system design for vehicle occupant detection using millimeter-wave radar. By collecting the raw signals of FMCW radar and applying Range-FFT and DoA estimation algorithms, a range-azimuth heatmap was generated, visually depicting the current status of people inside the vehicle. Furthermore, utilizing the collected range-azimuth heatmaps of passengers, this paper integrates the Faster R-CNN deep learning network with radar signal processing to identify passenger information. Finally, to test the performance of the detection method proposed in this article, an experimental verification was conducted in a car and the results were compared with those of traditional machine learning algorithms. The findings indicated that the method employed in this experiment achieved higher accuracy, reaching approximately 99%.
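The Range-FFT plus FFT-based DoA pipeline that produces such a range-azimuth heatmap can be sketched for one ideal point target. All radar parameters below (sample rate, chirp slope, antenna count and spacing) are assumptions chosen for the example, not the paper's configuration, and no windowing or clutter removal is applied.

```python
import numpy as np

C = 3e8                  # speed of light, m/s
FS = 10e6                # ADC sample rate, Hz (assumed)
SLOPE = 30e12            # chirp slope, Hz/s (assumed)
N_SAMPLES, N_RX = 256, 8
WAVELENGTH = C / 77e9
D_RX = WAVELENGTH / 2    # half-wavelength receive-antenna spacing

def beat_signal(r, theta_deg):
    """Ideal single-target FMCW beat signal across N_RX receive antennas."""
    t = np.arange(N_SAMPLES) / FS
    fb = 2 * SLOPE * r / C                       # beat frequency encodes range
    rx_phase = (2 * np.pi * D_RX / WAVELENGTH) \
        * np.sin(np.radians(theta_deg)) * np.arange(N_RX)
    return np.exp(2j * np.pi * fb * t)[None, :] * np.exp(1j * rx_phase)[:, None]

def range_azimuth_heatmap(sig, n_angle=64):
    range_fft = np.fft.fft(sig, axis=1)                   # Range-FFT over fast time
    angle_fft = np.fft.fft(range_fft, n=n_angle, axis=0)  # DoA estimate across RX
    return np.abs(np.fft.fftshift(angle_fft, axes=0))     # angle bins x range bins

sig = beat_signal(10.0, 0.0)                  # target at 10 m, boresight
hm = range_azimuth_heatmap(sig)
a_bin, r_bin = np.unravel_index(hm.argmax(), hm.shape)
r_est = r_bin * FS * C / (2 * SLOPE * N_SAMPLES)          # range bin -> metres
```

The heatmap peak lands at the target's range bin and at the zero-angle bin (index `n_angle // 2` after the shift), which is the image-like input a detector such as Faster R-CNN would consume.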

  • Article Type: Journal Article
    Autonomous driving technology is considered the trend of future transportation. Millimeter-wave radar, with its ability for long-distance detection and all-weather operation, is a key sensor for autonomous driving. The development of various technologies in autonomous driving relies on extensive simulation testing, wherein simulating the output of real radar through radar models plays a crucial role. Currently, there are numerous distinctive radar modeling methods. To facilitate the better application and development of radar modeling methods, this study first analyzes the mechanism of radar detection and the interference factors it faces, to clarify the content of modeling and the key factors influencing modeling quality. Then, based on the actual application requirements, key indicators for measuring radar model performance are proposed. Furthermore, a comprehensive introduction is provided to various radar modeling techniques, along with the principles and relevant research progress. The advantages and disadvantages of these modeling methods are evaluated to determine their characteristics. Lastly, considering the development trends of autonomous driving technology, the future direction of radar modeling techniques is analyzed. Through the above content, this paper provides useful references and assistance for the development and application of radar modeling methods.

  • Article Type: Journal Article
    With the continuous advancement of autonomous driving and monitoring technologies, there is increasing attention on non-intrusive target monitoring and recognition. This paper proposes an ArcFace SE-attention model-agnostic meta-learning approach (AS-MAML) that integrates attention mechanisms into residual networks, via meta-learning, for pedestrian gait recognition using frequency-modulated continuous-wave (FMCW) millimeter-wave radar. We enhance the feature extraction capability of the base network using channel attention mechanisms and integrate the additive angular margin loss function (ArcFace loss) into the inner loop of MAML to constrain inner-loop optimization and improve radar discrimination. Then, this network is used to classify small-sample micro-Doppler images obtained from millimeter-wave radar as the data source for pose recognition. Experimental tests were conducted on pose estimation and image classification tasks. The results demonstrate significant detection and recognition performance, with an accuracy of 94.5%, accompanied by a 95% confidence interval. Additionally, on the open-source dataset DIAT-μRadHAR, which is specially processed to increase classification difficulty, the network achieves a classification accuracy of 85.9%.
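The additive angular margin (ArcFace loss) used in the inner loop can be sketched as a numpy forward pass: add a margin `m` to the angle between each embedding and its own class weight, then rescale by `s`. The values `s=30, m=0.5` are common ArcFace defaults, not necessarily the paper's settings, and the MAML loop and gradients are omitted.

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=30.0, m=0.5):
    """ArcFace logits: cos(theta + m) for the true class, cos(theta) otherwise,
    all scaled by s. embeddings: (batch, dim); weights: (dim, n_classes)."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = e @ w                                  # cosine similarity to each class
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    margin = np.zeros_like(cos)
    margin[np.arange(len(labels)), labels] = m   # penalize only the true class
    return s * np.cos(theta + margin)
```

Because the true-class logit is shrunk by the margin, the softmax cross-entropy on these logits forces embeddings of the same person's gait to cluster more tightly in angle, which is the discrimination boost the abstract refers to.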

  • Article Type: Journal Article
    Crowd movement analysis (CMA) is a key technology in the field of public safety. This technology provides a reference for identifying potential hazards in public places by analyzing crowd aggregation and dispersion behavior. Traditional video processing techniques are susceptible to factors such as environmental lighting and depth of field when analyzing crowd movements, and so cannot accurately locate the source of events. Radar, on the other hand, offers all-weather distance and angle measurements, effectively compensating for the shortcomings of video surveillance. This paper proposes a crowd motion analysis method based on radar particle flow (RPF). Firstly, radar particle flow is extracted from adjacent frames of millimeter-wave radar point sets by utilizing the optical flow method. Then, a new concept, the micro-source, is defined to describe whether any two RPF vectors originate from or reach the same location. Finally, in each local area, the internal micro-sources are counted to form a local diffusion potential, which characterizes the movement state of the crowd. The proposed algorithm is validated in real scenarios. By analyzing and processing radar data on aggregation, dispersion, and normal movements, the algorithm is able to effectively identify these movements with an accuracy rate of no less than 88%.
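A schematic stand-in for the local diffusion potential: score a neighborhood by whether its flow vectors point away from (dispersion, positive) or toward (aggregation, negative) the local center. The paper's micro-source counting is richer than this; the sign-of-dot-product rule and the radius are assumptions made for illustration.

```python
import numpy as np

def local_diffusion_potential(points, vels, center, radius=1.0):
    """Mean outward-flow sign over points within `radius` of `center`.
    +1 = purely dispersing, -1 = purely aggregating, 0 = balanced/empty."""
    rel = points - center
    mask = np.linalg.norm(rel, axis=1) <= radius
    if not mask.any():
        return 0.0
    # Positive dot product means the velocity points away from the center.
    outward = np.sign(np.einsum('ij,ij->i', vels[mask], rel[mask]))
    return float(outward.sum() / mask.sum())
```

Evaluating this on a grid of centers over the RPF field yields a potential map whose extrema mark where a crowd is gathering or scattering.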

  • Article Type: Journal Article
    Gait recognition, crucial in biometrics and behavioral analytics, has applications in human-computer interaction, identity verification, and health monitoring. Traditional sensors face limitations in complex or poorly lit settings. RF-based approaches, particularly millimeter-wave technology, are gaining traction for their privacy, insensitivity to light conditions, and high resolution in wireless sensing applications. In this paper, we propose a gait recognition system called Multidimensional Point Cloud Gait Recognition (PGGait). The system uses commercial millimeter-wave radar to extract high-quality point clouds through a specially designed preprocessing pipeline. This is followed by spatial clustering algorithms to separate users and perform target tracking. Simultaneously, we augment the original point cloud data with velocity and signal-to-noise ratio, forming multidimensional point cloud inputs. Finally, the system feeds the point cloud data into a neural network to extract spatial and temporal features for user identification. We implemented the PGGait system using a commercially available 77 GHz millimeter-wave radar and conducted comprehensive testing to validate its performance. Experimental results demonstrate that PGGait achieves up to 96.75% accuracy in recognizing single-user radial paths and exceeds 94.30% recognition accuracy in the two-person case. This research provides an efficient and feasible solution for user gait recognition with various applications.

  • Article Type: Journal Article
    Human movement recognition uses perceptual technology to capture movements presented by the limbs or body. This practice involves the use of wireless signals, processing, and classification to identify regular movements of the human body. It has a wide range of application prospects, including smart elderly care, remote health monitoring, and child supervision. Among traditional human movement recognition methods, the widely used ones are video-image-based recognition technology and Wi-Fi-based recognition technology. However, in dim settings and imperfect weather, it is not easy to maintain high performance and recognition rates using video images, and Wi-Fi-based recognition of human movement suffers from low recognition accuracy in complex environments. Much of the previous research on human movement recognition is based on LiDAR perception technology, but LiDAR scanning with a three-dimensional static point cloud can only present the point cloud characteristics of static objects; it struggles to reflect all the characteristics of moving objects. In addition, dynamic millimeter-wave radar point clouds address these shortcomings while accounting for privacy and security concerns: they can capture human movement characteristics in non-line-of-sight situations and better protect people's privacy. In this paper, we propose a human motion feature recognition system (PNHM) based on the spatiotemporal information of millimeter-wave radar 3D point clouds, design a neural network based on PointNet++ to effectively recognize human motion features, and study four human motions based on the threshold method. A dataset of four human movements, captured at two angles in two experimental environments, was constructed. This paper compares the system against four standard mainstream 3D point cloud human action recognition models. The experimental results show that recognition accuracy reaches 94% for upright walking, 84% for moving from squatting to standing, 87% for moving from standing to sitting, and 93% for falling.
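The "threshold method" baseline for the four motions can be sketched as rules over per-frame centroid heights of the radar point cloud. This is a toy illustration only: the height thresholds, descent-rate criterion, and frame rate are all invented for the example, and the paper's actual classifier is the PointNet++-based network.

```python
import numpy as np

def classify_motion(frames, fps=10.0, stand_z=1.4, sit_z=0.9,
                    fall_z=0.3, fall_rate=1.5):
    """Threshold rules over centroid height per frame.
    frames: sequence of (n_points, 3) arrays with z (height, m) in column 2."""
    z = np.array([f[:, 2].mean() for f in frames])   # centroid height track
    drop_rate = (z[0] - z[-1]) / (len(z) / fps)      # average descent, m/s
    if z[-1] < fall_z and drop_rate > fall_rate:
        return "fall"                                # fast drop to near floor
    if z[0] < sit_z and z[-1] >= stand_z:
        return "squat_to_stand"
    if z[0] >= stand_z and z[-1] < sit_z:
        return "stand_to_sit"
    return "walk"                                    # height roughly steady
```

A real pipeline would smooth the height track and gate on point count before applying such rules; the sketch only shows why a fall (large, fast height drop) is separable from posture transitions by thresholds alone.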

  • Article Type: Journal Article
    Multitarget tracking based on multisensor fusion perception is one of the key technologies to realize the intelligent driving of automobiles and has become a research hotspot in the field of intelligent driving. However, most current autonomous-vehicle target-tracking methods based on the fusion of millimeter-wave radar and lidar information struggle to guarantee accuracy and reliability in the measured data, and cannot effectively solve the multitarget-tracking problem in complex scenes. In view of this, based on the distributed multisensor multitarget tracking (DMMT) system, this paper proposes a multitarget-tracking method for autonomous vehicles that comprehensively considers key technologies such as target tracking, sensor registration, track association, and data fusion based on millimeter-wave radar and lidar. First, a single-sensor multitarget-tracking method suitable for millimeter-wave radar and lidar is proposed to form the respective target tracks; second, the Kalman filter temporal registration method and the residual bias estimation spatial registration method are used to realize the temporal and spatial registration of millimeter-wave radar and lidar data; third, the sequential m-best method based on the new target density is used to associate the tracks of the different sensors; and finally, the IF heterogeneous sensor fusion algorithm is used to optimally combine the track information provided by millimeter-wave radar and lidar into a stable, high-precision global track. In order to verify the proposed method, a multitarget-tracking simulation verification in a high-speed scene is carried out. The results show that the multitarget-tracking method proposed in this paper can realize the track tracking of multiple target vehicles in high-speed driving scenarios. Compared with a single-radar tracker, the position, velocity, size, and direction estimation errors of the track fusion tracker are reduced by 85.5%, 64.6%, 75.3%, and 9.5%, respectively, and the average value of the GOSPA indicators is reduced by 19.8%; more accurate target state information can be obtained than with a single-radar tracker.
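The Kalman-filter temporal registration step amounts to propagating one sensor's track state to the other sensor's timestamp before association. A minimal constant-velocity prediction sketch (the state layout, process-noise intensity `q`, and the 40 ms offset are illustrative assumptions, not the paper's values):

```python
import numpy as np

def cv_predict(x, P, dt, q=1.0):
    """One constant-velocity Kalman prediction step.
    State x = [px, py, vx, vy]; P is the 4x4 covariance; q scales process noise."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                       # position advances by velocity*dt
    Q = q * np.diag([dt**3 / 3, dt**3 / 3, dt, dt])
    return F @ x, F @ P @ F.T + Q

# A radar track stamped 40 ms before the lidar frame is propagated forward
# so both tracks refer to the same instant before track association.
x = np.array([10.0, 2.0, 5.0, 0.0])              # 5 m/s along x
x_aligned, P_aligned = cv_predict(x, np.eye(4), dt=0.04)
```

The grown covariance `P_aligned` carries the extra uncertainty introduced by the time offset into the subsequent association and fusion steps.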

  • Article Type: Journal Article
    Automatic driving technology refers to equipment such as vehicle-mounted sensors and computers that are used to navigate and control vehicles autonomously by acquiring external environmental information. To achieve automatic driving, vehicles must be able to perceive the surrounding environment and recognize and understand traffic signs, traffic signals, pedestrians, and other traffic participants, as well as accurately plan and control their path. Recognition of traffic signs and signals is an essential part of automatic driving technology, and gesture recognition is a crucial aspect of traffic-signal recognition. This article introduces mm-TPG, a traffic-police gesture recognition system based on a millimeter-wave point cloud. The system uses a 60 GHz frequency-modulated continuous-wave (FMCW) millimeter-wave radar as a sensor to achieve high-precision recognition of traffic-police gestures. Initially, a double-threshold filtering algorithm is used to denoise the millimeter-wave raw data, followed by multi-frame synthesis processing of the generated point cloud data and feature extraction using a ResNet18 network. Finally, gated recurrent units are used for classification to enable the recognition of different traffic-police gestures. Experimental results demonstrate that the mm-TPG system has high accuracy and robustness and can effectively recognize traffic-police gestures in complex environments such as varying lighting and weather conditions, providing strong support for traffic safety.
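The double-threshold denoising step is not spelled out in the abstract; a common hysteresis-style variant is sketched below, applied to a 1-D power profile: keep bins above a high threshold, plus weaker bins above a low threshold that connect to a kept bin. The thresholds and the 1-D setting are assumptions for illustration.

```python
import numpy as np

def double_threshold_denoise(power_db, lo=10.0, hi=20.0):
    """Hysteresis filter: strong bins (>= hi) seed the kept set, and weak
    bins (>= lo) survive only if adjacent to a kept bin. Returns a bool mask."""
    strong = power_db >= hi
    weak = power_db >= lo
    keep = strong.copy()
    changed = True
    while changed:                      # grow until no weak bin can attach
        grown = keep | (weak & (np.roll(keep, 1) | np.roll(keep, -1)))
        changed = not np.array_equal(grown, keep)
        keep = grown
    return keep
```

Isolated weak returns (likely noise) are dropped while weak returns flanking a strong reflection (likely the same target) are retained; a 2-D version over the point cloud would use spatial neighborhoods instead of `np.roll`.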

  • Article Type: Journal Article
    This paper proposes a deep learning-based mmWave radar and RGB camera sensor early fusion method for object detection and tracking, and its embedded system realization for ADAS applications. The proposed system can be used not only in ADAS systems but can also be applied to smart Road Side Units (RSUs) in transportation systems to monitor real-time traffic flow and warn road users of probable dangerous situations. As the signals of mmWave radar are less affected by bad weather and lighting conditions, such as cloudy, sunny, snowy, and rainy days and night-time scenes, it can work efficiently in both normal and adverse conditions. Compared to using an RGB camera alone for object detection and tracking, the early fusion of mmWave radar and RGB camera technology can make up for the poor performance of the RGB camera when it fails due to bad weather and/or lighting conditions. The proposed method combines the features of radar and RGB cameras and directly outputs the results from an end-to-end trained deep neural network. Additionally, the complexity of the overall system is reduced such that the proposed method can be implemented on PCs as well as on embedded systems like the NVIDIA Jetson Xavier at 17.39 fps.
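One common way to realize such early fusion is to project radar detections into the image plane and stack range and radial velocity as extra input channels for the network. The sketch below assumes a pinhole intrinsic matrix `K` and a radar frame aligned with the camera (z forward); the paper's actual fusion layout and network are not reproduced here.

```python
import numpy as np

def early_fuse(image, radar_points, K):
    """Rasterize radar (x, y, z, radial_velocity) points onto the image grid
    via pinhole projection K, and stack as two extra channels (range, velocity)."""
    h, w, _ = image.shape
    extra = np.zeros((h, w, 2), dtype=np.float32)
    for x, y, z, vel in radar_points:
        u, v, s = K @ np.array([x, y, z])
        if s <= 0:
            continue                              # behind the camera
        col, row = int(u / s), int(v / s)
        if 0 <= col < w and 0 <= row < h:
            extra[row, col, 0] = np.linalg.norm([x, y, z])  # range channel
            extra[row, col, 1] = vel                        # velocity channel
    return np.concatenate([image.astype(np.float32), extra], axis=2)
```

The resulting H x W x 5 tensor feeds the shared backbone, so radar evidence survives even when the RGB channels are degraded by weather or lighting.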