Keywords: IEEE 802.11ah; deep reinforcement learning (DRL); restricted access window (RAW)

Source: DOI: 10.3390/s24103031, PDF (PubMed)

Abstract:
The IEEE 802.11ah standard was introduced to address the growing scale of Internet of Things (IoT) applications. To reduce contention and enhance energy efficiency, the restricted access window (RAW) mechanism is introduced at the medium access control (MAC) layer to manage the large number of stations accessing the network. However, to achieve optimized network performance, the RAW parameters must be determined appropriately, including the number of RAW groups, the number of slots in each RAW, and the duration of each slot. In this paper, we optimize the configuration of RAW parameters in the uplink of an IEEE 802.11ah-based IoT network. To improve network throughput, we analyze the network and formulate a RAW parameter optimization problem. To cope effectively with complex and dynamic network conditions, we propose a deep reinforcement learning (DRL) approach that determines RAW parameters to optimize network throughput. To enhance learning efficiency and stability, we employ the proximal policy optimization (PPO) algorithm. We construct network environments with periodic and random traffic in the NS-3 simulator to validate the performance of the proposed PPO-based RAW parameter optimization algorithm. The simulation results show that the PPO-based DRL algorithm obtains optimized RAW parameters under different network conditions and significantly improves network throughput.
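To make the formulation concrete, below is a minimal sketch of how RAW parameter selection could be framed as a reinforcement learning problem and trained with PPO. This is not the authors' implementation: the environment class, the candidate parameter values, and the toy throughput-shaped reward are all hypothetical placeholders, whereas the paper evaluates each configuration with NS-3 simulations and uses the measured throughput as the reward.

```python
# A hedged sketch, assuming Gymnasium and Stable-Baselines3 are available.
# The RawConfigEnv class, its candidate RAW parameter values, and its reward
# function are illustrative stand-ins for an NS-3-in-the-loop environment.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class RawConfigEnv(gym.Env):
    """Toy environment: pick (RAW groups, slots per RAW, slot duration) each step."""

    # Hypothetical candidate values for the three RAW parameters.
    GROUPS = [1, 2, 4, 8]
    SLOTS = [1, 2, 4, 8, 16]
    SLOT_US = [500, 1000, 2000, 4000]  # slot duration in microseconds

    def __init__(self, n_stations=128):
        super().__init__()
        self.n_stations = n_stations
        # One discrete choice per RAW parameter.
        self.action_space = spaces.MultiDiscrete(
            [len(self.GROUPS), len(self.SLOTS), len(self.SLOT_US)]
        )
        # Observation: normalized station count and current offered load.
        self.observation_space = spaces.Box(0.0, 1.0, shape=(2,), dtype=np.float32)

    def _obs(self):
        return np.array([self.n_stations / 256.0, self.load], dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.load = float(self.np_random.uniform(0.2, 1.0))  # random traffic load
        return self._obs(), {}

    def step(self, action):
        groups = self.GROUPS[action[0]]
        slots = self.SLOTS[action[1]]
        slot_us = self.SLOT_US[action[2]]
        # Placeholder reward: peaks when stations per slot match the offered load.
        # In the paper, this value would come from an NS-3 throughput measurement.
        sta_per_slot = self.n_stations / (groups * slots)
        reward = self.load * np.exp(-abs(sta_per_slot - 4.0) / 4.0) * (slot_us / 4000.0)
        self.load = float(self.np_random.uniform(0.2, 1.0))  # traffic changes over time
        return self._obs(), float(reward), True, False, {}   # one-step episodes


if __name__ == "__main__":
    env = RawConfigEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=20_000)
    obs, _ = env.reset()
    action, _ = model.predict(obs, deterministic=True)
    print("chosen RAW parameter indices (groups, slots, slot duration):", action)
```

In a full reproduction, the `step` reward would be replaced by a throughput value returned from an NS-3 run configured with the chosen RAW parameters (for example, via an NS-3/Python bridge), and the observation would carry whatever network state the agent is allowed to see.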