Keywords: buffer schedule; input saturation; intrinsic motivation; model-based reinforcement learning; robotic manipulator; uncertain environment

Source: DOI: 10.3389/fnbot.2024.1376215 PDF (PubMed)

Abstract:
In uncertain environments with robot input saturation, both model-based reinforcement learning (MBRL) and traditional controllers struggle to perform control tasks optimally. In this study, an algorithmic framework called Curiosity Model Policy Optimization (CMPO) is proposed by combining curiosity with a model-based approach, where tracking errors are reduced by training agents on the control gains of traditional model-free controllers. First, a metric for judging positive and negative curiosity is proposed. Constrained optimization is employed to update the curiosity ratio, which improves the efficiency of agent training. Next, the novelty distance buffer ratio is defined to reduce the bias between the environment and the model. Finally, CMPO is compared in simulation against traditional controllers and baseline MBRL algorithms in a robotic environment designed with non-linear rewards. The experimental results illustrate that the algorithm achieves superior tracking performance and generalization capability.
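The abstract does not define the positive/negative curiosity metric itself, but a common formulation of intrinsic curiosity scores states by the dynamics model's prediction error. The minimal sketch below illustrates that idea under the assumption that curiosity is labeled positive when the error exceeds a novelty threshold and negative otherwise; the function name, the thresholding rule, and the Euclidean error measure are illustrative assumptions, not the paper's actual definition.

```python
import numpy as np

def curiosity_bonus(pred_next_state: np.ndarray,
                    actual_next_state: np.ndarray,
                    threshold: float) -> float:
    """Signed intrinsic-curiosity score (illustrative sketch, not CMPO's
    exact metric). The dynamics model's prediction error serves as the
    curiosity magnitude; errors above `threshold` are treated as positive
    curiosity (novel transition, worth exploring), errors below it as
    negative curiosity (transition already well-modeled)."""
    error = float(np.linalg.norm(pred_next_state - actual_next_state))
    sign = 1.0 if error > threshold else -1.0
    return sign * error
```

A transition the model predicts poorly yields a positive score, nudging exploration toward it, while a well-predicted transition yields a negative score, which a constrained update of the curiosity ratio could then down-weight.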