Autonomous navigation based on PPO for mobile platform
Cite this article: XU Guoyan, XIONG Yiwei, ZHOU Bin, CHEN Guanhong. Autonomous navigation based on PPO for mobile platform[J]. Journal of Beijing University of Aeronautics and Astronautics, 2022, 48(11): 2138-2145.
Authors: XU Guoyan (徐国艳), XIONG Yiwei (熊绎维), ZHOU Bin (周彬), CHEN Guanhong (陈冠宏)
Affiliation: Key Laboratory of Autonomous Transportation Technology for Special Vehicles, Ministry of Industry and Information Technology, School of Transportation Science and Engineering, Beihang University, Beijing 100191, China
Funding: National Natural Science Foundation of China (51775016)
Abstract: To address the problems of discontinuous action output and difficult training convergence that reinforcement learning algorithms face in autonomous navigation tasks, an autonomous navigation method for mobile platforms based on the proximal policy optimization (PPO) algorithm is proposed. Building on the PPO algorithm, an action policy function based on the normal distribution is designed, which makes the platform's output actions, the vehicle linear velocity and the yaw rate, continuous. An improved artificial potential field algorithm is designed as an ego-position evaluation, which effectively speeds up the convergence of the reinforcement learning model in autonomous navigation scenarios. The network architecture and reward function of the model are designed for the navigation scenario, and the model is trained in the Gazebo simulation environment; the results show that the convergence speed of the model with the ego-position evaluation is markedly higher. The converged model is then transplanted to a real environment, which verifies the effectiveness of the proposed method.

Keywords: proximal policy optimization algorithm; mobile platform; autonomous navigation; reinforcement learning; artificial potential field
Received: 2021-03-02
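
The record itself contains no code. As a rough illustration of the normal-distribution action policy described in the abstract, the sketch below shows a PPO actor head in PyTorch that parameterizes a Gaussian over the two continuous actions, linear velocity and yaw rate, so that sampled actions are continuous rather than drawn from a discrete set. The class name, state dimension, hidden sizes, and the state-independent log standard deviation are assumptions for demonstration, not details taken from the paper.

```python
# Illustrative sketch only, not the authors' code: sizes and names are assumed.
import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianPolicy(nn.Module):
    """PPO actor that outputs a normal distribution over
    [linear velocity, yaw rate], giving continuous actions."""
    def __init__(self, state_dim: int = 24, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mu_head = nn.Linear(hidden, 2)          # action means
        self.log_std = nn.Parameter(torch.zeros(2))  # learned, state-independent

    def forward(self, state: torch.Tensor) -> Normal:
        mu = self.mu_head(self.backbone(state))
        return Normal(mu, self.log_std.exp())

policy = GaussianPolicy()
state = torch.randn(1, 24)                 # placeholder observation
dist = policy(state)
action = dist.sample()                     # continuous [v, omega]
log_prob = dist.log_prob(action).sum(-1)   # enters the PPO probability ratio
```

The log-probability is what PPO's clipped surrogate objective compares between the current and the old policy; with a Gaussian head this quantity is differentiable in the network parameters, which is what makes the continuous action space trainable.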

Autonomous navigation based on PPO for mobile platform
Institution:Key Laboratory of Autonomous Transportation Technology for Special Vehicles, Ministry of Industry and Information Technology, School of Transportation Science and Engineering, Beihang University, Beijing 100191, China
Abstract: This paper presents an autonomous navigation method for mobile platforms based on the proximal policy optimization (PPO) algorithm. In this method, GNSS and lidar are used to sense the environment. To define the state of the reinforcement learning model, an ego-position evaluation based on an improved artificial potential field algorithm is introduced. On the basis of the PPO algorithm, an action policy function based on the Gaussian distribution is then designed, which solves the continuity problem of the vehicle linear velocity and yaw rate outputs. Furthermore, the network framework and reward function of the model are designed for navigation scenarios. To train the navigation model, a virtual environment is built in Gazebo. The training results show that the ego-position evaluation method clearly improves the speed of model convergence. Finally, the navigation model is transplanted to a real environment, which verifies the effectiveness of the proposed method.
Keywords: proximal policy optimization algorithm; mobile platform; autonomous navigation; reinforcement learning; artificial potential field
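
For context on the ego-position evaluation, the sketch below implements the classic artificial potential field on which improved variants are typically built: an attractive term that grows with distance to the goal plus repulsive terms that activate near obstacles, yielding a scalar score of the platform's position. The gains k_att and k_rep, the influence radius d0, and the function name are illustrative assumptions; the paper's specific improvement is not reproduced here.

```python
# Classic artificial potential field score; the paper's improved
# formulation is not reproduced. All gains are assumed values.
import math

def apf_score(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Return the potential at `pos` (lower is better): an attractive
    term toward `goal` plus repulsive terms for obstacles within d0."""
    u = 0.5 * k_att * math.dist(pos, goal) ** 2    # attractive potential
    for obs in obstacles:
        d = math.dist(pos, obs)
        if 0.0 < d < d0:                           # inside influence radius
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

# Example: a position near an obstacle and far from the goal scores much higher (worse).
print(apf_score((0.0, 0.0), (5.0, 0.0), [(1.0, 0.0)]))  # ~25.0
print(apf_score((4.0, 1.0), (5.0, 0.0), [(1.0, 0.0)]))  # ~1.0
```

Used as a state feature or shaping term, such a score tells the agent at every step whether it is moving toward the goal and away from obstacles, which is the mechanism the abstract credits for the faster convergence.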