Funding: National Natural Science Foundation of China (61773341)
Received: 2020-08-31
Revised: 2020-09-04

Deep deterministic policy gradient algorithm for UAV control
HUANG Xu,LIU Jiarun,JIA Chenhui,WANG Zhaolei,ZHANG Jun.Deep deterministic policy gradient algorithm for UAV control[J].Acta Aeronautica et Astronautica Sinica,2021,42(11):524688-524688.
Authors:HUANG Xu  LIU Jiarun  JIA Chenhui  WANG Zhaolei  ZHANG Jun
Institution:1. Beijing Aerospace Automatic Control Institute, Beijing 100854, China;2. National Key Laboratory of Science and Technology on Aerospace Intelligent Control, Beijing 100854, China
Abstract:The deep deterministic policy gradient algorithm is explored to train an agent to learn the flight control policy of a small UAV. Multi-frame velocity, position and attitude angle information is taken as the agent's observation state, rudder deflection angle and engine thrust commands are its output actions, and the nonlinear model of the vehicle together with the flight environment serves as its learning environment. During interaction with the environment, the agent receives not only a dense penalty containing error information but also sparse rewards for achieving certain goals; this design effectively improves the diversity of flight data samples and enhances the learning efficiency of the agent. The agent finally realizes end-to-end flight control from position, velocity and attitude angle information to the control variables. Flight control simulations are then carried out under waypoint changes, model parameter perturbations, injected disturbances and fault conditions. The results show that the agent not only effectively completes the training task but also handles a variety of flight tasks not encountered during training, demonstrating excellent generalization ability and robustness; the method thus has certain research and engineering reference value.
Keywords:deep deterministic policy gradient  small UAV  flight control  end-to-end  sparse reward
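The observation and reward design described in the abstract (a multi-frame state of position/velocity/attitude, plus a dense error penalty combined with a sparse goal bonus) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the stacking depth, state dimension, penalty weight and bonus magnitude are all assumed values chosen for the example.

```python
import numpy as np
from collections import deque


class FrameStack:
    """Concatenate the last k data frames of (position, velocity,
    attitude angle) into a single observation vector, mirroring the
    multi-frame observation state described in the abstract."""

    def __init__(self, k, frame_dim):
        self.k = k
        self.frame_dim = frame_dim
        self.frames = deque(maxlen=k)

    def reset(self, frame):
        # At episode start, fill the stack by repeating the first frame.
        self.frames.clear()
        for _ in range(self.k):
            self.frames.append(np.asarray(frame, dtype=np.float64))
        return self.observation()

    def step(self, frame):
        # Push the newest frame; the oldest one is dropped automatically.
        self.frames.append(np.asarray(frame, dtype=np.float64))
        return self.observation()

    def observation(self):
        return np.concatenate(self.frames)


def reward(state_error, goal_reached, dense_weight=0.01, sparse_bonus=10.0):
    """Dense penalty on the tracking error plus a sparse bonus when a
    goal is achieved. The weights here are illustrative assumptions,
    not values taken from the paper."""
    dense_penalty = -dense_weight * float(np.linalg.norm(state_error))
    sparse_reward = sparse_bonus if goal_reached else 0.0
    return dense_penalty + sparse_reward
```

Under this design the dense term gives a learning signal at every step, while the sparse bonus rewards reaching discrete milestones (e.g. a waypoint), which is what the abstract credits for the improved sample diversity and learning efficiency.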
