Learning Algorithm of Terminal Guidance Control Quantity Based on Trust Region Policy Optimization

Cite this article: LIU Shi-rong, WANG Tian-yi, LIU Yang. Learning Algorithm of Terminal Guidance Control Quantity Based on Trust Region Policy Optimization[J]. Navigation Positioning & Timing, 2022(6): 77-84.
Authors: LIU Shi-rong, WANG Tian-yi, LIU Yang
Institution: Faculty of Computing, Harbin Institute of Technology, Harbin 150001, China
Funding: National Natural Science Foundation of China (62071154)

Abstract: In recent years, deep reinforcement learning has made great progress on sequential decision problems. Model-free reinforcement learning algorithms learn policies by continually interacting with the environment and do not require a model of the environment in advance, which makes them applicable to many problems. To address the training instability observed when learning terminal guidance policies with reinforcement learning, we use the trust region policy optimization (TRPO) algorithm to learn the terminal guidance control quantity directly, and we design a novel reward function that improves training stability and algorithm performance. Experiments in a two-dimensional environment show that the algorithm trains stably and achieves good hit performance.
Keywords: Terminal guidance control quantity; Learning algorithm; Deep reinforcement learning; Terminal guidance; Trust region policy optimization
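
TRPO constrains each policy update so the new policy stays close to the old one: it maximizes the surrogate objective E[π_θ(a|s)/π_θ_old(a|s) · A(s,a)] subject to an average KL-divergence bound E[D_KL(π_θ_old ‖ π_θ)] ≤ δ. This trust-region constraint is the usual source of the training stability the abstract emphasizes. The sketch below is illustrative only and is not the authors' implementation: it assumes the Gymnasium API and the TRPO implementation from sb3-contrib, and the environment dynamics, the observation/action definitions, and the shaped reward (a dense closing-rate term plus a hit bonus) are hypothetical, since the abstract does not specify the paper's actual state design or its novel reward function.

import numpy as np
import gymnasium as gym
from gymnasium import spaces


class TerminalGuidance2D(gym.Env):
    """Toy planar pursuit: the agent commands a lateral acceleration."""

    def __init__(self, dt=0.1, v_m=300.0, a_max=100.0, hit_radius=30.0):
        self.dt, self.v_m, self.a_max, self.hit_radius = dt, v_m, a_max, hit_radius
        # Observation: relative target position (x, y) and missile heading (rad).
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32)
        # Action: normalized lateral acceleration command in [-1, 1].
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

    def _obs(self):
        rel = self.target - self.pos
        return np.array([rel[0], rel[1], self.heading], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = np.zeros(2)                            # missile position (m)
        self.heading = self.np_random.uniform(-0.3, 0.3)  # initial heading error (rad)
        self.target = np.array([5000.0, 0.0])             # stationary target (m)
        self.prev_range = float(np.linalg.norm(self.target - self.pos))
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        # Constant-speed point-mass model: lateral acceleration turns the velocity.
        a_lat = float(np.clip(action[0], -1.0, 1.0)) * self.a_max
        self.heading += (a_lat / self.v_m) * self.dt
        self.pos = self.pos + self.v_m * self.dt * np.array(
            [np.cos(self.heading), np.sin(self.heading)])
        rng = float(np.linalg.norm(self.target - self.pos))

        # Illustrative shaped reward (an assumption, not the paper's design):
        # dense closing-rate term, normalized to roughly [-1, 1], plus a hit bonus.
        reward = (self.prev_range - rng) / (self.v_m * self.dt)
        hit = rng < self.hit_radius
        if hit:
            reward += 100.0
        # Terminate on hit, or once the range opens (closest approach passed).
        terminated = hit or rng > self.prev_range
        self.prev_range = rng
        self.steps += 1
        truncated = self.steps >= 400
        return self._obs(), reward, terminated, truncated, {}


if __name__ == "__main__":
    from sb3_contrib import TRPO  # pip install sb3-contrib

    env = TerminalGuidance2D()
    model = TRPO("MlpPolicy", env, target_kl=0.01, verbose=1)
    model.learn(total_timesteps=200_000)

A dense closing-rate term like the one above is one common way to stabilize training relative to a sparse hit-only reward, since it gives the policy gradient a learning signal at every step; whether this resembles the paper's actual reward design cannot be determined from the abstract alone.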