A Semantic Information Aided Visual/Inertial Fusion Localization Method of UAV
LYU Pin, HE Rong, LAI Ji-zhou, YANG Zi-han, YUAN Cheng
(College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China)
Abstract:
Vision sensors play an important role in the indoor positioning of unmanned aerial vehicles (UAVs). Traditional feature-point-based visual odometry algorithms describe and match features through low-level brightness relationships; their anti-interference ability is insufficient, and matching errors or even failures can occur, so the accuracy and robustness of the navigation system need to be improved. Since indoor environments contain rich semantic information, a semantic information aided visual/inertial fusion localization method for UAVs is proposed. Firstly, the indoor semantic information is modeled as factors and fused with the traditional visual odometry method. Then, based on the inertial pre-integration method, inertial constraints are added to the factor graph optimization to further improve the positioning accuracy and robustness of the UAV in complex dynamic environments. Finally, the positioning accuracy of the algorithm is analyzed through indoor UAV flight experiments. The experimental results show that the proposed method achieves higher accuracy and robustness than the traditional visual odometry algorithm.
Key words: Visual/inertial localization; Semantic information; Factor graph optimization; UAV
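
The abstract describes a pipeline in which semantic constraints, visual-odometry constraints, and preintegrated inertial constraints are jointly optimized in a factor graph. The sketch below illustrates that style of fusion; it is a minimal example assuming the GTSAM library, not the authors' implementation. The semantic observation is simplified to a relative-pose factor against a mapped object (a hypothetical "door" landmark), and all keyframe indices, measurements, and noise parameters are placeholder assumptions.

```python
# Minimal factor-graph fusion sketch (assumes GTSAM; all values are placeholders).
import numpy as np
import gtsam
from gtsam.symbol_shorthand import B, L, V, X   # bias, landmark, velocity, pose keys

graph = gtsam.NonlinearFactorGraph()

# Priors on the first keyframe state (pose, velocity, IMU bias).
graph.add(gtsam.PriorFactorPose3(
    X(0), gtsam.Pose3(), gtsam.noiseModel.Isotropic.Sigma(6, 0.01)))
graph.add(gtsam.PriorFactorVector(
    V(0), np.zeros(3), gtsam.noiseModel.Isotropic.Sigma(3, 0.1)))
graph.add(gtsam.PriorFactorConstantBias(
    B(0), gtsam.imuBias.ConstantBias(), gtsam.noiseModel.Isotropic.Sigma(6, 1e-3)))

# Visual-odometry factor: feature-based relative pose between keyframes 0 and 1.
vo_delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.5, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(
    X(0), X(1), vo_delta, gtsam.noiseModel.Isotropic.Sigma(6, 0.1)))

# Semantic factor: a recognised indoor object with a known map pose (hypothetical
# "door"), observed from keyframe 1 and modelled here as a relative-pose constraint.
door_map_pose = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(2.0, 1.0, 0.0))
graph.add(gtsam.PriorFactorPose3(
    L(0), door_map_pose, gtsam.noiseModel.Isotropic.Sigma(6, 0.02)))
obs_delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.5, 1.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(
    X(1), L(0), obs_delta, gtsam.noiseModel.Isotropic.Sigma(6, 0.2)))

# Inertial factor: preintegrate 1 s of 100 Hz IMU samples between the keyframes.
imu_params = gtsam.PreintegrationParams.MakeSharedU(9.81)  # Z-up navigation frame
imu_params.setAccelerometerCovariance(np.eye(3) * 1e-3)
imu_params.setGyroscopeCovariance(np.eye(3) * 1e-4)
imu_params.setIntegrationCovariance(np.eye(3) * 1e-5)
pim = gtsam.PreintegratedImuMeasurements(imu_params, gtsam.imuBias.ConstantBias())
for _ in range(100):
    pim.integrateMeasurement(np.array([1.0, 0.0, 9.81]),  # specific force (m/s^2)
                             np.zeros(3), 0.01)           # angular rate, dt
graph.add(gtsam.ImuFactor(X(0), V(0), X(1), V(1), B(0), pim))

# Initial guesses for all variables, then joint nonlinear optimisation.
values = gtsam.Values()
values.insert(X(0), gtsam.Pose3())
values.insert(X(1), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.4, 0.0, 0.0)))
values.insert(L(0), door_map_pose)
values.insert(V(0), np.zeros(3))
values.insert(V(1), np.array([1.0, 0.0, 0.0]))
values.insert(B(0), gtsam.imuBias.ConstantBias())
result = gtsam.LevenbergMarquardtOptimizer(graph, values).optimize()
print(result.atPose3(X(1)))   # fused keyframe-1 pose
```

Jointly optimizing all three factor types is what allows a semantic observation to constrain drift in the visual/inertial estimate, which is the behaviour the paper's indoor flight experiments evaluate.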
