Pose Measurement of Large Non-Cooperative Target using Line Structured Light Vision
Cite this article: GAO Xue-hai, LIANG Bin, PAN Le, XU Wen-fu. Pose Measurement of Large Non-Cooperative Target using Line Structured Light Vision[J]. Journal of Astronautics, 2012, 33(6): 728-735. DOI: 10.3873/j.issn.1000-1328.2012.06.007
Authors: GAO Xue-hai  LIANG Bin  PAN Le  XU Wen-fu
Affiliation: 1. Institute of Space Intelligent System, Harbin Institute of Technology, Harbin 150001, China; 2. Aerospace Dongfanghong Development Ltd. Shenzhen, Shenzhen 518057, China; 3. Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen 518055, China
Abstract: In the final phase of rendezvous and approach between a space robot and a large non-cooperative target, the monocular camera cannot acquire a complete feature image of the target and therefore cannot determine the relative position and attitude. To address this problem, a relative pose measurement method based on line structured light and monocular vision is proposed, taking a partial rectangular feature on the large non-cooperative target as the measurement object. First, the relative pose measurement model is established and the relationships among the four measurement coordinate frames are given. Second, the coordinates of the four feature points in the camera frame are obtained from the camera's projection constraints on the incomplete rectangle and the line structured light. Third, the transformation matrix between the camera frame and the target frame is computed from the four feature points. Finally, the relative pose of the rectangular feature is obtained by decomposing the transformation matrix. The method is verified by simulations in which the input errors and calibration errors that affect the measurement accuracy are varied, and the results show that the measurement method is effective.

Keywords: Monocular camera  Line structured light  Non-cooperative large target  Pose measurement  Space robot
Received: 2011-09-26

Pose Measurement of Large Non-Cooperative Target using Line Structured Light Vision
GAO Xue-hai, LIANG Bin, PAN Le, XU Wen-fu. Pose Measurement of Large Non-Cooperative Target using Line Structured Light Vision[J]. Journal of Astronautics, 2012, 33(6): 728-735. DOI: 10.3873/j.issn.1000-1328.2012.06.007
Authors: GAO Xue-hai  LIANG Bin  PAN Le  XU Wen-fu
Affiliation:1. Institute of Space Intelligent System, Harbin Institute of Technology, Harbin 150001, China;2. Aerospace Dongfanghong Development Ltd. Shenzhen, Shenzhen 518057, China;3. Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen 518055, China
Abstract: In the final approach of rendezvous between a space robot and a large non-cooperative target, the robot's monocular camera cannot observe a complete feature image of the target and therefore cannot determine the relative position and attitude. To overcome this problem, line structured light is introduced to aid the measurement. A partial rectangular framework of the target is chosen as the measurement object. First, a measurement model is built and four measurement coordinate systems are presented. Second, according to the camera projection constraints on the partial rectangular framework and the line structured light, the four feature points are calculated in the camera coordinate system. Third, using the four feature points, the transformation matrix between the camera coordinate system and the target coordinate system is computed. Finally, the relative position and attitude of the partial rectangular framework are derived from the transformation matrix. Numerical simulations under different input errors and calibration errors verify the method, and the results show that it is effective.
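The last two steps described in the abstract (computing the camera-to-target transformation from the four feature points, then decomposing it into relative position and attitude) can be sketched in code. This record does not spell out the numerical procedure, so the sketch below assumes a standard SVD-based rigid registration (Kabsch) between the rectangle corners in the target frame and their recovered camera-frame coordinates, followed by a Z-Y-X Euler-angle decomposition; the rectangle dimensions, the simulated pose, and all function names are hypothetical, and the snippet is written in Python/NumPy.

# Illustrative sketch only (not necessarily the paper's exact algorithm): recover the
# camera-to-target transformation from four corresponding corner points by SVD-based
# rigid registration (Kabsch), then decompose it into relative position and attitude.
# All numeric values are hypothetical.
import numpy as np

def pose_from_points(P_cam, P_tgt):
    # Find R, t such that P_cam ≈ R @ P_tgt + t (least-squares rigid fit).
    c_cam, c_tgt = P_cam.mean(axis=0), P_tgt.mean(axis=0)
    H = (P_tgt - c_tgt).T @ (P_cam - c_cam)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation, det(R) = +1
    t = c_cam - R @ c_tgt                            # relative position
    return R, t

def attitude_zyx(R):
    # Decompose a rotation matrix into yaw-pitch-roll (Z-Y-X) angles, in radians.
    return (np.arctan2(R[1, 0], R[0, 0]),            # yaw
            np.arcsin(-R[2, 0]),                     # pitch
            np.arctan2(R[2, 1], R[2, 2]))            # roll

# Assumed rectangle size (metres) in the target frame, corners in its z = 0 plane.
a, b = 1.2, 0.8
P_tgt = np.array([[0, 0, 0], [a, 0, 0], [a, b, 0], [0, b, 0]], dtype=float)

# Hypothetical camera-frame corner coordinates, e.g. as produced by the projection
# constraints on the partial rectangle and the light stripe (step two above).
th = np.deg2rad(10.0)                                # 10-degree yaw, for illustration
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.1, -0.05, 5.0])
P_cam = (R_true @ P_tgt.T).T + t_true

R_est, t_est = pose_from_points(P_cam, P_tgt)
print("relative position:", t_est)                   # ≈ [0.1, -0.05, 5.0]
print("yaw/pitch/roll (rad):", attitude_zyx(R_est))  # yaw ≈ 0.1745

With noise-free corner coordinates the recovered translation and yaw match the simulated values exactly; in practice the accuracy depends on the input and calibration errors examined in the paper's simulations.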
Keywords:Monocular camera  Line structured light  Non-cooperative large target  Pose determination  Space robot  
This article is indexed by CNKI, Wanfang Data, and other databases.