Foundation items: Aeronautical Science Foundation of China (2012ZC53043); Specialized Research Fund for the Doctoral Program of Higher Education of China (20096102110027); Astronautic Science and Technology Innovation Foundation (CASC201104)
Received: 2015-09-30

Tracking approach based on online multiple instance learning with weight distribution and multiple feature representation
YANG Honghong, QU Shiru, MI Xiuxiu. Tracking approach based on online multiple instance learning with weight distribution and multiple feature representation[J]. Journal of Beijing University of Aeronautics and Astronautics, 2016, 42(10): 2146-2154.
Authors: YANG Honghong, QU Shiru, MI Xiuxiu
Institution: School of Automation, Northwestern Polytechnical University, Xi'an 710072, China
Abstract: Most existing tracking algorithms are prone to target drift in complex environments involving occlusion, pose variation and illumination change. This paper proposes an online visual target tracking algorithm based on the multiple instance learning (MIL) tracking framework. The original MIL tracker cannot describe target appearance changes accurately because it uses only a single Haar-like feature, and it assigns the same weight to every positive and negative instance when learning from the sample bags, ignoring that different instances contribute differently to their bags. Therefore, this paper represents the target appearance with multiple features jointly and constructs the classifier accordingly: the complementary characteristics of the multiple features are integrated into the online MIL learning process to build a more accurate target appearance model, overcoming the MIL tracker's insufficient description of appearance changes. At the same time, instance weights are assigned according to the importance of each positive and negative instance to its bag, which improves tracking precision. Experimental results show that the proposed algorithm handles occlusion, drastic illumination change, scale change and in-plane rotation with high accuracy and strong robustness. Compared with incremental visual tracking (IVT), MIL and online AdaBoost (OAB) trackers on challenging video sequences, the average center position error of the proposed algorithm over 5 groups of test videos is only 10.14 pixels, far smaller than those of IVT, MIL and OAB, which are 17.99, 20.29 and 33.64 pixels, respectively.
Keywords: multiple instance learning; joint multiple feature representation; weight distribution; target tracking; classifier
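To make the instance-weighting idea concrete, the sketch below shows in Python how per-instance weights can enter a MIL bag probability. The noisy-OR combination is the standard choice in MIL tracking; the weighted variant shown here is only one simple illustrative possibility, not the exact formulation used in the paper, and all names in the snippet are hypothetical.

```python
import numpy as np

def instance_prob(scores):
    """Turn weak-classifier responses H(x_j) into instance probabilities via a sigmoid."""
    return 1.0 / (1.0 + np.exp(-np.asarray(scores, dtype=float)))

def noisy_or_bag_prob(scores):
    """Standard MIL bag probability: p(y=1|X) = 1 - prod_j (1 - p(y=1|x_j))."""
    return float(1.0 - np.prod(1.0 - instance_prob(scores)))

def weighted_bag_prob(scores, weights):
    """Bag probability where each instance contributes according to its weight.

    Instances judged more important to the bag (e.g. samples drawn closer to
    the current target location) receive larger weights; weights are normalized
    over the bag before combining. This is an illustrative weighting scheme,
    not necessarily the one used in the paper.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.sum(w * instance_prob(scores)))

# Example: a positive bag with three instances; the one nearest the target
# gets the largest weight and therefore dominates the bag probability.
scores = [2.0, 0.5, -1.0]      # weak-classifier responses for each instance
weights = [1.0, 0.6, 0.2]      # importance of each instance to the bag
print(noisy_or_bag_prob(scores))           # ~0.967 (all instances count equally)
print(weighted_bag_prob(scores, weights))  # ~0.727 (weighted combination)
```

In the plain noisy-OR model every instance counts equally; assigning weights lets samples closer to the estimated target position influence the bag label more, which is the intuition behind the weight-distribution step described in the abstract.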