DOI: 10.13340/j.jsmu.2016.04.016
Article ID: 1672-9498(2016)04-0087-05
CLC number: TP242
Document code: A
Monocular visual odometry with RANSAC-based outlier rejection
SUN Zuolei1, HUANG Jiaming1, ZHANG Bo2
(1. Information Engineering College, Shanghai Maritime University, Shanghai 201306, China;
2. Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, China)
Abstract:
To enhance the performance of monocular visual odometry, visual feature extraction and mismatched-feature rejection are studied. The SURF descriptor is employed to extract feature points from monocular images and to match features between adjacent frames of the image sequence. The fundamental matrix and the essential matrix are then derived with the normalized eight-point method. The 3D coordinates of the matched points are obtained by triangulation, and the camera rotation and translation between two frames are estimated from the 2D-to-2D model, yielding a monocular visual odometry system. To further improve the algorithm, the RANSAC algorithm is adopted to reject the feature mismatches remaining after the initial matching, and the translation scale of the camera motion is recovered from ground data. The experimental results demonstrate that the RANSAC algorithm can effectively eliminate feature mismatches and reduce the cumulative error of the monocular visual odometry.
Key words:
robot localization; visual odometry; feature refining; computer vision; SURF; RANSAC
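
The pipeline summarized in the abstract can be illustrated with a short OpenCV sketch. This is a minimal illustration rather than the authors' implementation: cv2.findEssentialMat with its built-in RANSAC stands in for the paper's normalized eight-point estimation of the fundamental and essential matrices, the intrinsic matrix K and the image file names are assumptions, and the translation scale is left as a placeholder because, as in the paper, it must be supplied from external ground data.

import numpy as np
import cv2

# Assumed camera intrinsics (KITTI-like values) and image pair; replace with
# the calibration and frames of the actual dataset.
K = np.array([[718.856, 0.0, 607.1928],
              [0.0, 718.856, 185.2157],
              [0.0, 0.0, 1.0]])
img1 = cv2.imread("frame_000000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_000001.png", cv2.IMREAD_GRAYSCALE)

# 1. SURF feature detection and description (requires the opencv-contrib build).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# 2. Match adjacent frames; keep tentative matches with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
        good.append(pair[0])
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Essential matrix with RANSAC rejecting the remaining mismatches.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)

# 4. Recover the relative rotation R and unit-norm translation t (2D-to-2D model).
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 5. The absolute translation scale is not observable with a single camera;
#    the paper recovers it from ground data, so a placeholder is used here.
scale = 1.0
print("R =\n", R, "\nt =\n", scale * t)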
Received: 2015-12-08; Revised: 2016-03-24
Funding: National Natural Science Foundation of China (61105097, 51279098, 61401270); Scientific Research and Innovation Project of the Shanghai Municipal Education Commission (13YZ081)
Biography:
SUN Zuolei (1982-), male, born in Zaozhuang, Shandong, China; associate professor, Ph.D.; research interests: mobile robot navigation and machine learning; E-mail: szl@mpig.com.cn
4 Conclusion
In this paper, the RANSAC algorithm is used to refine SURF feature-point matches so as to improve the performance of monocular visual odometry (VO). The experimental results show that rejecting mismatches with the ratio test alone leaves a relatively large error, whereas rejecting mismatches with the RANSAC algorithm, combined with the normalized linear eight-point method, effectively reduces the cumulative error of the VO.
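
The comparison drawn in this conclusion can be sketched as follows. The helpers below first apply the ratio test alone and then additionally reject matches that violate the epipolar constraint using OpenCV's RANSAC fundamental-matrix estimation; the function names and thresholds are illustrative assumptions, not the paper's exact procedure.

import numpy as np
import cv2

def ratio_test(knn_matches, ratio=0.8):
    # Keep a match only if it is clearly better than the second-best candidate.
    return [p[0] for p in knn_matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]

def ransac_refine(kp1, kp2, matches, thresh=1.0):
    # Further reject matches inconsistent with a RANSAC-estimated fundamental matrix.
    if len(matches) < 8:  # the eight-point method needs at least 8 correspondences
        return matches
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    _, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, thresh, 0.999)
    if inlier_mask is None:
        return matches
    return [m for m, keep in zip(matches, inlier_mask.ravel()) if keep]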
References:
[1] SCARAMUZZA D, FRAUNDORFER F. Visual odometry part I: the first 30 years and fundamentals[J]. IEEE Robotics & Automation Magazine, 2011, 18(4): 80-92. DOI: 10.1109/MRA.2011.943233.
[2] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: fast semi-direct monocular visual odometry[C]// Robotics and Automation (ICRA), 2014 IEEE International Conference on. Hong Kong: IEEE, 2014: 15-22. DOI: 10.1109/ICRA.2014.6906584.
[3] HANSEN P, ALISMAIL H, RANDER P, et al. Monocular visual odometry for robot localization in LNG pipes[C]// Robotics and Automation (ICRA), 2011 IEEE International Conference on. Shanghai: IEEE, 2011: 3111-3116. DOI: 10.1109/ICRA.2011.5979681.
[4] ZHENG Chi, XIANG Zhiyu, LIU Jilin. Monocular visual odometry fusing optical flow and feature point matching[J]. Journal of Zhejiang University (Engineering Science), 2014, 48(2): 279-284. DOI: 10.3785/j.issn.1008-973X.2014.02.014.
[5] FRAUNDORFER F, SCARAMUZZA D. Visual odometry part II: matching, robustness, optimization, and applications[J]. IEEE Robotics & Automation Magazine, 2012, 19(2): 78-90. DOI: 10.1109/MRA.2012.2182810.
[6] SANGINETO E. Pose and expression independent facial landmark localization using dense-SURF and the Hausdorff distance[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(3): 624-638. DOI: 10.1109/TPAMI.2012.87.
[7] WU Fuchao. Mathematical methods in computer vision[M]. Beijing: Science Press, 2008: 63-77.
[8] CHOI S, PARK J, YU W. Resolving scale ambiguity for monocular visual odometry[C]// Ubiquitous Robots and Ambient Intelligence (URAI), 2013 10th International Conference on. Jeju: IEEE, 2013: 604-608. DOI: 10.1109/URAI.2013.6677403.
[9] GEIGER A, LENZ P, STILLER C, et al. Vision meets robotics: the KITTI dataset[J]. The International Journal of Robotics Research, 2013, 32(11): 1231-1237.
[10] GEIGER A, ZIEGLER J, STILLER C. StereoScan: dense 3D reconstruction in real-time[C]// Intelligent Vehicles Symposium (IV), 2011 IEEE. Baden-Baden: IEEE, 2011: 963-968. DOI: 10.1109/IVS.2011.5940405.